
Deconstructing the Machine Myth: Why Big Tech’s Abuse of Artificial Intelligence Doesn’t Need to Be Inevitable


The current technological epoch is defined by the pervasive, accelerating presence of Artificial Intelligence, yet this ascent is frequently framed by a fatalistic narrative: that the negative consequences—the surveillance, the algorithmic bias, the market consolidation—are merely the unavoidable byproducts of progress. This perspective, which casts technology as an inexorable force, serves a political function, absolving creators and deployers of moral responsibility. However, a detailed deconstruction of AI’s mechanisms reveals that this inevitability is a carefully cultivated myth. The true locus of conflict is not in the silicon, but in the human decisions that predetermine the machine’s objectives. To reclaim the future shaped by Big Tech’s most powerful creations, one must shift the focus from resisting the machine to governing the people who command it.

Deconstructing the Machine Myth: The True Locus of Conflict

Technology as Manifestation of Human Will, Not Inexorable Force

A crucial counter-narrative, central to refuting the deterministic view of technological progress, asserts the fundamental principle that technology itself is not a force of nature operating independently of human will. This perspective firmly rejects the notion of technological inevitability. Every line of code, every architectural decision, and every deployed system is the product of deliberate human choice, reflecting the values, priorities, and power structures of its creators and deployers. Artificial intelligence, in this light, is not a sentient entity emerging from a vacuum; it is a sophisticated artifact whose purpose is entirely contingent upon the intentions programmed into it by its human architects. The question, therefore, shifts from “What will the machine do?” to “What are the people making the machine do?” This reality underscores that the relationship between humans and their creations is one of ongoing authorship, not passive submission to an external fate.

The evidence of human authorship is present in the very structure of contemporary AI. The massive private investment flooding the sector—U.S. private AI investment reaching $109.1 billion in 2024, for instance—is a human financial decision, not a physical law. Similarly, the adoption rate, where 88 percent of organizations reported regular AI use as of late 2025, is a result of strategic corporate prioritization, not technological magnetism. The industry’s own narratives, which often emphasize the revolutionary nature of the tools, function to obscure the concrete, politically motivated choices behind the technology’s deployment.

The Mechanics of Optimization: AI as a Highly Specialized Goal-Seeking Engine

To truly grasp the contemporary artificial intelligence landscape, one must look beyond the superficial complexity and understand its operational core. At its essence, artificial intelligence, particularly in its deployed forms, functions as a system of automated decision-making rooted in the principle of optimization. This is not intelligence in the human sense of wisdom or understanding; it is the relentless, systematic drive to make a single, measurable objective—a “reward”—as large as possible within defined constraints. A system is given a clear, quantifiable target: maximize the number of times a user clicks an advertisement, minimize the time a package spends in a warehouse, or maximize the accuracy with which it predicts whether a defendant will skip bail. The machine’s entire operational capacity is marshaled to achieve this single, pre-selected metric. This mechanical reality demands that we focus on the input that guides this entire process: the reward function.
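To see how unglamorous this machinery is, consider a minimal sketch in Python of an engagement-maximizing recommender. Every name and number here is invented for illustration; the point is the shape of the loop: a learned scoring function and a selection rule that picks whatever scores highest.

```python
# Minimal sketch of a deployed optimization engine: score candidate
# actions against ONE scalar objective and take the top scorer.
# All names and numbers are illustrative, not any vendor's real API.

def score_clicks(ad: dict, user: dict) -> float:
    """Stand-in for a learned model: predicted probability of a click."""
    boost = 1.5 if ad["topic"] in user["interests"] else 1.0
    return ad["base_rate"] * boost

def choose_action(candidates: list[dict], user: dict) -> dict:
    # The system's entire capacity is marshaled toward this single,
    # pre-selected metric. Nothing here asks whether the winning ad is
    # truthful, healthy, or fair -- only whether it scores highest.
    return max(candidates, key=lambda ad: score_clicks(ad, user))

ads = [
    {"name": "outrage_bait", "topic": "politics", "base_rate": 0.09},
    {"name": "helpful_guide", "topic": "cooking", "base_rate": 0.04},
]
user = {"interests": {"politics"}}
print(choose_action(ads, user)["name"])  # -> outrage_bait
```

Nothing in that loop can ask whether the winning item serves the user; that question exists only for whoever writes, or is empowered to rewrite, the scoring function.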

This optimization mindset is evidenced by corporate priorities. While leaders seek efficiency—80 percent of surveyed organizations set efficiency as an AI objective—organizations derive the most value when growth or innovation is also an objective. This suggests a hierarchy of programmed goals in which immediate, quantifiable business metrics often take precedence over less easily quantified societal benefits. The technical sophistication of generative AI models, which in 2025 are seeing breakthroughs in reasoning and agentic capabilities, only makes the *purity* of this optimization more dangerous, as the means to achieve the programmed goal become vastly more effective and pervasive.

The Central Question: Who Authorizes the Algorithmic Objective?

The preceding mechanical understanding leads directly to the most significant political and ethical question concerning the current state of artificial intelligence deployment: who possesses the authority to designate the specific metric that the world’s most powerful computational resources will tirelessly pursue? In the context of a global capitalist structure, the answer is often depressingly clear. The control over the means of prediction—the vast proprietary datasets, the immense computational infrastructure, the specialized expertise, and the colossal energy requirements necessary to train and deploy cutting-edge models—rests overwhelmingly in the hands of a concentrated capital elite. Consequently, the objectives optimized by these systems naturally align with the economic interests of those who own and control these means of production, prioritizing shareholder value or platform engagement over broader societal well-being.

This concentration of authority is increasingly a focus of global governance. The European Union’s AI Act, which began its rollout in 2024 and becomes fully applicable on August 2, 2026, imposes strictures on high-risk AI, aiming for greater accountability and transparency. The political struggle over objective control continues in the United States, where state-level regulation gained traction in 2025 after federal efforts to preempt it failed, leaving states free to impose their own restrictions on major firms like Meta, Alphabet, Amazon, and Microsoft. The very laws being enacted, such as the EU’s prohibition on systems that predict an individual’s likelihood of committing a crime, are direct attempts to override, through democratic consensus, an objective (crime prediction and prevention) that the state had previously authorized unilaterally and that is now deemed too harmful to stand.

The Societal Battleground: Real-World Consequences of Misaligned Goals

This conflict of interests manifests across nearly every domain where these optimization engines are unleashed, leading to measurable, often damaging, real-world outcomes. Consider the digital sphere: algorithms governing information access on major platforms are finely tuned to maximize ephemeral metrics like user engagement or ad impressions, often at the expense of truth, nuance, or mental health. Even as AI literacy initiatives surge to help the public understand the technology, the underlying corporate objective remains largely untouched.

In the labor market, AI manages the gig economy, optimizing driver routes or warehouse worker efficiency to extract the maximum possible output for the platform owner, treating human time and exertion as a variable cost to be minimized. This automation is already reshaping roles, with entry-level corporate positions declining by 15% as AI absorbs more of the tasks those roles once required. Furthermore, the rising wage premium for AI skills—reaching 56% in 2025—suggests that the economic structure rewards alignment with Big Tech’s priorities, not human value more broadly.

More profoundly consequential applications appear in the public sector, where predictive policing systems, optimized to identify future criminality at high rates, can disproportionately target already marginalized communities based on biased historical data, effectively penalizing individuals for crimes they have yet to commit. Mounting evidence indicates these systems worsen the unequal treatment of people of color, prompting U.S. Senators to call for a halt to DOJ funding for such systems until audits are complete. The data fed into these algorithms, drawn from historical arrest statistics that reflect over-policing, creates a self-fulfilling prophecy that reinforces cycles of racial bias and over-surveillance.
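The self-fulfilling prophecy is easy to reproduce in a toy model. The sketch below assumes, purely for illustration, two districts with identical true offense rates, a historical arrest record skewed by past over-policing, and a “hot spot” policy that sends patrols wherever recorded arrests are highest.

```python
import random

# Toy model of the feedback loop described above. Assumptions (invented
# for illustration): both districts offend at the SAME true rate, but
# district "A" starts with more recorded arrests due to historical
# over-policing, and new arrests can only occur where patrols are sent.

random.seed(0)
TRUE_OFFENSE_RATE = 0.1            # identical in both districts
arrests = {"A": 60, "B": 40}       # the biased historical record
PATROLS_PER_YEAR = 100

for year in range(10):
    hot_spot = max(arrests, key=arrests.get)       # allocate by past arrests
    new_arrests = sum(random.random() < TRUE_OFFENSE_RATE
                      for _ in range(PATROLS_PER_YEAR))
    arrests[hot_spot] += new_arrests               # only patrolled crime is recorded

print(arrests)  # district A's count grows every year while B's never moves,
                # even though both districts offend at the same rate by construction
```

The gap widens indefinitely not because the code is buggy, but because the objective, patrolling where arrests were previously recorded, was the wrong thing to optimize in the first place.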

The most severe manifestation, as evidenced in certain conflict zones, involves the deployment of AI in warfare to select targets, where the objective function may tragically prioritize strategic outcomes over the minimization of civilian collateral damage. In 2024, military use of AI decision support systems (DSS) drew intense scrutiny, with accusations that such systems were relied upon too heavily for lethal strikes with insufficient human oversight. The arrival of commercial AI models in conflict theaters, which Big Tech firms began actively enabling in 2024 after lifting self-imposed bans, highlights a devastating gap in international law, where the AI’s programmed goal may clash directly with the IHL principle of proportionality. The very nature of these systems, from the Iron Dome’s autonomous interception to targeted strikes, shows the objective (military advantage) trumping the social mandate (protection of civilians) when authority is centralized.

Revisiting the Discourse: Critiques Beyond the Optimization Error

Acknowledging the Trailblazers in Identifying Algorithmic Harm

It is important to recognize that the contemporary concern surrounding AI abuse is not new; rather, it is built upon years of foundational critique from dedicated scholars and activists who have systematically documented the technology’s discriminatory effects. Pioneering work by computer scientists, such as that exposing the inherent racial inaccuracies in early facial recognition technologies, provided concrete evidence of biased outputs stemming from skewed training sets. Sociologists have rigorously mapped how these automated systems, when integrated into institutions like the justice system or educational gatekeeping, serve to replicate, amplify, and effectively launder existing social inequalities under a veneer of objective computation. Furthermore, warnings have been issued by researchers concerning large language models acting as sophisticated echo chambers—stochastic parrots—that merely regurgitate and normalize the biases embedded within the massive troves of internet text upon which they are trained. These critiques collectively highlight the mechanism of harm, yet a unifying political diagnosis remains essential.

The Fallacy of the Purely Technical Fix: Moving Past ‘Optimization Errors’

The temptation within the technical and philosophical communities is often to treat every instance of AI-induced harm as a solvable optimization error. This viewpoint suggests that if only the programmers could refine the objective function—if they could just code “fairness” or “safety” perfectly enough—the system would operate benignly. While technical refinement is necessary for robustness, this focus becomes a profound distraction from the root cause. It implies that a perfectly optimized system designed to maximize the profit of a monopoly, even if it operates without any programming bugs, is inherently “safe” or “ethical.” This perspective neatly excuses the powerful actors who chose the profit-maximizing, socially detrimental objective in the first place. It treats the negative outcome as a mathematical anomaly rather than the intended consequence of a chosen, biased goal.
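A small sketch makes this concrete. Suppose the proposed fix is to append a fairness penalty to a profit objective, a common alignment-style remedy; the functions and numbers below are invented stand-ins, but notice where the human choice survives: in the weight that prices fairness against profit.

```python
# Sketch of the "purely technical fix": bolt a fairness penalty onto the
# objective and re-optimize. The profit/disparity curves are invented.
# The key parameter is LAMBDA: the price of fairness relative to profit
# is still a number chosen by whoever commands the system, and
# LAMBDA = 0 is perfectly bug-free code pursuing a harmful goal.

def profit(threshold: float) -> float:
    """Stand-in: looser approval thresholds mean more loans, more profit."""
    return 1.0 - threshold

def disparity(threshold: float) -> float:
    """Stand-in: looser thresholds also widen the gap between groups."""
    return (1.0 - threshold) ** 2

def best_threshold(LAMBDA: float) -> float:
    candidates = [i / 100 for i in range(101)]
    return max(candidates, key=lambda t: profit(t) - LAMBDA * disparity(t))

print(best_threshold(LAMBDA=0.0))  # 0.0 -- pure profit, maximum disparity
print(best_threshold(LAMBDA=5.0))  # 0.9 -- "fairer", but who chose 5.0?
```

No amount of refinement to `disparity` answers the prior question of who set `LAMBDA`, and on whose behalf.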

This limitation is being recognized as governance matures. Even in highly regulated environments, the balance between technical alignment and governance is debated. The EU AI Act, for instance, banned certain predictive policing systems as of February 2025 because their core objective—predicting crime based on profiling—was deemed inherently harmful, regardless of technical accuracy. This legal move correctly diagnoses the problem as one of objective authorization, not merely a faulty calculation. Research in 2025 has begun to trace how ethical issues develop within development teams, supporting reflection and action before those issues harden into intractable technical problems.

The True Central Issue: Conflicts of Interest Over Objective Control

The author contends that the core problem underpinning most contemporary AI harms is not a bug, but a feature of our current socio-economic arrangement: conflicts of interest over the control of AI objectives. When an algorithm designed by a corporation harms the public, it is because the corporation’s objective—say, maximizing shareholder return—is fundamentally at odds with the public’s objective—say, equitable access to resources or protection from surveillance. This frames the AI debate correctly: it is a political and economic struggle over governance, not a purely technical problem of code alignment. Until we address the power imbalance that allows unelected, private interests to unilaterally define the goals of the world’s most powerful predictive tools, addressing the symptoms through ethics panels or technical audits will remain insufficient.

The distinction between corporate objectives (efficiency, growth, shareholder return) and societal objectives (equity, truth, human dignity) is the fundamental schism. In military applications, the objective of “strategic outcome” or “minimizing risk to one’s own forces” is a programmed goal that directly conflicts with the civilian imperative to minimize collateral damage. In the justice system, the objective of “efficient resource allocation” through predictive policing directly conflicts with the public’s right to non-discriminatory treatment. The technology is merely the perfect, unbiased instrument for executing a *biased goal*. The solution, therefore, must be political, targeting the authorization of the goal itself.

The Path Forward: Democratizing the Technology That Shapes Our Lives

The Imperative of Public Sovereignty Over Algorithmic Targets

If the control of objectives is the central struggle, then the only viable remedy against the abuse of artificial intelligence by entrenched powers is the forceful establishment of public control over those objectives. This necessitates a political, not purely technological, intervention. The societal consensus must democratically determine the goals that these powerful optimization engines are permitted to pursue, ensuring that the design incentives align with broad public welfare, ecological sustainability, and individual rights, rather than narrow concentrations of wealth or power. This shift demands a fundamental re-evaluation of who holds the veto power over the societal deployment of consequential algorithmic systems.

The trend toward governance in 2025 reflects this nascent shift toward setting mandates. The Paris AI Action Summit, for example, placed human-centric AI and ethical considerations at its core, with leaders stressing that AI must “serve humanity”. Public sovereignty is the enactment of this principle: moving from *allowing* private entities to self-regulate their goals to *requiring* that those goals align with a public mandate. This demands mechanisms through which community input or legislative fiat can establish non-negotiable constraints on optimization functions, effectively banning objectives that prioritize profit or power over fundamental rights.
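One way to picture such a mechanism is an authorization layer that rejects banned objectives before any optimizer is allowed to run. The sketch below is entirely hypothetical, registry contents included; it illustrates the governance idea, not any existing compliance API.

```python
from dataclasses import dataclass

# Hypothetical sketch: a publicly mandated deny-list of optimization
# targets, checked BEFORE deployment. The class and registry are invented
# to illustrate "non-negotiable constraints", not a real regulatory API.

PUBLICLY_BANNED_TARGETS = {
    "predicted_criminality",      # cf. the EU AI Act prohibition above
    "minor_engagement_time",
    "biometric_categorisation",
}

@dataclass
class Objective:
    name: str
    maximized_quantity: str
    authorized_by: str            # a public mandate, not a product team

def authorize(obj: Objective) -> Objective:
    if obj.maximized_quantity in PUBLICLY_BANNED_TARGETS:
        raise PermissionError(f"objective '{obj.name}' pursues a banned target")
    if obj.authorized_by != "public_mandate":
        raise PermissionError(f"objective '{obj.name}' lacks public authorization")
    return obj

try:  # this deployment is refused before a single gradient step is taken
    authorize(Objective("hotspot_policing", "predicted_criminality", "vendor"))
except PermissionError as err:
    print(err)
```

The code is trivial by design: the hard part is not the `if` statement but the political question of who writes the deny-list.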

A Multi-Layered Framework for Democratic Governance

The concept of “democratic control” must be understood as operating across multiple scales, recognizing that AI permeates all levels of modern organization. This is not solely about the actions of national legislatures or international treaties, though these are vital for setting broad legal boundaries and defining core rights. True democratic control must also be fought for and implemented at the level of the enterprise, where workers must have a meaningful voice in how AI is used in their management and tasks. Furthermore, it must be realized at the level of the platform, where user communities should have a stake in how content is prioritized and filtered. This insistence on broad, participatory governance ensures that the technology serves a pluralistic society, not a monolithic economic interest.

The legislative landscape is already addressing the high-risk tiers. The EU AI Act imposes different obligations on General Purpose AI (GPAI) models based on the compute used to train them, recognizing their systemic risk and the need for specialized oversight. However, this must be supplemented by granular, participatory governance. In an enterprise setting, for example, employees, who are often more aware of the technology’s potential than leaders assume, need formal channels to challenge the optimization objectives of internal systems, such as workforce management or knowledge tools, so that human-centric work environments are prioritized over pure efficiency metrics.

Demolishing the Gatekeeping of Technical Complexity

A significant obstacle to achieving this democratic vision is the narrative, actively propagated by the industry itself, that artificial intelligence is an inherently arcane and impossibly complex domain, reserved only for a specialized priesthood of engineers and data scientists. This mystification serves a political purpose: to disqualify the general public from meaningful deliberation or governance over the technology. However, the argument must be made emphatically that the foundational concepts underlying contemporary AI—optimization, data dependency, and algorithmic feedback loops—are not esoteric secrets beyond the grasp of an informed citizen. Understanding how a reward function works, and the social implications of maximizing it, does not require fluency in advanced calculus; it requires civic literacy and a willingness to challenge the dominant, self-serving technical explanations.

The technical reality, as revealed in research, is that while the inner workings of large models are complex, their high-level operational logic is transparently goal-oriented. That AI systems rely on massive, potentially biased input data and iterative processing to learn patterns is readily graspable by any citizen engaged in critical thought. The industry’s reliance on this complexity obfuscates a simple political reality: a highly complex tool dedicated to a simple, profit-maximizing goal is still a tool whose purpose must be publicly vetted. As the public increasingly feels “fooled” by AI, the time is right to push back against this technical mystification.

Empowering the Citizen: The Necessity of Accessible AI Literacy

To successfully enact democratic control, a broad and robust campaign of AI literacy must be undertaken across the populace. This education must strip away the jargon and demystify the inner workings, revealing the human choices embedded in the seemingly objective output. When the general public understands that an algorithm is simply pursuing a pre-selected, often narrowly defined, metric, they are empowered to demand a different metric. This process of demystification is crucial for building the political will necessary to challenge the concentration of power and to insist that the tools used to shape the future—from resource allocation to information flow—are governed by a public mandate, not by the proprietary dictates of an opaque, powerful few.
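The demand for a different metric can itself be stated in a few lines. In the toy allocator below (data and field names invented for illustration), the criticized system and the demanded one share all of their machinery; the only difference is which quantity is maximized.

```python
# The entire political dispute, compressed: the same allocation machinery
# can optimize "arrest density" or "verified harm reduction". The data
# and field names below are invented for illustration.

districts = [
    {"name": "A", "historical_arrests": 60, "verified_harms": 12},
    {"name": "B", "historical_arrests": 40, "verified_harms": 30},
]

def allocate(districts: list[dict], metric: str) -> str:
    """Send resources to the district scoring highest on `metric`."""
    return max(districts, key=lambda d: d[metric])["name"]

print(allocate(districts, "historical_arrests"))  # -> "A": the status quo objective
print(allocate(districts, "verified_harms"))      # -> "B": the objective an
                                                  #    AI-literate public can demand
```

An AI-literate citizen does not need to audit a model’s weights to contest that choice; they need only see that the metric was a choice.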

In 2025, AI literacy is deemed as essential as digital literacy was two decades prior, influencing education, work, and society. Initiatives are surging across educational institutions and non-profits to teach users how to identify AI-generated content and recognize model limitations. This is not merely about using the technology better; it is about asserting citizenship over it. When a citizen comprehends that a predictive policing tool is optimizing for *arrest density* based on historical data, they possess the foundation to demand an objective function based on *verified harm reduction* or *community well-being*, as articulated by international ethical frameworks. This re-engagement with the technology as a subject of public negotiation, rather than a foregone conclusion, is the only viable strategy to ensure that artificial intelligence evolves to serve humanity’s diverse interests rather than cementing the dominance of Big Tech’s narrow priorities.
