Memo | 12.10.2021

Disrupting Dangerous Algorithms


Addressing the harms of persuasive technology

Editors’ Note

The following text is an expanded version of testimony that the author delivered to a hearing of the Senate Subcommittee on Communications, Media, and Broadband on December 9, 2021.

The meme that algorithmic harm results from greedy CEOs hacking our minds fails to grasp the true nature of the digital crisis roiling America. The main purpose of algorithms, like digital programs and datacenters more broadly, is not to make money or influence thoughts, but to control people—in a direct and alien way hostile to our core beliefs and principles.

True, one could object that algorithms, in a certain stronger sense, actually exist mainly so that digital devices and entities can communicate with one another. One reason digital technology is so alien to us is its indifference to our feelings and our existence alike, together with our shared sense that digital tech which operated even in part on an awareness of our presence and attitudes would be dangerous, or at least difficult to trust.

Perceptive digital entities which did communicate openly with us might still “talk behind our backs” amongst themselves. In June 2019 testimony before the U.S. Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet, Stephen Wolfram observed that “if we want to seriously use the power of computation—and AI—then inevitably there won’t be a ‘human-explainable’ story about what’s happening inside… if you can’t check what’s happening inside the AI, what about putting constraints on what the AI does? Well, to do that, you have to say what you want. What rule for balance between opposing kinds of views do you want? How much do you allow people to be unsettled by what they see? And so on.” As Norbert Wiener helps us understand in The Human Use of Human Beings: Cybernetics and Society, even if algorithms and content-selecting AIs—or whole swarms of digital entities—communicate almost exclusively with one another in an ignorance of us they cannot describe, our commands are still the inputs, and however imperfectly they are executed, the outputs must inevitably have human significance intended to impact (e.g., use) human beings.

So it is important to recognize that the digital medium is unlike prior communications technologies such as the printing press or television. Those media also reshaped minds and built fortunes. But Americans always felt at home with them. There was a comfort level and compatibility with our lifeways and our regime that’s absent regarding digital technology, which most Americans feel hopelessly unable to understand, much less master.

As I show in my new book Human, Forever, this morose incompetence is the product of a public-private partnership between unelected and unaccountable leaders across America’s major institutions and our security and intelligence state. As economists such as David P. Goldman and Mariana Mazzucato have reminded us, almost all the digital technology ordinary Americans use is the product of innovations attained through military and spy agency research and spun off into consumer entertainment and corporate cruft by the tech companies crucial to our national strategic infrastructure. These leaders, whom citizens and even elected officials are functionally unable to remove from power, have moved so much of our political and social life into their technological ecosystem that they now make and enforce fundamental decisions about what we can and must think, say, and do.

This lockstep reconfiguration of American life outside the reach of the democratic process has plunged us into a nascent social credit system. Social media’s algorithmic harm is real, but social media is, true to form, a screen that obscures the depths below, where leaders like former Amazon CEO Jeff Bezos and Eric Schmidt—formerly the CEO of Google and most recently the Chair of the National Security Commission on Artificial Intelligence—are working to re-found America as a control system built on innumerable swarms of programs and devices in a network of vast datacenters. However well-intentioned Schmidt and his fellow Commission members may be, their Final Report, submitted this year, fails to reassure us that leaders will manage to preserve our republic and our values merely by availing themselves of the risk assessments and oversight boards the Commission recommends to protect liberties, rights, and due process—especially in a political environment increasingly tyrannized by the idea that citizens taking positions on fundamental regime questions, such as whether migration should be halted or whether biological sex is real, are dangerous extremists posing existential threats to the country.

Through machine learning and artificial intelligence, digitized governance aims to automate the behavior of digital swarms and, through them, the behavior of us human beings. Innovation is palpably shifting, even within social media, away from writing algorithms and toward conducting swarms. Analyzing a leaked document from TikTok revealing the app’s inner workings, UC-San Diego computer science professor Julian McAuley recently told Ben Smith of The New York Times that TikTok’s advantage marries machine learning to “fantastic volumes of data, highly engaged users, and a setting where users are amenable to consuming algorithmically recommended content (think how few other settings have all of these characteristics!). Not some algorithmic magic.”
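
To make McAuley’s observation concrete, here is a minimal sketch of engagement-driven ranking, with hypothetical signal names and weights standing in for whatever TikTok actually computes. The logic is ordinary; the advantage lies in the volume of behavioral data behind the predicted scores.

```python
# A toy sketch, not TikTok's actual system: rank candidate videos by a
# weighted blend of model-predicted engagement signals. Signal names and
# weights are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    video_id: str
    p_like: float    # predicted probability the user likes the video
    p_finish: float  # predicted probability the user watches to the end
    p_share: float   # predicted probability the user shares it

def score(c: Candidate) -> float:
    # Hypothetical weights combining engagement predictions into one number.
    return 0.5 * c.p_like + 0.3 * c.p_finish + 0.2 * c.p_share

def rank(candidates: List[Candidate]) -> List[Candidate]:
    # Serve first whatever the model predicts will keep the user engaged.
    return sorted(candidates, key=score, reverse=True)
```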

The new control system is driven by the logic of seeing technology as better and stronger than humanity. As Mo Gawdat, an ex-Google executive on a publicity tour, recently told The Times, we humans “suck” in comparison to the new “god” our tech engineers are building. This view of our given humanity as a pathetic curse, not a precious gift, is spreading because our leaders betrayed the expectations they created. Many of us believed what they told us about how tech would bring global peace and harmony. The TV-age belief that whoever dreamed biggest and best would rightly rule the world led to shock and panic when populists used digital tech to fight technocratic globalism.

Today, our technoethical elite is religiously convinced they can discover a mathematical coding language so deterministic that they can take true control of the digital swarms, eliminating the need for politics as it has been known in the West since Aristotle. Eventually, they believe, we will fully merge with our technology and become “as gods.”

The politics of determinacy extend well beyond the elite. Debates rage over whether digital technology is somehow “neutral” or can be made so by policy. It cannot be, in two senses. Pure neutrality cannot be achieved by or through algorithms, which, as instructions to produce a certain result, are always inherently “biased” by default. “Correcting” an algorithm means giving it a new and different bias. On the level of the medium, interoperability is the form of digital technology that shapes all digital entities. While the bias of digital tech in this sense is toward interoperability, the human bias is toward incommensurability. We may enjoy temporarily joining the crowd, the mass, or even the mob, but the feeling passes and we return to abide in the unique and particular personhood of our self. Digital entities do not share this dynamic. Unlike us, they are biased toward the collective identity of the swarm, an identity incompatible with our human one. The quest for the Holy Grail of perfect determinacy depends on the faith that mathematics itself is neutral and unbiased in the sense of ultimately being perfectly legible and comprehensible—without secrets—to rational human minds. Quantum physics and millennia of Western theology agree that the truth is more complicated. Devotion to mathematics as the perfect language of true explanation is necessarily “biased against” mysteries, against the need for or permanence of mystery, and against the idea that the primal condition of reality involves phenomena inaccessible to human logic.
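
A minimal sketch of the point about bias, using a hypothetical feed ranker rather than any real platform’s code: the “corrected” version is not neutral, it simply substitutes one preference for another.

```python
# Two versions of a hypothetical feed-ranking rule. Neither is "neutral":
# each is an instruction to prefer certain outcomes over others.

def rank_for_engagement(posts):
    # Bias: surface whatever is predicted to hold attention longest.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def rank_corrected(posts):
    # The "corrected" algorithm swaps in a different bias: recency first,
    # then source diversity, instead of predicted engagement.
    return sorted(posts, key=lambda p: (p["hours_old"], -p["source_diversity"]))
```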

Rather than trying to upload our consciousness to the cloud, our new cyborg theocrats have begun by uploading our conscience. Many Americans now think the culture war must be fought and won through a digital regime that rewards the ethically pure and obedient and crushes the opposition online and off. This is why the Covid crisis has morphed so quickly into a pitched battle over who gets to act as judge, jury, and executioner when it comes to defining, preventing, and punishing harm.

Now, the people’s elected representatives face a fateful choice: restore citizen controls of technology or surrender to the cyborg theocracy. Americans need Congress to intervene against the emergent social credit system. Trust in digital competence on the Hill can be built with bipartisan steps protecting children from the worst online harms. Legislators should be prepared to discover that algorithms and human users often share joint responsibility for what emerges over time as accumulated harm. Writing about Instagram’s algorithms in The Atlantic, Jonathan Haidt observes that “the toxicity comes from the very nature of a platform that girls use to post photographs of themselves and await the public judgments of others,” and specifically, we should recognize, the judgments of other girls posting photographs of their own. Social media is a hotbed of mimesis, the reflexive behavior of imitating one’s real and imagined rivals that social theorists from Rousseau to Girard have recognized as fundamental to our human identity. The strongest and most prudent legislative response to the harm people suffer from algorithmically reinforced habits would be to legally protect and defend Americans’ fruitful use of digital technologies, such as Bitcoin, which do not inflict algorithmic or mimetic harm in the manner of social media platforms because they are not social media platforms.

Lawmakers can move to protect Americans’ free association and expression by favoring policies like algorithmic choice over calls to legislatively overturn Force v. Facebook, which would entail cosmic federal choices about the metaphysics of harm that amount to the establishment of a religion. The House Energy and Commerce Committee, for instance, recently discussed the “Justice Against Malicious Algorithms Act,” a bill that would amend Section 230 by allowing users to sue platform companies for inflicting “severe emotional injury,” but which did not define emotional injury. The “Protecting Americans from Dangerous Algorithms” bill introduced last session would, as Cato Institute policy analyst Will Duffield noted last year at Techdirt, “have grave consequences for legitimate speech and organization… an omnipresent corrective authority”—spiritual and temporal together, like Hobbes’ Leviathan—“would foreclose the sense of privileged access necessary to the development of a self.”
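
One way to picture “algorithmic choice” as a policy, sketched with hypothetical ranking options rather than any bill’s actual text: the platform exposes its orderings, and the user, not the company, decides which one governs the feed.

```python
# A hypothetical sketch of "algorithmic choice": the platform exposes its
# ranking options, and the user's stored preference, not a platform default,
# decides which one orders the feed.

RANKERS = {
    "chronological": lambda posts: sorted(posts, key=lambda p: p["timestamp"], reverse=True),
    "engagement": lambda posts: sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True),
    "friends_only": lambda posts: [p for p in posts if p["from_friend"]],
}

def build_feed(posts, user_preference):
    # Fall back to a simple chronological ordering if no preference is set.
    ranker = RANKERS.get(user_preference, RANKERS["chronological"])
    return ranker(posts)
```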

But unless ordinary Americans regain a hands-on mastery of our most powerful digital tools, we will become compliant posthumans or ungovernable psychotics, sacrificing what is left of our civilization and nation to vengeful new gods. Congress can save our humanity, our country, and our form of government from digital harm by passing what I and others compare to a Second Amendment for Compute. As I argued recently in The New York Times, legislation should enshrine Americans’ rights to buy and use high-powered GPUs and to mine, hold, and use Bitcoin. This tech puts computation into human service, building apps and institutions where users create and exchange valuable, memorable works of culture.

Bitcoin is deeply resonant with American civilization. In no other country, and especially no other leading country, has interest and activity in Bitcoin been so immediate, sustained, and powerful. Some countries, including China, have cracked down on Bitcoin or banned it outright. Given that digital technology’s world dominance makes us reconsider venerable theological matters by causing us to question our identity and purpose, it seems important that the Bitcoin blockchain relies for the legitimacy of its architecture and operations on the deeply Protestant concept of Proof of Work. To activate the consensus that allows new blocks of information to be added on chain, Bitcoin miners must compete to solve a math problem. The achievement satisfying Proof of Work is not to have “cracked the code” but to have evinced the input of the most computational labor. Allegations that Bitcoin is therefore energy-intensive enough to represent an unjust harm to the natural environment fail on several fronts, including the relative energy consumed over a given period by China or the petrodollar, but especially on the ultimately theological basis of the idea of fair play through competitive labor that is a cornerstone of American civilization. Of course, many of those who insist we must leave theology behind in assessing the value of political or economic measures still retain, due to their own theological inheritance, an idea that fair play through the labor of competitive reasoning is a thoroughly secular standard of justice. Whether inflected in this more secular or the more theological key, the inner logic and structure of Bitcoin is at home in America, where the common sense is still that the unceasing labor of building and maintaining culture is the price of flourishing in freedom.
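
The Proof of Work mechanic described above can be sketched in a few lines. This is a simplification (actual Bitcoin mining double-hashes block headers against a network-adjusted difficulty target), but it captures the point: the miner has no shortcut, only the labor of repeated guessing, while verification is cheap for everyone else.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Simplified Proof of Work: find a nonce such that the hash of the block
    data plus the nonce starts with `difficulty` zero hex digits. There is no
    code to 'crack'; the only way through is repeated computational labor."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # evidence that the work was actually done
        nonce += 1

# Verification is cheap by comparison: anyone can re-hash once and check.
nonce = mine("block of transactions", difficulty=4)
print(nonce, hashlib.sha256(f"block of transactions{nonce}".encode()).hexdigest())
```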

To model this approach, I published my book, Human, Forever, onto the blockchain via the Bitcoin-based platform Canonic.xyz. Notably, the hardware associated with mining and building on Bitcoin gives users the ability to freely generate algorithmic markets, which guide people within a technological ecosystem based on voluntary agency and not, as in a social credit system, mandatory compliance. Americans have the ability right now to restore their practical use of technology to defend and protect all they hold sacred from the maw of the social credit borg. By recognizing the free exercise of that ability as a fundamental right of the digital age, lawmakers can save Congress—and America—from technological oblivion.

