AI and the Return of Creative Elitism
ChatGPT will replace writers who shouldn’t be.
There are many AI-related fears. It will send cars off cliffs. It will increase discrimination. It will lead to cultural insensitivity. The Japanese, for example, would prefer the trolley run over the child instead of the old person. Global corporations, NGOs, and administrative agencies are hijacking these fears in order to initiate regulatory capture and win a monopoly over Large Language Models like the one behind ChatGPT. They want to make you believe that they, and only they, can wrangle AGI before it turns us all into racist paperclips.
But perhaps the oldest and most persistent form of AI doomerism is “machines will take our jobs.” The current WGA strike, for example, is in part aimed at the AI threat. Writers want studios to promise not to use ChatGPT for the bread-and-butter treatments, pitches, and drafts on which they make a living. They don’t want to be disrupted out of existence like the steel worker and the cab driver, and they’re right to worry. Studios are globalist corporations, after all, of the same ilk that showed themselves happy to delete towns, cities, and even entire states under globalization. A certain species of Davos goon does not care at all about job loss when it’s happening to unpeople in the rube states. The Rust Belt looks like Syria for a reason.
However, where the previous labor nuke decimated the white working class in flyover states, this one will explode closer to the power center of Corporate America. Creative AIs like ChatGPT most threaten one of the Regime’s most powerful assets: the managerial class.
Industry used to make widgets, or ads about widgets, or widget-poems—the products were the point. In today’s workplace, achieving consensus is more important than output. You might say that the workplace is now the primary product of the workplace; the widgets just keep the lights on.
This phenomenon has many names. Corporate America. Cubicle Culture. James Burnham called it the Managerial Revolution. I prefer The Longhouse.
Twitter anon Lomez broke the intellectual internet with his “What is the Longhouse?” piece in First Things, in which he described the HR-driven cult of safetyism that governs modern work. Unlike the male-dominated offices of the past, whose guiding principle was “the work is king,” the Longhouse focuses 95 percent of its energies on itself. Cohesion is queen. Cohesion thrives on parroting and replication. And AI is replication.
The people who should feel threatened by AI are thus not only writers, but managers. Particularly managers LARPing as writers.
Camel Frankenstein
Probably as a consequence of being raised on toxic Boomerisms such as “be yourself” and “think different,” Millennials yearn for creative roles. They are encouraged to be creative in whatever position they find themselves. “Secretary” has become a swear word. Most job descriptions, no matter how banal the role, talk up the amazing levels of creativity the job entails.
Urban Outfitters sells hats that read “Creative Director.” Even salespeople call themselves creative consultants or strategists or VPs of creative just because it sounds better. The token girlboss, the DIB Apparatchik, the professional email sender, the adult daycare overseer, the TikTok project manager who checks boxes from the pool—all of them have been tricked into believing their job is to be creative.
But when everyone is creative, no one is creative. Let me give you two examples from my personal experience as a copywriter.
Pre-COVID, a major agency invited no fewer than eight sub-agencies up to San Francisco for a “creative brainstorm.” A large banking client needed us to break the Big Idea for its holiday giving campaign. Each agency had a different purported specialty: experiential, social, publicity, media, CSR/ESG, “multicultural” (meaning black), Woman-Owned™, Spanish-language. Around 40 marketing people—not just creatives but accounts, PMs, biz dev, strategists, everyone—spent the day in a high-end hotel conference room sucking down Peet’s coffee and Mendocino Farms.
We scattered into breakout groups and covered the walls in Post-its. An archetypal tall, graying, hipster-glasses Hype Dad executive wrangled us. We all worked hard to gain his favor, hoping ours would be chosen as one of the three Big Ideas he would pitch the client. He gave direct and incisive feedback on each idea, highlighting this and gently dismissing that. A professional with decent taste, he clearly deserved his position. He drove the team toward his vision, which was his job.
But just as we homed in on three concepts, he did something unexpected. He stopped the program, passed out sharpies, and instructed everyone in the room to vote via ranked choice for their favorite concepts. The ones with the most votes would be sent to the client. Creativity by democracy: every voice heard.
COVID only made it worse. Recently, on a freelance gig for another top-of-market agency, I was assigned to write taglines for a corporate client in the education sector. To properly present taglines, you gussy them up and place them under a shroud, which you rip off at the perfect moment; otherwise they fall flat. I usually write lead-up “manifestos,” which I read aloud, then I flip to the big tagline reveal on the next slide. This is how you “sell in” the line: your vision for the brand and what it all means.
My lead creative strategist in this instance suggested a different tack—one I’d never heard of before. She instructed me to build an Excel spreadsheet, write 10-20 taglines on it, make an individual feedback box for every member of the agency and client teams, and tag each team member one by one with a note requesting their feedback on each line. Then I was to modify each tagline based on the feedback of each team member: 10-15 people total, most of whom were not creatives.
No creative vision of any kind can survive such a process. A camel is a horse made by committee, but at least a camel is a living, breathing creature. This process creates Frankenstein monsters stitched together from dead parts and shocked alive by the occult power of “inclusion.” But again, the Longhouse’s objective isn’t ultimately to make good things. It’s to make an environment where everyone feels like they made something.
Lossy Google
ChatGPT is also a sort of Frankenstein, and each of its responses is a terrible little monster. In his fantastic New Yorker piece on how ChatGPT works, Ted Chiang describes it as a sort of undead replica of the entire internet.
“Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it.”
He’s saying, in effect, that ChatGPT is “lossy” Google. Generation loss (aka “lossiness”) occurs when we compress files with a lossy algorithm, then copy and re-save the copies. “Lossless” algorithms create exact replicas, while lossy algorithms discard detail and use “interpolation” to guess at the similar parts of the original—“estimating what’s missing by looking at what’s on either side of the gap.”
That’s why photocopies, for instance, never look quite the same as the original. And when you copy a copy, then copy that copy, over and over again, you arrive at the eerie Deep Fried aesthetic that’s become popular in art circles today. The storage shortcuts create flaws, and the flaws begin to overtake the image, creating what appears to be a sort of original style. In a way, we’re seeing the “language” of the computer itself, a visualization of how it digests and communicates information.
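The copy-of-a-copy degradation described above is easy to simulate. The sketch below (my own illustration, not from Chiang's piece) stands in for JPEG's quantization step with simple rounding: one lossy save discards fine detail, and each re-save after a small edit quantizes on a differently aligned grid, so the drift compounds—the photocopy-of-a-photocopy effect.

```python
# Toy simulation of generation loss: a lossy "save" quantizes the signal,
# and each re-save on a slightly different grid throws away fresh detail,
# like photocopying a photocopy.

def lossy_save(signal, step=0.01):
    """Round every sample to the nearest multiple of `step`; fine detail is gone."""
    return [round(x / step) * step for x in signal]

def edit_and_resave(signal, n, step=0.01):
    """n cycles of: tiny edit (rescale) -> lossy save -> undo the edit.

    Each save quantizes on a differently aligned grid, so the copy
    drifts further from the original instead of stabilizing."""
    for i in range(1, n + 1):
        scale = 1 + i / 100                    # a slightly different "edit" each time
        signal = [x * scale for x in signal]   # the edit
        signal = lossy_save(signal, step)      # the lossy save
        signal = [x / scale for x in signal]   # undo the edit so copies are comparable
    return signal

original = [0.1234, 0.5678, 0.9876]
for n in (1, 5, 25):
    copy = edit_and_resave(original, n)
    drift = max(abs(a - b) for a, b in zip(original, copy))
    print(f"after {n:2d} generations, max drift = {drift:.4f}")
```

Note that a second save with no intervening edit changes nothing—quantization is idempotent—which is why the Deep Fried look only emerges once copies are edited, reposted, and re-compressed over and over.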
Large Language Models like ChatGPT work in a similar fashion. They essentially “compress,” lossily, all the writing on the entire Internet. AI “training” means using “virtually unlimited computational power” to “identify extraordinarily nuanced statistical regularities”—e.g., when the word “Nietzsche” appears, the phrase “misinterpreted by the Nazis” often appears in the subsequent paragraphs. When you prompt it, ChatGPT responds with a collage of these probabilities, which appears intelligent.
What we perceive as the intelligence of ChatGPT is in reality its inability to perfectly memorize all the data on the Internet. What you’re seeing isn’t writing, it’s word gunk—the median of hundreds or thousands of crappy blog posts related to your prompt. Think of an extremely low-quality mp3: you’re hearing music, and in a way, it sounds like something new simply because it’s such a warped version of the original. That’s effectively what ChatGPT does with Google. If the replica weren’t low-res, it wouldn’t feel intelligent.
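The “most probable collage” idea can be made concrete with a toy sketch of my own (a real LLM is incomparably more sophisticated): count which word most often follows each word in a tiny corpus, then generate by always emitting the most frequent continuation. The output gravitates straight to the most common—i.e., the most median—phrasing in the training text.

```python
# Toy "language model": a bigram table built by counting word pairs.
# Greedily choosing the most frequent continuation reproduces the
# most common phrasing in the corpus -- frequency, not understanding.
from collections import Counter, defaultdict

corpus = (
    "the idea is good . the idea is fine . the idea is good . "
    "the plan is bold ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(word, n=4):
    """Greedily emit the single most probable next word, n times."""
    out = [word]
    for _ in range(n):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the idea is good ."
```

Always taking `most_common(1)` is a caricature of greedy decoding, but the point survives scaling up: the machinery finds the statistically likeliest continuation, which is by construction the most common one.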
This is why ChatGPT’s writing is full of hallucinations: specific facts and references that look real but aren’t. For example, if you ask ChatGPT to write you a peer-reviewed medical article, it will provide a list of real-looking footnotes to publications that don’t exist. It takes all the references it’s been trained on, then scrambles them to look original. Such hallucinatory references can actually be dangerous, which is why Big Pharma has banned ChatGPT from its operations.
Snowflake Fall
Unlike a warped mp3, however, ChatGPT’s writing is necessarily mediocre. The most “probabilistic” order of words is by definition the most common. There’s a universal rule in writing: it’s impossible to write characters smarter than yourself. You can of course outfit characters with smarter traits—thesaurus words, Ivy League degrees, a higher IQ—but you can’t effectively embody a character that’s smarter than you, because you can’t think how they think. ChatGPT can never rise above median writing quality, because finding the median in writing quality is what it does.
The Longhouse flunkies of the world are already this sort of artificially intelligent. They go around digesting and regurgitating key phrases, and they think it makes them writers. The overseers of our brittle regime encourage this kind of plebeian creativity because they want us to feel purposeful amid the miserable existence they offer. You’ll own nothing, but you’ll be an artist. A romantic Boomer dream! The regime has convinced women that producing web copy for corporations generates more fulfillment than producing dinner for a family.
In the Soviet Union, “Poet” was an official state title. Likewise, the purpose of “creative consultant”-type roles is to entice us into indebting ourselves for life in exchange for decades of education centered around self-expression, of a kind generally associated with narcissism and/or self-pity. Today’s college essays, for example, are judged mostly on their ability to express personal trauma.
In reality, probably less than ten percent of people should go to college, and less than one percent should have creative jobs. Advertising genius David Ogilvy said, “No team can write an advertisement, and I doubt whether there is a single agency of any consequence which is not the lengthened shadow of one man.”
A great creative work requires a singularity of vision and a team of non-creative people to carry out the execution. It needs a couple of sparkers and many replicants. The Longhouse rejects this notion, flattening hierarchies and making sure everyone gets to “own” part of the final product. This is a great way to make sure everyone feels good while producing utter dogs**t.
Which is why the Longhouse has hollowed out virtually every corner of professional creativity, from advertising to Hollywood to music. The “mainstream” of any of these industries has never been more disrespected and ignored by its global audience.
The rise of AI presents an opportunity to escape from the Longhouse. It heralds the fall of the snowflake and the return of creative elitism. Intentionally mediocre art isn’t useful to human beings, which is why we call it “content.” It hypnotizes us, placates us, but it does nothing to stimulate or inspire. Put another way: the fact that a machine made a Drake song doesn’t show that the machine has a rich inner life. It shows that Drake doesn’t. Creatives and investors who harness the power of such a mediocrity machine can disrupt the human mediocrity that holds us down.
Deep Fried memes have a certain appeal. As do record scratches, which aren’t music, but can be used to make music. ChatGPT writing is the same sort of thing. The dubstep DJ uses sound-gunk to his advantage. Writers will use word gunk to our advantage. Fiction writers will use it to get a feel for the language a certain kind of character uses (particularly mediocre ones!). Filmmakers will research common historical accent styles and phrases without relying on other people. Showrunners can map out tragic relationship arcs based on the crappy advice men and women tend to receive. Advertisers, God forbid, can actually start to speak like their audiences again without being constantly “safety checked” by so-called creatives.
The big question is whether those newly obviated blowhards will go gently into that good corporate night. Odds are they won’t, which explains the furor with which managerial types are demanding top-down restrictions, regulations, and control of this new tool—if they can be in charge of scolding people about it, then they can maintain the illusion of being the two things they most obviously are not: useful and original. This is exactly the illusion that AI punctures. It is also an illusion which our bureaucratized masters have devoted decades to maintaining at no small cost in human misery and social contortion. They are not going to just let us cook.
But with a little effort on our part, they might not be able to stop us. Art has a way of evolving to help the cream rise to the top. We may hope that this is what AI represents. We’ve found a way to automate the parrots, and in doing so we will dash the dreams of mediocre thinkers everywhere who had been prospering under an illusion that they were creative. It will end the glut of mediocre art by finally, thankfully, making the chaff redundant so that the real creatives can get back to work.
The American Mind presents a range of perspectives. Views are writers’ own and do not necessarily represent those of The Claremont Institute.
The American Mind is a publication of the Claremont Institute, a non-profit 501(c)(3) organization, dedicated to restoring the principles of the American Founding to their rightful, preeminent authority in our national life. Interested in supporting our work? Gifts to the Claremont Institute are tax-deductible.