Feature 06.19.2023 5 minutes

Who’s on the Other End of the Chatbot?


Americans must demand agency in the development and implementation of AI.

Our tech shapes us. You and I have changed our habits, our daily rhythms, our postures to communicate effectively with our machines. We’ve put up with punch cards, CD-ROMs, floppy disks, command-line interfaces. We’ve learned to code, click, tap, refresh. For hours each day we hunch over our laptops and phones at the expense of our joints, neck, and spine.

The promise of the Large Language Model (LLM) is to interface with us on our own terms. Through voice or text alone, you can ask LLMs (and models for video, images, and audio) for whatever you want. You don’t even need to know how to code anymore. Just describe the kind of app you want made and ChatGPT, for example, will code it for you.

An ideal LLM would erase the distance between what you want, what you get, and how you get it. No more translating your desires into keystrokes or clicks: they can now come straight from your mouth. Speak and it’ll happen. That’s the kind of power that the Bible attributes to God the Father: “Let there be code.” “Let there be ad copy.” Let there be whatever you demand. (And if you want to ask God the Son a question, there’s an AI for that too.)

Of course, this tech will still shape you. Because if you operate LLMs by using your words, then you will have to use the words it responds to, in the way it responds to them. Whoever creates these programs will help determine what you think to ask for, and on what terms.

Want an argument for more fossil fuels? You might have to find a workaround. Or wait until Elon Musk releases his own LLM. That is to say: it’s currently the makers of LLMs that set the terms of engagement. Companies like OpenAI, Google, and Microsoft, which have spent billions of dollars to build these models, are also filtering and fine-tuning them, making sure certain biases remain.

This might be a matter of merely passing interest if LLMs were just going to be used as advanced chatbots—what does it matter if your toy is a little woke? But LLMs are already becoming ubiquitous and invisible. Microsoft, Google, and everyone else are going to start integrating them into software everywhere, so that their rules will be the rules of everything. And then political bias, censorship, and language control will show up inside, say, Microsoft Word. One day you’ll be typing out something innocuous about gender and politics. Next thing you know, a Clippy on steroids will pop up to ask if you really meant to say that.

In the Driver’s Seat

This is the bad news. The good news is that you have options. The better news is that LLMs, used properly, can help us end the often-abusive relationship we’re stuck in with technology and, ironically enough, help us become more fully human. Use them well and you can sharpen your thinking, explore your creativity, create useful things, and automate writing your weekly reports for your boss (who will have an AI read your report anyway; this is the future, by the way: robots writing things for other robots).

The difference between surveillance hellscape and futuristic dreamland will come down to ownership and control. The optimal scenario would be for each user to have his own personal LLM (trained on his data, photos, emails, documents, notes, and so on). These would have to be limited to the user’s devices, and the user would have to retain sole ownership, strict privacy, and freedom of expression.

There’s a long way between that ideal and even our current arrangement, in which purveyors of software and hardware alike can often help themselves to user data and intrude at will with unasked-for updates based on their own neo-Soviet style guides. Defending AI language models from this kind of interference will mean strong protections for personal data and control over the means of computation.

It will also mean the freedom to experiment with and develop a wide range of LLMs, so that if you don’t like ChatGPT, you can seek out other services offering the same functionality. Right now that variety is proliferating, compounded by plugins that expand what a given LLM can do.

The list of possibilities grows monthly, even weekly: text generation of all varieties, code generation for new apps, image generation from text descriptions, video, audio transcription, and so forth. The use cases are potentially limitless.

Imagine you’re a lawyer with all the histories, court transcripts, and documents that go with a case. You can use an LLM to understand it better, see connections, formulate and organize your arguments, and draft your opening statements and closing arguments.

Or imagine you have records of your health: your medical history, doctor’s visits, prescriptions, and everything else. Upload. Talk to your documents. Uncover what might actually be wrong, new potential treatments, diets, supplements, and more. It’s not a doctor, and you shouldn’t blindly follow the information you get. But what if you could go to your doctor with a little more understanding of your health—and challenge him when he’s trying to hook you on SSRIs?

What about your mental health? There’s an LLM for that—actually, there are many. One of them is SafeguardGPT, which simulates therapy sessions for its users.

But just because you can use an LLM for therapy, should you? Is one way of using it better than another? There’s a dilemma at the heart of every LLM: How much, and how, do you use it? What will you lose, and what will you gain? No technology is neutral, and certainly not an LLM. Usage will always form us in particular ways. Every time we outsource a task to an LLM, we lose the kind of formation that the task provided.

The path forward is to use LLMs with an awareness of what we gain and lose. LLMs are malleable. You can train them, fine-tune them, or use embeddings so that you don’t forfeit the formation you’d otherwise miss. You can have LLMs interrogate you, argue with you, and challenge your assumptions and your thinking. This will mean demanding agency and decision-making power over which LLMs we use and how.

You might not care about LLMs, but LLMs care about you. They’re being integrated into everything. If you cede understanding and control of this technology to large corporations and the government, you will grant those entities full control and influence over the language you’re using. Americans need active input into what kind of LLMs are available, and we need to take ownership over how we use them. You can either make these active choices now, or they will be made for you.

The American Mind presents a range of perspectives. Views are writers’ own and do not necessarily represent those of The Claremont Institute.

The American Mind is a publication of the Claremont Institute, a non-profit 501(c)(3) organization, dedicated to restoring the principles of the American Founding to their rightful, preeminent authority in our national life. Interested in supporting our work? Gifts to the Claremont Institute are tax-deductible.
