
By Jim O’Donnell (University Librarian, Arizona State University)
Editor’s Note: This article was first published in the “Back Talk” column of the April 2023 issue of Against the Grain.
The deadline for Back Talk is here again and the thought occurs irresistibly: could I get ChatGPT to do it for me? I have a lot of recurring obligations to produce a few paragraphs of thoughtful prose, so wouldn’t it be nice if I could muse aloud in my living room something like, “Alexa, write me a thousand words on the effect of artificial intelligence on writing columns for Against the Grain,” and call it a day? I have a lot of beanie baby friends who’d be glad to have more time to distract me.

We’re not there yet. The explosion in discussion of this topic has been fueled by the promotion and marketing surrounding ChatGPT, an LLM product whose ancestry and patronage include Elon Musk, Peter Thiel, and Microsoft — the company that specializes in cumbersome operating systems and super-elaborate software that takes decades to stabilize at a pretty mediocre level of performance. I’m not going to pay any attention to OpenAI’s business model and prospects: what will be, will be, and I wouldn’t bet against them, because in the world of technology systems, the mediocre may very well prevail. There’s always a good chance, of course, that they’ll get a lot of venture capital, burn through it, and go broke when they can’t find a way to make money on the deal. There’s always hope.
But what does this excitement tell us?
OK, in the first place, real progress in artificial intelligence has already happened and more will happen. I drive a thirty-one-year-old car that I bought because it had two then-new technologies that I thought could save my life, both of which were, in a primitive way, AI-powered. Anti-lock brakes can ingest larger quantities of data than I can, evaluate the data, and then issue mechanical commands, all faster and more reliably than I could. Pretty good. Air bags are a little less smart — they ingest data and have only one command to issue, which they do with an appreciable rate of false positives — but they can keep me alive too. We are all getting used to ways in which we’re letting machines do things they can do better than we can because they process data so well.
But the idea that a machine can do things with words as well as or better than we can — that’s a different story. I’m happy to point to the skepticism of some very smart people — like Noam Chomsky in his New York Times essay “The False Promise of ChatGPT” (March 8, 2023) — who think that any credible prose from AI is a long way off, but I’m also aware that we’re likely in for a flood of verbal trash. (It’s really interesting to me that the scholarly journal business has taken the position that ChatGPT can’t be an “author” because it is not able to take responsibility for what it has produced. That “take responsibility” category will get a lot of attention now — see, for example, the current investigation of research misconduct at Stanford University, whose written policy is that every name on an author line should be that of someone who is fully accountable for everything in the article.)
Here, though, I have one thought of my own. What I find interesting is the way we get our brains into a twist whenever we start comparing some other entity competitively to human beings — whether it’s computers or extraterrestrial beings or animals. Just a few weeks ago, I fell into a wonderful book by the late British philosopher Mary Midgley, Beast and Man: The Roots of Human Nature (London, 1978). In that book, she takes apart — quite humorously — the way people talk about animals. We make two mistakes: we anthropomorphize them and we demonize them.
We make animals humanlike when we name them and talk to them (and think they understand English) and imagine that they have emotional lives like ours. We’ll even do that when talking about ants in a colony. But then we demonize them into wild, ravening, destructive, antihuman creatures. The example that struck me was the shark. A shark is a modest creature, doing what it evolved to do, doing what it takes to stay alive and comfortable. This involves some violence towards other creatures and a non-vegan diet. But just how terrifying is a shark? Count up, Midgley suggests, the number of human beings attacked by sharks in a year and then look around to ask just what human beings do to sharks. Which one is really the wild animal? Are human beings really so special? Not only are we mostly non-vegan; our violence to other creatures extends, atrociously, to our own kind.
I was ready to read Midgley, I think, because of the AI conversations going on around us. We anthropomorphize machines and we demonize them. They’re going to do all the same sorts of things we do, only better — and we’re afraid of the competition — but they’re also mindless and destructive, and heaven help us if they achieve some kind of mental autonomy. The risk there, of course, lies not in the machines but in the humans. Design a machine to use its very basic skill — processing lots and lots of 1s and 0s rapidly — to do destructive and hurtful things, and you won’t like the results.
Now, I wouldn’t disagree for a minute if someone said that we are reaching the point where the ability of human beings as a species to control themselves in the use of the technologies they’ve developed is getting shaky. But that’s been true at least since the invention of nuclear weapons … or gunpowder. Excitement about technology and its risks is really excitement about human beings and their social and moral limitations. We have met the enemy, Pogo says, and he is us. Chatbots? Just let them curl up on your lap and cuddle, until you need to swat one on the nose with a newspaper when it makes a mess.
Postscript: I didn’t ask Alexa my question, but I did pose it to ChatGPT. The worst aspect of the result is that I asked for 1,000 words and only got 518. No student looking to cheat on a writing assignment is going to accept that kind of inadequate response! If you can bluff your way vacuously for 518 words, you can bluff your way for 1,000 — even a chatbot should know that. But which version do you like better?
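A technical aside for anyone who wants to rerun the experiment: the sketch below uses OpenAI’s Python client as it stood in spring 2023. The model name and the exact wording of the prompt are my assumptions, not a record of the actual session.

import openai

# Minimal sketch of the Postscript experiment (openai package, v0.x era).
# Assumed details: the model name and the prompt text below.
openai.api_key = "sk-..."  # substitute your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed stand-in for the ChatGPT web app
    messages=[{
        "role": "user",
        "content": "Write me a thousand words on the effect of artificial "
                   "intelligence on writing columns for Against the Grain.",
    }],
)

essay = response.choices[0].message.content
print(essay)
print("Word count:", len(essay.split()))  # compare against the 1,000 requested

Counting the words of whatever comes back makes the shortfall easy to verify.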
ChatGPT responds:
Artificial intelligence (AI) has revolutionized the way we live, work, and communicate, and the field of writing is no exception. Today, AI technologies are being used to generate content for various purposes, including writing columns for publications like Against the Grain.
The impact of AI on writing columns for Against the Grain can be seen in several ways. First and foremost, AI has made it possible to generate high-quality content quickly and efficiently. With the help of sophisticated algorithms and machine learning models, AI tools can analyze vast amounts of data and generate insightful, engaging, and thought-provoking content in a matter of seconds.
For example, AI-powered writing tools like GPT-3 (Generative Pre-trained Transformer 3) can generate entire articles, including columns, based on a given topic or prompt. These tools use natural language processing (NLP) techniques to understand the context of the topic and create content that reads like it was written by a human. This technology has the potential to significantly reduce the time and effort required to write a column, enabling writers to focus on other tasks.
Moreover, AI can also help writers to generate ideas for their columns. AI-powered tools can analyze the topics and themes that are popular on social media, news websites, and other platforms, and provide writers with suggestions for topics that are likely to generate interest and engagement. This can be particularly useful for writers who are struggling to come up with fresh and interesting ideas for their columns.
However, despite the benefits of AI in writing, there are also concerns about the impact of AI on the quality and authenticity of written content. For example, some experts worry that AI-generated content may lack the creativity, nuance, and emotional depth that is characteristic of human writing. Additionally, there is a risk that AI-generated content could be used to spread misinformation and propaganda, as it may be difficult to distinguish between content written by humans and that generated by machines.
Furthermore, some writers may feel threatened by the rise of AI in writing, as they worry that machines could eventually replace them in the workplace. While it is true that AI-powered tools can generate content more quickly and efficiently than humans, it is important to note that machines are not capable of the same level of creativity, emotional intelligence, and critical thinking that humans possess. Therefore, while AI may be a useful tool for generating content, it is unlikely to replace human writers entirely.
In conclusion, AI has had a significant impact on the field of writing, including the creation of columns for publications like Against the Grain. While AI-powered tools can generate high-quality content quickly and efficiently, there are concerns about the impact of AI on the quality and authenticity of written content. As such, it is important for writers to be mindful of the potential benefits and drawbacks of AI in writing, and to use these tools in a responsible and ethical manner. Ultimately, the best approach is likely to be one that combines the strengths of AI and human writers, resulting in a more productive, efficient, and creative writing process.
Maybe next time I attempt to get anything of use out of ChatGPT, I will include an instruction not to be repetitive. (I see your attempt generated the same problem!)