Soon after ChatGPT was released, an artificial intelligence researcher from one of the big tech companies told me that I shouldn’t worry about how the technology would affect how students learn to write. In two years, she assured me, only aspiring professional writers would enroll in writing classes; no one else would need to write anymore.
I remembered that conversation recently when I got advance access to Google Docs’ new “Help me write” feature, which is expected to be rolled out to all users soon. Once you have this feature, it becomes the default in Google’s word processor: The magic wand appears every time you open a document, ready to generate and revise text for you. If you want to write yourself, you have to dismiss the feature.
Just as it’s hard to imagine life before spell check today, we may soon forget what it was like to open a blank document and start typing without an AI “assistant” completing — or initiating — our thoughts.
Google’s “Help me write” is joining a crowded field of AI-powered writing assistants. Microsoft’s Copilot promises to “jump-start the creative process so you never start with a blank slate again” by providing “a first draft to edit and iterate on — saving hours in writing, sourcing and editing time.” Grammarly promises that its “personalized generative AI co-creator” will help you “compose and ideate” so “you never have to experience [writing] alone.”
These companies promise their AI assistants will boost our productivity, liberating us from the drudgery of writing so that we can use that time to do more important work. Here’s the problem: In many cases, writing is the important work.
Writing is hard because the process of getting something onto the page helps us figure out what we think — about a topic, a problem or an idea. If we turn to AI to do the writing, we’re not going to be doing the thinking either. That may not matter if you’re writing an email to set up a meeting, but it will matter if you’re writing a business plan, a policy statement or a court case.
While AI assistants might be able to support our own thinking, it’s likely that in many cases they’ll end up replacing it. In a recent article in the Chronicle of Higher Education, Columbia undergraduate Owen Kichizo Terry described using ChatGPT not to refine his own ideas but to generate the substantive components of his college papers, leaving him only to stitch those ideas together. By relying on AI to generate ideas, create an outline and provide specific instructions for writing each paragraph, Terry wasn’t using an AI assistant; he had become the assistant — and so will we.
Once we let the chatbot fill the blank page, the bot’s text will shape our understanding of the topic — with whatever limitations, biases and errors go with it. To effectively assess AI-generated drafts, we’ll need to be able to ask difficult questions, analyze evidence, consider counterarguments — in other words, to do the same important work we do when we write ourselves. But if we no longer value doing our own writing — if every time we open a Google or Word document, we’re prompted to save time by turning to the bot — we may get to the point when we don’t know how to think for ourselves anymore. Even if we don’t lose our jobs to AI, we’ll lose what matters about them.
I drafted multiple versions of this essay before I got to the version you’re reading now. I didn’t use an AI assistant because I was not interested in finding out what an algorithm would predict someone could say about this topic. I wanted to figure out what was troubling me about it.
Here is where I ended up, at least for now: While the rollout of writing assistants is inevitable, our relationship to them, no matter how much tech companies suggest otherwise, is not. If we wave that magic wand uncritically, we risk outsourcing not just the mundane but the meaningful.
Jane Rosenzweig is the director of the Writing Center at Harvard College and the author of the “Writing Hacks” newsletter.