ChatGPT is not your friend. It did not come to save you from work.
When people say, “I know what I want to say, I just don’t know how to say it,” they see ChatGPT as the answer. They tell ChatGPT what they think they want to say, and then ChatGPT “clarifies” it for them.
There are so many problems with this process, and I’m going to do my best to give an orderly list—but each of the problems both stems from and builds on the others—a true Möbius strip of ethical and practical issues, an evil snake swallowing its tail. So let’s just begin here …
It’s Stolen
The first problem is that it’s ethically dubious, and I’m not even talking about the part where you’re presenting it as your own work. Those words aren’t being created by a computer—they’re being re-created. ChatGPT is a Magnetic Poetry kit of stolen language. It has been “trained” by being fed vast quantities of text—much, if not most, of it used without the consent of the authors. As a writer, I find this deeply disturbing, and I’m far from alone: seventeen famous authors sued OpenAI for copyright infringement for using their work without permission. If we continue to use ChatGPT in the face of this knowledge, what are we saying we believe about ownership? That once the theft has happened, it can’t be fixed, so … oh well?
It’s Rehashed
The second problem is a direct result of those origins: whatever ChatGPT “helps” you to say is, to varying degrees, something it has read before. Would you like everything you write to be someone else’s recycled ideas? Again, this is ethically problematic from the perspective of submitting it to colleges as your own work—but not only that. Do you really believe that you have *nothing new* to say to the world? I don’t believe that. You shouldn’t, either.
It’s Lazy
ChatGPT allows for lazy thinking. I’m not here to criticize you for finding more efficient ways to get work done. This isn’t about a puritanical belief in working hard for the sake of working hard, or in being a better person because you took the more difficult path.
If you used an Apple Vision Pro to “work out” every day, you wouldn’t actually end up with muscles. If you want to be smart and thoughtful about hard things, you have to build those brain muscles. If ChatGPT figures out for you what you meant to say, then you never developed the skill of writing out the poor version, thinking more about what you meant, and getting to the right version—which means you won’t be able to do that next time, either. That’s a problem for your thinking, not just your writing. But it’s a real problem for your writing because …
It’s Not Always Right
If you rely on ChatGPT because it’s usually right … what do you do when it doesn’t understand you? When I work with students, they might work out their story onscreen or out loud, and I’ll tell it back to them and ask, “Is that right? Is that what you’re saying?” That helps them hear what they’re actually conveying to a reader or listener. A coach will show you where you aren’t clear and ask you to say it again, instead of saying it for you. And the process works because … I’m human. I read a lot—both published, well-written pieces and student pieces that aren’t yet clear. I have gone through this authentic dialogue again and again. I know what you meant to say because I know what common grammatical errors are aiming for, and I have a human’s sense of inference: I can use context and other examples to see what you meant by your unclear phrasing. ChatGPT can only make connections among things it has “read” before. It can make completely inaccurate connections or read sarcasm as fact. It will eventually let you down.
It’s Stealing Your Birthright
This is the most abstract point—and yet the most fundamental. Descartes wrote, “I think, therefore I am.” Thinking, at some level, is the essence of being human, and if you let ChatGPT decide what you meant, you are outsourcing that gift. You are relinquishing your innate human right to have your own thoughts.
What of any importance do you do that doesn’t require thought? What will you do when you face a challenging question ChatGPT can’t answer for you? It can probably plan “three perfect days in Paris” for you, but it can’t tell you which sights you truly want to see most.
The harsh truth is that “I know what I want to say, I just don’t know how to say it” is rarely accurate. What it really means is that you don’t actually know what you want to say. And if you wait for ChatGPT to tell you what you think, how will you know whether that was actually your thought or just what ChatGPT told you that you thought? When you write your college essay, you explore your own memories and come to understand new things about yourself. If you skip the work, you miss that development.
And worst of all … how will you express the deeper thoughts that no one else has already thought? If everyone relies on ChatGPT, how will we address new questions and flesh out new ideas? Colleges are cracking down on the use of ChatGPT for exactly this reason. College is not just another hoop you need to jump through to be allowed out into the “real world.” College is trying to train you to solve the world’s big problems (and we have a few!). I have seen people comment on social media that if you create a great paper using AI as a tool, then you should get a great grade.
That presumes that the goal of the exercise was to get a paper. It wasn’t. Better papers have already been written about whatever your undergraduate professors are assigning (those papers are what ChatGPT cheated off of). The goal was for you to learn how to find out what answers already exist, to learn how to synthesize them and communicate them effectively. Eventually, when you are ready, you will be prepared to seek answers that don’t already exist, synthesize them with existing information, and communicate them effectively. The world needs your best thinking because ChatGPT is not going to save us.