In today’s column, I am going to cover a snarky line that is increasingly being aimed at everyday authors and even professional writers in this new age of generative AI-produced content. The insidious line has been gaining steam, and my best guess is that it will continue to flourish for quite a while, regrettably so. If you write about just about anything at all and opt to post the content for others to see, the odds are pretty substantial that you will eventually get this obnoxious line lobbed at you.
Part of the reason I bring up the contentious matter is that there is expanding confusion over what is written by a human versus what has been written by generative AI.
Society is getting mixed up and turned around because of this phenomenon. You see, we’ve not had AI like this before, at least not AI that is computationally good enough to generate human writing at a massive scale, that appears fluent and nearly indistinguishable from human writing, and that is widely available at nearly zero cost to just about everyone on planet Earth who has an Internet connection and access to a generative AI app.
It’s kind of a modern-times change-up trifecta on the age-old act of writing.
This raises all sorts of AI Ethics and AI Law ramifications; see my ongoing and extensive coverage of the ethical practices of AI and the legal concerns about AI at the link here.
Begin At The Beginning Of The Brewing Storm
Before I bring forth the snarky line and undertake a deep-dive analysis of it, I’d like to lay a foundation for what this is all about. Hang in there, the payoff is worth it.
The general precept is that it is now hard to discern human writing from generative AI writing.
I’m sure you’ve heard of generative AI, the darling of the tech field these days.
Perhaps you’ve used a generative AI app, such as the popular ChatGPT, GPT-4, Gemini, Bard, Claude, and the like. The crux is that generative AI can take your text-entered prompts as input and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use and has now shifted into a new level of NLP fluency of an at times startling or amazing caliber.
The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.
In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive-scale pattern-matching on a large volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and the AI thenceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
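If you’d like a feel for the pattern-matching idea, here is a deliberately tiny Python sketch of my own devising. It is only an illustrative toy, not how any production LLM is actually built; real systems rely on neural networks trained on vastly larger data, but the learn-the-patterns-then-generate loop is the same in spirit.

```python
# Toy illustration of "pattern-match on human writing, then generate":
# a bigram model that tallies which word tends to follow which, and then
# produces new text from those tallied patterns.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug"

# Step 1: pattern-matching -- record each word's observed followers.
follows = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Step 2: generation -- repeatedly pick a plausible next word.
def generate(start: str, length: int = 8) -> str:
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))  # e.g., "the cat sat on the rug"
```

Feed the tally more and more text and the mimicry starts to sound more human, which, at gargantuan scale and with far fancier mathematics, is the essence of what an LLM is doing.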
I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.
Controversies abound about generative AI.
For example, I have covered the various copyright and Intellectual Property (IP) rights legal cases underway over generative AI being data-trained on Internet content that the AI maker might have infringed upon; see my analysis at the link here and the link here. Another issue is that students in school are at times using generative AI to compose their essays, which is generally frowned upon, while meanwhile students who sincerely wrote an essay by their own hand are getting falsely accused of using AI; see my discussion at the link here and the link here.
A zany twist that you might not be familiar with is that there are hand-wringing worries that the Internet itself will inevitably be overwhelmed with generative AI-produced content.
The deal goes like this.
Right now, we mainly have an Internet that consists of human-devised written content. I suppose that seems an obvious point. Allow me to nonetheless step further into it a bit.
People write stuff, at times nutty stuff, and post it on the Internet. Some liken this to the greatest democratization of writing in history since you no longer need to find a formal publisher to publish the things you might opt to write. No filters, no editors, no publishers per se that will restrict what you want to say in a written composition.
Sure, social media does have restrictions, but you can keep looking around to find a spot on the web to post your stuff, no matter how out-of-whack it might be. For my discussion of the murky, devious dark web, meaning the part of the Internet that most people never see, see the link here. One way or another, you can post your writing on the Internet. Period, end of story.
Generative AI such as ChatGPT, Claude, Gemini, and other akin apps are being used voraciously to produce content, some of which is being posted to the Internet by those who opt to do so. Believe it or not, major mainstream media news sources have been turning to the use of generative AI to craft their content. This is cheaper and faster to do than using human writers. They usually include a teensy tiny indication to let you know that the content was produced by generative AI. To confuse you or trick you further, some will actually assign a human-like name to the content as though it was authored by a person, see my discussion at the link here.
Sneaky, beguiling, some would say outrageous and unethical. The retort is that this is efficient and effective, and the reader is no worse for wear by this practice.
Assume that content produced by generative AI keeps being posted on the Internet. Who is more prolific, human writers or generative AI? The resounding no-contest answer is generative AI. Generative AI can run circles around any number of living human writers. All you need to do is toss more in-the-cloud servers at generative AI and keep those computer processing cycles running. The most massive-scale content-producing writing mill of all time is here in our midst. Welcome to modern times.
The Irony Of A Return Of The Jedi Possibility
There is an irony afoot. You’ll find this mind-bending, I believe.
When trying to devise generative AI apps, the first step involves data training the AI on human writing as found on the Internet (I noted this point a moment ago). The Internet at this time is a humongous source of human writing and is readily scanned to pattern-match on essays, narratives, poems, and all sorts of writing approaches. Out of this scanning comes the jaw-dropping fluency of generative AI.
But suppose that instead of scanning presumed human writing, the data training for generative AI comes across other generative AI-produced content on the web. There isn’t any straightforward means to determine which writing is which. All in all, generative AI might become principally trained on generative AI-produced content, known as synthetic data, rather than actual human-written content. Some argue that this will doom generative AI, in the sense that being data-trained on AI-generated data will yield a watered-down generative AI that no longer presents a semblance of human-equivalent writing quality.
Kind of the classic idea that a clone of a clone is going to be inferior, and that a repeated series of clones upon clones will degrade things swiftly. For my in-depth look at this open question about the future of generative AI and whether generative AI is going to crumble or fall apart due to data training on synthetic data, see the link here.
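To make the clone-of-a-clone worry a bit more tangible, here is a small back-of-the-envelope Python sketch of my own making. It is merely an analogy and not a claim about any specific generative AI app: we repeatedly fit a very simple statistical model to data sampled from the previous generation’s model and watch how much of the original variety survives.

```python
# Toy analogy for training on synthetic data, generation after generation:
# fit a simple model (just a mean and a spread) to samples drawn from the
# previous generation's model. Because each generation sees only a small,
# finite sample, the estimated spread drifts and, over many generations,
# tends to shrink -- a crude stand-in for the feared loss of diversity.
import random
import statistics

mean, spread = 0.0, 1.0  # generation 0: the "human-written" distribution
for generation in range(1, 41):
    # Produce a small batch of "synthetic" data from the current model...
    samples = [random.gauss(mean, spread) for _ in range(10)]
    # ...then train the next generation solely on that synthetic data.
    mean = statistics.fmean(samples)
    spread = statistics.stdev(samples)
    if generation % 10 == 0:
        print(f"generation {generation}: remaining spread ~ {spread:.3f}")
```

Run it a few times and the later generations typically retain less and less of the original spread, which is the gist of why some believe that AI trained on AI output gets progressively watered down.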
There you have it, a quandary of our own making.
Generative AI can spit out vast volumes of written content that resembles human writing. To get there, the AI needs to be initially data-trained on human writing. But we might soon be flooded with generative AI-produced content that engulfs the Internet and readily swamps the smaller proportion of human-written content. In turn, it is believed (by some, not all) that relying on the resultant synthetic data will essentially dilute and underpower the ongoing data training of generative AI.
Yes, we might end up with generative AI that no longer seems capable of producing seemingly on-par human writing. A travesty. Or some see this as just deserts. Their logic is as follows. If generative AI no longer produces human-quality writing, we won’t want to use AI for doing our writing. How will writing be undertaken? Aha, we will go back to hand-crafted writing. Humans will once again prevail. AI is defeated. Humans win.
Imagine that. Quite a wild ride. Generative AI has initially swooped in and seems to have obliterated the need for human writing. Efforts to advance generative AI entail feeding the AI the outputs from other AI. This gradually and radically diminishes AI fluency (again, some believe this will occur, others say we can avoid the downfall). The world once more shifts attention to human writers.
Human writing will have a grand resurgence. It will be the latest rage and savored by all. Authors and writers are once again able to hold their heads high. Tough going is set aside. For an interim period, they were thought to be has-beens. Put summarily in the junk pile. Voila, redemption comes in the form of AI falling apart at the seams, and only human writers can rescue humankind.
Whew, a real tear-jerker. A story of immense heartfelt emotion ranging from heartbreak to epic heart-filled human heroics.
Boom, drop the mic.
The Snarky Line That Rears Its Ugly Head
Let’s return to the matter at hand, namely the snarky line that I’ve alluded to.
It goes something like this (make sure to read the line with an accusatory and overbearing tone):
- “Which LLM or generative AI wrote that for you?”
Here’s why that is snarky.
Suppose you are a writer who has written something that you believe is an essay of incredible craftsmanship. You tried to be clever, witty, and otherwise pour your heart into your writing. Maybe you spent many hours, possibly days, even weeks, composing the content. Inch by inch, it became your masterpiece.
Upon posting the content online, there is always a solid chance that some won’t like what you’ve written. There will be negative reactions, for sure. There will also hopefully be positive reactions. The positive reactions keep you energized and excited about writing. Thank goodness for niceties and a kind word from time to time.
Focus for a moment on the types of negative reactions that might arise. Your writing was illogical, some might proclaim. Your writing was pointless, others might say. On and on the bashing goes. The number of possibilities is nearly endless for the adverse remarks you might garner.
The latest such adverse or cutting remark would be to say that what you wrote wasn’t written by you and was instead written by generative AI or a large language model (LLM).
I assure you that this isn’t being lobbed as a compliment. In some distant future, maybe it might be. Perhaps, if the world does end up being flooded with generative AI writing, one supposes that if people prize the AI writing, your being compared to AI will be the highest form of flattery. Look at this person, they wrote as elegantly as a machine. Applaud them from the rooftops.
Not so today.
The person using this snarky line is suggesting or asserting that your writing is so bland and unremarkable that it must have been written by generative AI. It is rote. It is mundane. It is lacking in human flavor. It is the bottom of the bottom when it comes to writing.
Some people who use that snarky line believe they are being devilishly clever in doing so. It is a nearly perfect insult. The claim is that your writing was done by AI. They aren’t stating this as a fact. They are wording it as a question. This gives them plausible deniability about the fact that they are actually seeking to ding you and take you down a notch. All they are trying to do, they would insist, wink-wink, is determine whether the writing was done by you or done by AI.
It is entirely innocent. It is merely a simple inquiry. If you blow up at the question, well, that’s on you. The person just floated a supposition. Don’t get your dander up.
Meanwhile, in their heart of hearts, they know exactly what they are doing. They have planted a seed that your writing is AI-based and ergo presumably simplistic or downright stupid. It is a cost-free accusation. No one can pin them down for being mean-spirited or otherwise acting like a jerk.
Here’s their safety net.
Since there are writings widely posted that were done by AI, they have every right in this world to ask whether a piece of writing is human-devised or AI-written. No one can blast them for this question. It is honest. It is sensible in today’s era of generative AI. They stand righteously on the high ground.
Do some pose the question on that plainspoken basis?
Yes, some use that question and believe themselves to be asking honestly. They don’t realize perhaps that a lot more are using that question as a clubbing device. Those who ask the question with fair and balanced intent are providing cover for those who turn it into a needling, conniving, underhanded dagger of an insult.
The ones who do this with evil intent are dancing with great delight that others will see their comment. It might stoke others to take the bait. They too will begin to guess that perhaps the writing was done by AI. An avalanche of herd-like behavior can ensue. The writer has no room to breathe and little chance to fight off the onslaught.
Notice too that the snarky line contains no foul words and has nothing overtly abrasive in it. Again, this is why it is so ideal. Social media rules will allow the line to be posted for all to see. If the person had used foul language in their put-down line, the remark might have been instantly stricken from the record by editors or automatic screening filters. Others would likely also pile on and berate the person for their abusive language.
Yep, this snarky line is growing fast and requires minimal effort to use while potentially inflicting maximum insult or affront on the writer being targeted.
It is the gem of writing barbs, snubs, and jibes.
Trying To Cope Is A Lot Harder Than You Think
If you are a writer and haven’t gotten this affront hurled at you, thank your lucky stars. I would also cogently suggest that you count the hours or days until it does happen. Enjoy the pleasant time until you get this line hurled in your direction. It will happen. Prepare yourself accordingly.
What can you do in response to this snarky line, if or when it arises?
There isn’t much that will overcome the blunt instrument. It is just that incredibly good as a takedown. You can valiantly try to fight it. I wish you well.
One approach consists of denying the claim. You respond by stating outright that you are a human writer, and that you wrote the piece completely by your own human hand. Be forceful and strike back with directness. That should finish the matter.
Nope, it usually won’t.
The chances are that this will spur the insult thrower into further action. Oh, I thought your post must have been AI. It sure seemed like it. Maybe you should consider changing your writing style so that it doesn’t resemble AI. Have you gotten this comment before? I’m sure you must have. And so on, the blathering goes.
As a writer, that kind of response is likely to get your blood boiling. The barb producer is goading you into additional discourse. Once again, the wording seems utterly innocent and aboveboard. Meanwhile, more insults are being threaded together. You “must have” been told this before, which is a backhanded insult implying that your style of writing wholly seems mundane or blithering and AI-devised. Etc.
What would you do at that juncture?
You could continue the discourse and try to respond to each of the added jibes. That will be like the old adage of wrestling with a pig, whereby you get muddier, and the pig enjoys the whole process. I am saying that carrying on those dialogues is typically fruitless and only adds more fuel to the fire.
There is an additional risk you take when responding with a fire-back perspective.
If the insult thrower keeps things polite and civil, your protests cannot afford to be over-the-top. Anyone else seeing the back-and-forth will potentially give credit to the barb maker and consider you pretentious if your comments are strongly worded. It will appear that you are on the defensive.
Maybe you are on the defensive because it is true that you used AI. You are desperately trying to cover up your infraction or transgression. The more heated you get, the more it looks like you must be guilty. The hole that the insult started is being dug deeper by your own exhortations.
Sad face.
And a deplorable happy face for the snarky line and the snarky person who lobbed it.
Okay, maybe you should opt to ignore the comment. Pretend it never happened. Blissfully continue without hesitation or disruption.
That seems a viable option, especially if the chances are that no one else will see the line. The risk you take is that others come along, they see the line, and they too start to believe that your writing was done by AI. Maybe they do not take any action and merely file the remark in their mind.
How many people might see the line, place it into their noggins, and keep that at the back of their minds whenever they see any of your other writing? Not sure. Hard to guess. Here’s what can occur. Say, I seem to vaguely remember this was the writer who was possibly an AI writing system. I wonder if that ever got cleared up. Doubt has been raised. It follows you wherever you go, and wherever you post your writing.
In this use case, because you didn’t respond, the line remains there as a permanent mark against you. The idea is that you never said it wasn’t true. This omission on your part is going to be troubling. Certainly, if you were a human, you would immediately denounce the question and assert your humanness. Without that clarification, the line stands as though it has merit.
See the bind you are in?
The Rabbit Hole Is Extraordinarily Deep
This is mind games on steroids.
You are darned if you do, and darned if you don’t. Responding might stoke the fires. You are adding fuel. Not responding can be taken as a default indication that you are AI. This seems logical. A human would be indignant and respond. AI would not be indignant and would not care to respond. Since there isn’t any response, it must be that AI wrote the piece. That’s impeccable logic, for some.
Mull this over.
Get a glass of fine wine, sit for a few minutes, use mindfulness techniques, and see what you can divine.
I’d bet that you might have come up with an alternative way to respond. This is what some try. They go the route of using a satirical reply. This gets them on the record and thus others will forever know that there was a reply. At the same time, the reply is considered tongue-in-cheek and might be sufficient to end the discourse.
Yes, that’s it, use a pithy satirical retort.
Like most things in life, even that proposed solution has rough tradeoffs and loose ends.
Suppose you say that yes, you are AI, and kudos to the person for finally catching you after all those years of your ostensibly human writing. The person is a genius. Someone finally figured out the hidden-in-plain-sight puzzle.
One problem there is that not everyone will necessarily get the drift of your satire. Satire is often delivered via facial expressions and vocal tone. In writing, satire can miss the mark. On your behalf, I certainly hope that most people will comprehend the joking nature of it. I hope so, for your sake.
Those who don’t grasp it will potentially take the response as an honest admission.
I don’t want to be the bearer of bad news, but I must do so. Be extremely wary of the fact that you seemingly have now admitted to being AI. This means that others who come along can take your comment and opt to run with it. Hey, this writer said they were AI. Wow, the truth is now known. By their own admission.
Your satire gets turned into a confession. Others who don’t care whether it was satire or don’t think to ask will run with the ball. The next thing you know, you have been branded as AI. This will be hard to shake free from. Why? Because you said it.
Sure, you will try to walk back what you said. You will declare over and over again that you were joking around. You were using satire. The thing is, once again, the more defensive you become, the more people will become convinced it must be a true statement. The assumption will be that the AI has been set up to crank out denials right and left. That’s the amazing thing about generative AI. It can keep going and going like the Energizer Bunny.
Unless you are fully confident that a satirical remark will land, perhaps make no admission at all and instead keep to the plain fact that you are a human. Please be cautious about using satire as your go-to in this use case.
Some try the rather brief and non-satirical approach of simply saying that you are a human and that your feelings are hurt by being accused of or painted as being AI. This can help get others on your side. If you merely state you are a human, that won’t likely invoke much support for your response. By adding that your feelings are hurt, there will be some brave souls who will take up your cause. They will defend you to the ends of the earth.
I trust that I’ve given you a slew of useful ideas about how to respond.
You will need to decide for each given situation what makes the most sense. Your reply is bound to be instance-dependent. Where the posting arises, what you’ve written, who wrote the snarky line, and a plethora of additional factors will dictate which response is going to get the most bang for the buck.
Good luck.
Conclusion
When I bring up this topic at my various talks and presentations on AI, I get all kinds of responses from attendees. Anyone who is a writer or a prospective writer will instantly pipe up with questions or other thoughts on the weighty matter.
Let’s do a sampler.
One is that everyone should summarily ignore such a snarky line. Let it be. Move on with your life. Well, as I mentioned earlier, the issue here is that the Internet never forgets. The line will sit there, like an online snarky timebomb, waiting for its day in the sun. It’s up to you to decide whether you are okay with that being out there. If you believe the odds are near zero that it will ever be an issue, fine, I agree that letting the line slide is seemingly a sensible choice.
Another comment I get is that by talking about the snarky line, I am possibly promulgating it. The logic is that people who don’t know about the line will now be cognizant of it. They will start to use it. My efforts to combat or call out the line are lamentably fostering it.
I decidedly understand this concern and appreciate the sentiment. I liken this to my writing about AI safety issues and cybersecurity considerations, such as at the link here and the link here. When I discuss how AI might have gotten hacked, I often get a few comments that by discussing the hack, I am going to increase awareness for evildoers who otherwise had not thought about AI hacking.
This is a somewhat philosophical question. Are we better off not talking about something that is happening, in hopes it will somehow fizzle out on its own? Or would we be better off discussing such matters, increasing broad awareness, and aiding all in understanding the nature of a looming issue or problem that needs to be addressed?
I tend to steer in the direction that knowledge is useful and should be judiciously shared.
I’ve got two more collected comments to discuss and then we’ll conclude this discussion for now.
One heated comment is that this all seems out of proportion. Nobody cares whether a writer is suspected or accused of being AI. It is a nothing burger. The writer should just keep on writing. Let the world do what it does.
I generally find that such a proclamation is made by those who aren’t writers. They are unaware that if a writer gets branded as being AI, they are likely to encounter troubles. They are seen as plagiarists, see my coverage at the link here. Their merit as a writer is questioned. It isn’t necessarily that they aren’t human; instead, it is that they seem to be cheating by using AI to aid their writing. For some, that is the kiss of death as a writer.
Many news outlets, research journals, and the like will not accept writing from a writer if the writer has used AI in the writing of their submission. They want purely human-written material. Some soften this by allowing a writer to declare what portion was AI-written, and if the AI portion seems less crucial or otherwise within the editorial guidelines of AI usage, the writer can proceed. This, though, is a dicey path. You can end up in a protracted battle over whether the AI-written content is suitable. This consumes a lot of precious time and effort that might have been spared by not using AI at all.
I would claim that this is a serious matter. On the surface, it might seem inconsequential or perhaps even comical, but to those who care about writing and do writing, it is a real issue of real proportions for their livelihood, career, and legacy.
The last comment is that whenever I write about this snarky line about writing, some respondent will think themselves especially clever by saying that the discussion itself was written by AI.
So, which LLM or generative AI wrote this?
None.
That’s the surefire, honest-to-goodness truth, so don’t be trying any of those snarky lines, or even non-snarky retorts or twisters, on me.
As the great English Romantic poet William Wordsworth once said: “Fill your paper with the breathings of your heart.” That’s what I’ve tried to do here. I am a human. Don’t try any of your snarky tricks on that.
Thanks for being human.