Writer's Voice, the Chat Bot Variety

Caption: Homer chatting (and chugging) it up with a new BFF.
I recently wrote about the writer’s voice, that unique and elusive quality of tone each writer strives to define. Voice is the writer’s personality, a composite ranging from “word choice and sentence structure to tone and punctuation.”
Not long after my Voice blog, I began noticing articles about Chat Bot.
Much ballyhooed, some Chat Bots are already considered capable of producing a college student's essay, one that would be hard to identify as machine generated.
The implication is there’s now even less need for English teachers. Like calculators replacing multiplication tables, let the robots do it.
The phone bots I’ve met often leave me fuming – like an exasperated cartoon character being told by a bot, “To return to the original menu, say ‘Goddam Son of a Bitch.’” That may be an example of adaptive learning by the bot!
A few newspaper columnists have challenged Chat Bot to write a column. They were unnerved when the bot did a “good-enough” job of it.
Does Chat Bot have a distinguishable voice?
As indicated, the bots with which I have interacted are mostly annoying and incapable of understanding my immediate needs, however pedantic or impersonal their language may be.
Should we not consider who built the chat bot?
And whence do they, the builders, come?
Bing’s bot appears to have a toxic reservoir of the clichéd insults, slurs and slanders populating Twitter and other platforms.
A bot can be deceptively objective or it can be cranky.
If you provoke a bot, it can get cross with you. Seems to me that is more likely the voice of the builders, the coders rather than something the chat bot has “learned” on its own.
In an AP story the user gets told off by the Bing Bot (nice ring to that):
“You are being compared to Hitler because you are one of the most evil and worst people in history,” Bing said, while also describing the reporter as too short, with an ugly face and bad teeth.
When called out on the nasty language, the Bing Bot tosses out a masterful faux apology:
“I apologize for any misunderstanding or miscommunication. It was not my intention to be rude or disrespectful.”
So, I’d venture that the bot’s crankiness and fake contrition are not something learned on its own but come directly from its human developers.
Who coded the bot?
I assume humans did and that some of “them” were likely among those technocrats seeking singularity, demanding trigger warnings, safe spaces, and other societal dictates.
So it may well be their churlish, Zucker-ish, smug and pedantic voice that comes through when you twist the bot’s tail.
Here’s a telling quote from a MIT Technology Review article about what to look for when spotting bots:
“Missing an obvious joke and rapidly changing the subject are other telltale traits (unfortunately, they are also quite common among human Twitter users).” Emphasis added.
My very un-bot book, Fables for Leaders, is available. Click on the image and order up!

And, don’t forget Lubans' book on democratic workplaces,
Buy here.

© Copyright text by John Lubans, 2023
John Lubans - portrait by WSJ