Can we please stop applying human traits to chat bots?
Let's start the new year off with a rant, shall we?#
Tech world, we need to talk. Not you, normal people out at the grocery store; you couldn't be bothered all that much with AI or LLMs, so I'll just talk to the tech folks this time. I have been watching and listening to videos and podcasts all year, in varying levels of hype, about AI and these super cool but also probably quite crappy LLMs and all their catchy new features. Every host of these shows has a slightly different take on AI, but one thing that comes up a lot, and drives me up the wall, is when folks anthropomorphize LLMs. So this is a petition to cut it out.
What does anthropomorphize mean?#
I've typed it twice and I'm exhausted. Thank goodness for spell check VS Code plugins.
Anthropomorphize: verb;
attribute human characteristics or behavior to (a god, animal, or object).
“Anthropomorphize” is a long word that means to assign human characteristics, behaviors, or traits to an object that isn't necessarily human. And we humans do it all the time, to everything. Did you know that ships “have a gender”? (They're women.) Cars are mostly women too. It's very common to give a car a human name. So common that there are movies about it. Remember Herbie and Christine? Those references date me, but both cars had human names and were sentient, with human emotions like jealousy and anger.
I'm neither an anthropologist nor a neurologist, so I haven't got the foggiest idea why we anthropomorphize non-human objects, but we do it a lot. We say that really smart birds can “talk” to us. (An accidental analogy to LLMs.) When the technological marvels in our world don't do what we expect, we say things like “It must be grumpy today”. There are a million examples.
In general, I would say that we anthropomorphize—I'm going to set a record for the number of times that word appears in an article—things that we care about and to which we have attachments. Not always, but I would guess that the closer the object is to us in our daily lives, the more likely we are to assign it human qualities.
Therefore, I feel the need to implore you to
Please fight the urge to anthropomorphize LLMs and chat bots#
I know this will be easier said than done, and that perhaps the ship has already sailed, but I think it's important, and it drives me crazy when I hear LLMs assigned human qualities out in the wilds of the techiverse.
I'm not going to name names or call anyone out specifically, but here are a few examples I've heard or seen recently that drove me crazy.
Heard a podcast personality refer to an LLM as a “he”
I cannot even begin to describe how weird it is to me to assign an LLM a gender. We have enough problems with description, reference, and assignment of gender for humans! (Don't get it twisted: trans rights are human rights.) The last thing we need is to start assigning gender to LLMs, especially if the default is “he”. As an unwitting member of the patriarchy…eww.
Heard a podcast personality say they “had a conversation with Claude”
No, podcast person, you didn't. You did not have a conversation with the statistical word-predictor machine. A conversation happens between two human beings. You cannot have a conversation with a pile of binary. You typed your words into an input box, and a pile of math sent you back a formatted API response.
Anytime anyone says an LLM is “thinking”
I cannot tell you how many times I've heard human beings say things like “I prompted it, it thought for a bit, and came up with an answer that it thought was correct”. LLMs do not think! There is no actual thinking going on. I hope all of us know that LLMs are calculating which word is most likely to come next and printing that to the screen.
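To make “most likely to come next” concrete, here's a toy sketch of next-word prediction. This is emphatically not how a real transformer works (those use billions of learned weights over subword tokens, not raw counts), and the corpus and function names here are mine, invented for illustration. But the core loop is the same idea: score the candidates, pick the top one, print it.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus". Real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: follow_counts["the"] -> {"cat": 2, "mat": 1, ...}
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    # "Most likely next word" is literally just the highest-count follower.
    # No deliberation, no concept of self, no understanding: a table lookup.
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> cat ("cat" follows "the" most often above)
```

There is no deliberation anywhere in that loop. Scaling it up and fitting it to human text makes the output read like deliberation, which is exactly the trap.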
It's important that we don't think of these things as human-like#
I know that it's going to be difficult to come up with words that describe these prediction machines without using human-like terminology. The output they create is human-like enough to trick our brains, just like the squawking sounds of birds that “can talk”. Birds aren't ever talking. The ones that do it best are smart enough to know that mimicking our sounds gets them attention and treats. LLMs aren't smart. They cannot be; they don't have brains. Thankfully, they can only do what they have been programmed to do, even if that program is open-ended and non-deterministic in its results.
But I think it's important that we try not to anthropomorphize these objects. If we go too far in that direction, we will lose our ability to discern the quality of their output. There have already been discussions online about who is “nice” to their LLMs and whether or not we should be polite when typing into the form fields on the UIs that front them. If we anthropomorphize them too much, who is to say that we won't start hesitating to correct these models out of that same desire to be polite? It might be a bit of a stretch, but I compare this to when couples stop feeling comfortable kissing in front of the family dog because they've assigned some human feeling or emotion to the dog, which couldn't give a flying rat's ass and has no idea what is even going on.
We should never hesitate to keep these models in their place: tools (of varying usefulness) in a tool belt. When my hammer stops doing its job, I won't console it or try to teach it; I'll just use a different hammer. The more we assign human qualities to these objects, the more objectivity we lose in our judgment of their usefulness. We will start giving models that produce atrocious output “points for trying”, to our detriment.
Model and bot creators stack the deck against us#
The builders that make these chat bots do this on purpose. They KNOW that giving their prediction state machine a human-like name—looking at you, Claude, Devin, Watson, Siri, Alexa (credit due to ChatGPT for surprisingly NOT doing that; honorable mentions to Grok and Gemini)—will entrench that attachment in our brains. They know more of the psychology of anthropomorphism than I do. It reminds me of the quote from I, Robot where Will Smith's character says to the robot scientist, “Why do you give them faces? Try to friendly them all up, make 'em look human?”
It is 100% by design.
These LLMs refer to themselves with human pronouns in their output out of that same design, plus a smidge of necessity. The builders want us to get attached to their product so we'll give them all our money. But the LLMs have been trained on human speech, and only humans have any concept of “self”, so only humans need a way to refer to themselves. These models were likely predestined to refer to themselves using human language, since that is the only language they could possibly have adopted for their output.
End rant#
I know it's hard, but can we please try to stick to pronouns like “it” to refer to these things? Can we stop pretending that these bots are human-like? In researching the names of popular chat bots, I stumbled on a Reddit thread where folks were having their chat bots name themselves. Could we also stop doing that, please? It will be tricky, and the responses these tools give us will trick our brains, so we're fighting our own internal pattern recognition machines inside our skulls (which are like way, way, way, way, way, way better than the bots and a trillion percent more convincing to us as well) every time we read the output from a bot.
But if we stay on this same path, guided by the folks trying to make trillions of dollars off our ever-increasing context usage, we'll start losing our objectivity about them.
So remember, kids: it's a prediction machine, not a person.
/rant
The End