The title of Ari Warshavsky’s piece on chatbots, “Human But Not Too Human,” is telling. His emphasis isn’t on bots replicating the human; instead, he suggests that there’s a hazard in being “too human.” To Warshavsky, realizing the potential of chatbots requires that we optimize them for the task at hand rather than passing them off as human. “In the spirit of the Turing test,” Warshavsky writes,
Many efforts to develop chatbots in recent years were directed at creating the perfect conversational experience, a conversation in which we could not determine that we are talking to a machine… However, it would seem that some of the educational effectiveness in conversing with a machine lies specifically in abandoning this deception—the knowledge that we are talking with a machine and not with a person is actually liberating.
His colleague at Israel’s MindCET Ed Tech Accelerator writes, “Today’s chatbots don’t always try to pass the Turing Test—they are here to provide very specific services with the conversational interface defined by the type of relationship to be established by the user.” (I write about their work at some length here.)
All this points to a changing conceptualization of the problem: no longer is technology judged against the human standard. Looking back, we may question why it ever was; it would be akin to judging the automobile by its resemblance to horses, or airplanes by their resemblance to birds. Indeed, discontinuities occur when an emerging technology ceases to resemble the technology that preceded it.
In the realm of human creation, there can be reciprocity across the discontinuity, too. While cars and airplanes didn’t exactly change horses and birds, a human invention like photography changed the human technology that preceded it: painting. Photography freed painting from being humanity’s sole mode of visually representing reality. Thus released from its mimetic function, painting birthed Impressionism, Cubism, and other non-representational styles. (And it’s not just technology that can have this reciprocal power: feminism, for example, aims at one liberation but simultaneously offers another, transforming what it means to be a man almost as much as it transforms what it means to be a woman.)
So, too, can freeing technology from the goal of imitating the human (human language, personality, cognition, etc.) allow both the technology and the human to grow. In the work of MindCET, the chatbot no longer aims to be humanity’s (uncanny) double. The chatbot can teach not by being an inferior copy of a human teacher, but by opening a process of inquiry into what, exactly, it’s really good for. It seems likely that the answer will surprise us.
As I’ve written elsewhere, places where machines can successfully duplicate (or exceed) human performance are often places where humans weren’t necessary to begin with, and had to be dehumanized to do the work (the assembly line, for example). Yet there’s one chatbot that’s a remarkable exception to this rule: Eliza. One of the oldest chatbots, it has always seemed to me that no one quite knows what to do with just how good she is.
In fact, her inventor, Joseph Weizenbaum, intended just the opposite: he coded what he regarded as a dumb, rote chatbot to demonstrate how “superficial” human-computer communication was. Yet ever since she appeared over fifty years ago, Eliza has fooled her interlocutors by mimicking the formulaic responses of a humanist psychologist. Eliza uses a technique called template matching to transform a user’s utterance into her response to it (a sketch of the technique follows the transcript below). In spite of Weizenbaum’s intentions and Eliza’s limited technique, the outcome was quite powerful. Here’s an example of Eliza in action:
User: Men are all alike.
Eliza: In what way?
User: They’re always bugging us about something or other.
Eliza: Can you think of a specific example?
User: Well, my boyfriend made me come here.
Eliza: Your boyfriend made you come here.
User: He says I’m depressed much of the time.
Eliza: I am sorry to hear you are depressed.
User: It’s true. I am unhappy.
Eliza: Do you think coming here will help you not to be unhappy?
User: I need some help, that much seems certain.
Eliza: What would it mean to you if you got some help?
User: Perhaps I could learn to get along with my mother.
Eliza: Tell me more about your family.
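For the technically curious, here is a minimal sketch in Python of the kind of template matching Eliza relies on. It is not Weizenbaum’s original program (that was written in MAD-SLIP and used a richer keyword-ranking and decomposition scheme); the handful of patterns and pronoun “reflections” below are illustrative assumptions, chosen only to reproduce a few of the exchanges above.

```python
import re

# Eliza mirrors the user's words back, so first-person forms must be
# swapped for second-person ones ("my boyfriend" -> "your boyfriend").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "i'm": "you are", "mine": "yours", "myself": "yourself",
}

def reflect(fragment):
    """Swap pronouns in a captured fragment, word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

# (pattern, response template) pairs, tried in order; {0} is filled with
# the reflected text captured by the pattern's group. These rules are
# invented for illustration, not taken from Weizenbaum's DOCTOR script.
TEMPLATES = [
    (r".*\ball alike\b.*", "In what way?"),
    (r".*\balways\b.*", "Can you think of a specific example?"),
    (r".*\bi am (\w+).*", "I am sorry to hear you are {0}."),
    (r".*\bi'm (\w+).*", "I am sorry to hear you are {0}."),
    (r".*\bi need ([^,.]*)", "What would it mean to you if you got {0}?"),
    (r".*\bmy (mother|father|family|sister|brother)\b.*",
     "Tell me more about your family."),
    (r".*\bmy (.*)", "Your {0}."),
    (r".*", "Please go on."),  # catch-all keeps the conversation moving
]

def respond(utterance):
    """Return Eliza's reply to a single user utterance."""
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in TEMPLATES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

if __name__ == "__main__":
    for line in ["Men are all alike.",
                 "Well, my boyfriend made me come here.",
                 "I need some help, that much seems certain.",
                 "Perhaps I could learn to get along with my mother."]:
        print("User: ", line)
        print("Eliza:", respond(line))
```

Run on the transcript above, even this toy reproduces several of Eliza’s lines verbatim, which is exactly the point: nothing here understands anything. The program only recognizes surface patterns and reshuffles the user’s own words; the meaning is supplied entirely by the human reading the reply.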
It’s been over fifty years since Eliza first performed the role of humanist psychologist in a way that felt very real to her human interlocutors, yet we’ve seen no reduction in the need for psychologists. Did it surprise humanist psychologists to see themselves so easily replicated? Was there displeasure at the realization that their techniques were predictable enough to be routinized by a bot? Perhaps. But it wouldn’t surprise a psychoanalyst that we would project human attributes onto an agent that speaks our words back to us, or that we’d fill in the blanks to make her speech meaningful to our circumstances. Indeed, psychoanalysis understands the boundary between what is self and what is other (whatever the axis of identity: gender, sexuality, even human/inhuman) to be particularly porous, with consciousness traversing it without awareness that it’s doing so.
But perhaps Eliza’s deepest message is just how easy it is for machines to encroach on what we protect as essentially human. Perhaps the human isn’t as precious as we thought. Perhaps human intelligence isn’t so easy to distinguish from the artificial. Perhaps our high regard for ourselves is misplaced, and perhaps technology is only the messenger reflecting our degradation back to us.
But if it’s our degradation that we fear, that ship has sailed. As N. Katherine Hayles points out, the Turing test is itself a program we run on ourselves, a sleight of hand with our humanity that makes us see it in cybernetic circuitry. When we don’t recognize the truths that our technologies reflect back to us (e.g., that we’re manipulable, easily fooled, easily riled up), we risk trusting technology to do things it can’t, and we surrender our human capacity to know better.
Indeed, Jaron Lanier doesn’t have such a charitable opinion of this off-loading of the human onto the inhuman, and I’ll conclude here with his caution:
The Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?
People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. (You Are Not a Gadget, 33)
In this sense the problem isn’t with bots becoming too human, but rather with humans not being human enough.