There were dozens of panels on VR & AI at SXSW EDU, and I had a hard time choosing between them. Several things drew my attention to this one, but most of them made me think I would hate it.
First among the reasons I thought I’d hate it was that it hit close to home: at Rosetta Stone Labs, I had spent some time working on chatbots and similar mechanisms through which we might simulate natural conversation. I’m even an inventor on a patent application for a method of “producing controlled variation in automated teaching system interactions.” I’ve always been fascinated by this problem, but I’m also deeply frustrated by the paths I’ve seen chatbots taking. I worried that hyping chatbots as the next it-thing in ed tech would only degrade the conversation further.
My suspicion of chatbots was triggered at an even deeper level by the provocative title of the session: “outsource relationships.” While clearly designed to push buttons, the phrase reminded me of a particularly pivotal conversation in my own thinking about ed tech. I’d just transitioned from R & D to Sales in order to be closer to learners and to see Rosetta Stone in action (warts and all). I hoped that what I learned could inform future R & D breakthroughs, and I was filled with optimism. It was a time of full-on ed tech hype, so when I paid a visit to my old friend Amos Blanton (@AmosLightnin), whom I knew from my psychology graduate program and who was now at the MIT Media Lab’s Lifelong Kindergarten, I expected that we would bond over how we had both abandoned our mental health careers to revolutionize education with technology.
But Amos had drawn a different conclusion. I’m sure these aren’t his exact words, but I’ve never quite recovered from what he said to me over dinner near MIT’s campus. “The problem with ed tech,” he said, “is that you view learning as a matter of content. What we know is that learning isn’t about content but about relationships.” Suddenly, I was back in our psychology seminars, Irvin Yalom’s famous “It’s the relationship that heals” ringing in my mind. For me, healing, changing, and learning are inextricably interwoven, and my faith that relationships are at the heart of all of them is deep. In my experience, if we think back to transformative moments in our education, the learning almost always came in the context of a powerful relationship, even when the immediate cause might have been a life-altering book or mind-blowing idea.
I feared that panelists would be guided by a shallow definition of learning and relationship, and that the panel might never dig into the depths where these topics need to take us. I needn’t have worried. Avi Warshavsky, MindCET CEO, once began an article on chatbots with three long paragraphs on Martin Buber’s distinction between “I-Thou” and “I-It” encounters. Elsewhere, he cites Freud’s notion of the uncanny to account for our dark ambivalence about the almost-human. The work of his colleague, MindCET Pedagogy Director Ilan Ben Yaakov, on Rabbinic responses to technology further assured me that this wasn’t a team hyped up on chatbot magical thinking or deluded by flimsy ideas of learning and relationships.
Warshavsky began his thoughts on the relationship between AI and learning by reflecting on the advent of photography, and the way it transformed painting. He reminded us that the Impressionists and Modernists who questioned the very idea of a realistic representation of reality were watching an emerging medium do what only painting had done in the past: capture an image. The physics of photography could capture the exact light from a scene in a way that painting never could, mimicking the way our eye would capture that same light (albeit in black-and-white, and only if the scene held still for minutes at a time). Warshavsky seemed to suggest that the advent of photography marked a period of dramatic growth and expansion for both the old medium and the new.
By analogy, the advent of technological tools for knowledge and learning could inspire not just a new era for technology, but a new era for education, too. Like the encounter between painting and photography–when one medium butts up against another–the encounter between education and technology needn’t diminish the scope or ambition of either; rather, it can release both to explore new directions.
Critical to Warshavsky’s argument were the ways that human intelligence and machine intelligence meet. He suggested that AI might in fact help us understand the ways that intuition and calculation come together to form what we know as human intelligence. One without the other can be powerful, but incomplete.
Warshavsky recounted three distinct ways we employ machine intelligence: 1) machine-as-tool (e.g., a GPS that gets us to our destination or a relay switch that directs a message); 2) machine as a way to outsource skilled cognitive activities (e.g., calculators); and 3) machine as an extension of our body (prosthetics, smartphones). To Warshavsky, AI suggests a new direction, as humans and machines learn to collaborate. He quotes Garry Kasparov: “It’s time for humans and machines to work together.” After all, a human and a machine together can beat either a human or a machine alone at chess.
From here, Warshavsky became more bullish, positioning AI as a new Copernican revolution, inasmuch as he sees it changing how humans perceive themselves. To Warshavsky, the chatbot has a special place in this revolution, as education is rooted in the structure of Socratic dialogue, where the back-and-forth itself is the “midwife of learning.” MindCET, Warshavsky’s “EdTech Innovation Center,” has been focusing on chatbots and researching students as they engage in chat. They report that students are less anxious and inhibited with non-human agents. Children feel more free to experiment and take risks when they are unafraid of human judgment, yet they also feel an emotional connection with their bot interlocutor. At MindCET, they see this as suggestive of a new kind of personalized learning.
To me, Warshavsky’s ambitions for AI weren’t fully persuasive, but he concluded with a critical point that differentiated his ideas from so many others. He was unapologetic about a chatbot’s imperfections. For him, these imperfections were a feature, not a bug. Learning was meant to be inefficient: a long journey, full of dead-ends, trials-and-errors, messes, and missed opportunities. Indeed, this is exactly what it feels like to chat with a bot. In this way, his vision for personalized learning wasn’t simply adding efficiency to the same old process, but rather breaking the system open—perhaps not quite to deep human relationships-that-heal but to flawed dialogues that teach, as much in their imperfection as their utility.
In fact, Eran Hadas goes even further in his article in EdTech MindSet, MindCET’s house magazine. He notes the very human temptation to mess with chatbots. Sometimes we’re mischievous (making dirty jokes to Siri); sometimes we’re malicious (“Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day,” screamed the headline at theverge.com).
And as a Philadelphian, I can’t not mention this:
(On what we do to AI agents in Philadelphia and this man’s boast)
With this in mind, releasing bots to work with children and teenagers seems like a recipe for disaster, but what if a social experiment that can go horribly awry teaches the most important lesson anyone can learn? Indeed, any interaction involves generating a “theory of mind,” an evolving model of your interlocutor’s emotional and cognitive processes and how you interact with them. When our interlocutor is a machine, the way we generate a theory of mind is akin to the way we figure out game mechanics in a video game. For teenagers, whose most important psychological (and existential) task is figuring out how to navigate a world of other people, gamifying the way we mature our theory of mind might be the best outcome of all, even as it’s messy, divergent, profane, unpredictable, and just weird. As Hadas says,
There is something that arouses curiosity when we know that there is no person on the other side. We want to test the limits of the bot, to repeat things and see if the response is the same, or to try to get it to do something funny. The conversation becomes a kind of game, which is a preferred method of learning.
In other words, with a non-human agent we learn more because we’re more active. We have to do more work in the exchange, and we can experiment, iterate, and learn to learn with an other who has no feelings to hurt and no judgment to fear. By foregrounding the weirdness of it all and defamiliarizing the academic dialogue, the chatbot may spotlight the learning process itself as one does the cognitive and emotional work of encountering an other.
Following Warshavsky, Ilan Ben Yaakov echoed Maya Georgieva in positioning us at a transition from an internet of information (where we create, consume, and share information with one another) to an internet of experience (where we create, consume, and share experiences with one another). (See my write-up of Georgieva’s session here.) What, he asks, would the Wikipedia of experience be?
He critiqued the current state of VR for being individual (rather than social), passive (rather than active/interactive), and unreal. “Here,” he said, “is where AI can help… AI brings intelligence to VR. VR brings experience to AI.” AI, for example, can help VR recognize objects–a chair, a table, etc.–thus allowing the user richer ways to interact with them. He offered the image of a human playing checkers with a robot in VR, with the robot learning checkers from the human while teaching the human to interact with robots. Neither would be possible without AI.
And while neither playing checkers nor gamifying the development and practice of theory of mind rises to Martin Buber’s I-Thou encounter, we may yet have something to learn from machines that could better prepare us for the essentially messy business of education and human otherness.