The idea for this blog was born in a moment of frustration. It came when I was reading a piece that I mostly agreed with, by a writer I admired.
The piece, “The Futile Resistance Against Classroom Tech” by David Perry, appeared in the December 2017 issue of The Atlantic. Its melodramatic title belied banal speculation about the future, coupled with a misrecognition of the present. Perry is bullish about the future and impatient with anyone who thinks they can stand in its way, but in the way that 1950s futurists were bullish about TV phones.
This is the last generation, pending an apocalypse, in which it’s possible to imagine separating students from their tech. It’s a moment to begin seriously thinking about the pedagogy of teaching a cyborg.
The teacher, trained to teach in the 2010s, wants to say, “close your laptops and put away your phones.” But when the phone is embedded in a fingernail, what can a teacher do? …It’s still fairly easy to spot students using their cell phones in class—but when the smart pen or smart textbook sends messages directly to the contact lenses of students, teachers aren’t likely to even notice.
Perry’s vision of a near future where cyborg students have computers in their fingernails or displays in their contact lenses completely misses the point: cyborg education is here, and it has little to do with whether teachers ban personal devices from the classroom. This isn’t the last generation of anything; rather, it’s the first generation that looks to YouTube every day before it looks to a teacher. This generation already has read-write access to an unprecedented corpus of cognition, emotion, and expression, theirs and that of their entire network, not to mention the accumulation of human knowledge and cultural production on servers, at their fingertips, wherever and whenever they are. Cyborg teachers teaching cyborg students is our contemporary moment.
For Perry, the impact of technology on learning is a matter of gadgets and teacher surveillance. But cyborg education has never been about devices, even those embedded in fingernails or corneas, or about the rules that will regulate them. Technological devices, and the rules that govern them, have been intertwined with learning at least since the abacus and the stylus. Indeed, our learning-mind has always been augmented by learning-tools (and occupied with learning how to learn with learning-tools).
What’s changed is this: for the cyborg student, cognition, hypothesis-testing, and decision-making are distributed between human and technology, executive functioning and algorithm. In the cyborg, data flows freely between silicon-based processors and carbon-based life.
It follows that the education of a cyborg can finally be decoupled from the acquisition of information and the rote practice of skills. Instead, education can become a lifelong process of cognitive and emotional adaptation; of the management of attention and processing capacity; and of the distribution of cognitive tasks between our minds, other minds, and our technologies. Finally, this vision only makes sense if it’s communal, as it’s no longer about the performance of any given individual but about the performance of whole networks and ecologies.
Moreover, the optimist in me thinks it’s about something more: our newfound capacity to create, author, produce, and collaborate offers opportunities for democratization and equity that were never available when knowledge (its record, its mode of production, its processing and transformation) was the domain of a privileged few. Access to information, resources, and processing power has never been more widely available. This can finally bury a model of education and learning that dates back to the industrial revolution, when consuming a standardized curriculum of content prepared the masses for standardized roles in a highly rationalized, hierarchical social organization. Though the social order and industrial economy were never quite as static or monolithic as we might imagine, that vision informed a model of education that aimed to produce graduates filled with pre-determined content and practiced at rote skills. Teaching methods and pedagogies; textbooks and curricula; the design of schools and the funding of educational systems all followed from this vision.
The radical disruption will occur when the edifice of static learning outcomes, delivered to students as pre-packaged content and then assessed on standardized tests, is abandoned once and for all. The ed tech industry’s biggest mistake has been trying to maximize the efficiency and efficacy of this old model of content consumption and skill building, rather than being at the vanguard of its disruption. It doesn’t matter if content and skills are delivered on personalized platforms, with adaptive systems, optimized by AI and machine learning. Optimizing speed and efficacy doesn’t change the fact that you’re living in the old paradigm, driving towards the same old outcomes; closing achievement gaps isn’t useful if those left behind are merely lifted to grade level in a useless paradigm.
Students are no longer dependent on experts or gated repositories for the information they need to succeed; they don’t need to consume a set of prerequisites to partake in a shared culture (cf. Matthew Arnold), nor do they need to memorize a set of information or practice rote skills to succeed. In fact, they’re not dependent on information at all. They’ve lived their whole lives in a land of networked expertise on demand and information at their fingertips.
If education is finally liberated from information, memorization, drills, and the like, then it can finally realize its true purpose: students’ self-discovery and personal development, their journey to become citizens who can live and work effectively in a diverse, changing society. That journey starts by breaking out of set curricula and rejecting prescriptions for what must be learned and how it will be assessed. Instead, the primary directive can only be freeing learning to discover its own path. We can only see what this means by setting learners free to do, create, produce, collaborate, connect, and ultimately to see what happens. It’s only this last, the seeing what happens, that is true learning. If we know the desired outcome beforehand, it’s not learning; it’s the repetition of someone else’s learning.
We evaluate learning by testing memory, but if you’re really learning, you’re not just inscribing information in your long-term memory; rather, your entire mind is changing. What is learned isn’t an object for your faculties to process, regurgitate, evaluate, or apply. Learning is when those faculties themselves change, adapt, expand, or gain capacities they didn’t previously have. Those mental faculties have always had tools (tools in your body itself: eyes, ears, fingers; and external tools to write with, see with, and so on). But until now, technology always existed external to the activities of the faculties (thinking, reasoning, feeling, imagining, forecasting). Now, we think, reason, feel, imagine, and forecast with (and within) technology. Our processing, creating, expressing, and learning are essentially networked with others, both human and machine (which are sometimes indistinguishable).
We won’t detect learning by evaluating pre-defined learning outcomes; rather, we’ll only be able to tell whether real learning is going on by looking for outcomes in the world. As Kurt Lewin’s theorization of action research taught us, you only learn about the world by trying to change it. Learning outcomes will be determined by the changes we witness. When we learn, we change. The imperative to learn and change has never been more urgent; and the technology enabling (and threatening) our capacity to learn has never been greater.