Agency, Automation & Reciprocity, Part I: Lanier, Rushkoff, O’Reilly

SOMETIMES TALK ABOUT TECHNOLOGY TAKES ON AN EITHER/OR QUALITY IN A BOTH/AND WORLD.

We can’t seem to decide whether technological breakthroughs have ushered in an era of unprecedented human freedom and vitality, or whether we’ve enslaved ourselves to algorithms that direct our movements and dose our neurotransmitters. Has our economic well-being been stolen by the tech oligarchs of a new Gilded Age, or can untold efficiencies scale new plenitudes, a great leap forward in quality of life, health, and education? Should we be terrified of the technologies that are taking our jobs and hijacking our cognition, or grateful for the automation that’s freed us from toil, vitalized the entertainment and distractions we love, and augmented our cognition?

I’ve been reading the works of three authors who seem uniquely capable of thinking through the both/and complexity of these (and other) issues.  Douglas Rushkoff, Jaron Lanier, and Tim O’Reilly each describe–and articulate a vision for–an ambivalent relationship between human and machine.  It’s not your imagination: we are both programmer and programmed, living both a dream and a nightmare.  But Rushkoff, Lanier, and O’Reilly all emphasize our human agency in determining our fate, even as their urgency hints that it’s getting to be too late.

This post will be anything but a comprehensive overview of their work.  Their thoughts are vast and penetrating, thrilling and depressing.  Among the things I won’t cover, Lanier offers unique visions both for a web-based economy and for virtual reality that deserve deep reading and consideration.  O’Reilly presents a history of how we’ve ended up here that unifies a lifetime’s worth of tech stories into one coherent narrative.  And Rushkoff’s outline of the values embedded in circuit boards, CPUs, compilers and code clarifies what’s been engineered in (and out) of our contemporary moment.

What I will focus on here is the way they interrogate human agency and how they paint a picture of human-machine interaction that captures the mutual influence, reciprocity, and the both/and ambivalence of the relationship.  In this way, they bring to mind a key insight of the psychoanalytic tradition: that the boundaries between self and other, subject and object, and actor and acted-upon are essentially blurred and porous.  In this world, the truth itself is a moving target, and we can trust neither our self-awareness nor our awareness of the world around us.  Moreover, individuals can’t be known outside of their systems and networks; and the human (including our biology, neurology and psychology) and the technological intertwine.  This means that even our best ways of knowing–our reasoning, observation, the scientific method–can’t offer knowledge untainted by our own embeddedness in what we’re knowing.

While I’m pretty sure Lanier, O’Reilly and Rushkoff wouldn’t explicitly sign on to a psychoanalytic approach, what they describe has deep implications for the psychodynamics of our era.  Especially key to each is a technology’s moment of emergence, when we encounter something new and negotiate its adaptation and adoption.  In other words, they all ask what people do when they encounter a new technology.  Do they adapt to their technology?  Or does the technology adapt to them?

Before we can ask Siri and Alexa to order carry-out, or groceries, we have to adapt to making requests of inanimate objects.  Before we use Lyft to get a ride, we have to adapt to the very idea of being connected to on-demand transportation. As Douglas Rushkoff points out throughout his powerful polemic, Program or Be Programmed, for the relationship to work, there’s an essential reciprocity here–even as machines are being optimized for humans, humans are being optimized for machines.  For Rushkoff, the balance of power and influence is critical: it’s up to us to remain the programmers, and to assure that the machines reflect values of our choosing, and not the other way around.

Lanier insists that the human aspects of this two-way relationship remain front-and-center, writing, “the most important thing to ask about any technology is how it changes people.” (You Are Not a Gadget, p. 36).  Tim O’Reilly asks similar questions, though he is the most enthusiastic of the bunch, marveling at the ways machines augment human capacity, making us capable of things we never dreamt of.

For O’Reilly, transformation is the hallmark of a revolutionary technology, but he echoes the others, asserting that the defining question of our era is whether we will passively adapt to technology without critically interrogating the values baked into its design, or whether we can actively take up our agency and determine what our technologies value and prioritize. Like Rushkoff’s Program or Be Programmed, O’Reilly’s title–WTF: What’s the Future and Why It’s Up to Us–foregrounds our agency and responsibility for maintaining the human–and the humanity–in an age of massive change and upheaval.

In Part II of this blog entry, I’ll outline how we might think through this question of agency and the relationship between human and technology in the context of labor, its meaning, and its automation.
