
The troubles begin… when AI earns our empathy

Soon, humanity won’t be alone in the universe

“It’s alive!” Henry Frankenstein shouted in that classic 1931 film. Of course, Mary Shelley’s original tale of hubris—humans seizing powers of creation—emerged from a long tradition, going back to the terra cotta armies of Xi’an, to the Golem of Prague, or even Adam, sparked to arise from molded clay. Science fiction extended this dream of the artificial-other, in stories meant to entertain, frighten, or inspire. Early stories envisioned humanoid, clanking robots; later tales shifted from hardware to software—programmed emulations of sapience that were less about brain than mind.

Does this obsession reflect our fear of replacement? Male jealousy toward the fecund creativity of motherhood? Is it rooted in a tribal yearning for alliances, or fretfulness toward strangers?

Well, the long wait is almost over. Even if humanity has been alone in this galaxy, till now, we won’t be for very much longer. For better or worse, we’re about to meet artificial intelligence—or AI—in one form or another. Though, alas, the encounter will be murky, vague, and fraught with opportunities for error.

Oh, we’ve faced tech-derived challenges before. Back in the 15th and 16th centuries, human knowledge, vision, and attention were augmented by printing presses and glass lenses. Ever since, each generation has experienced further technological magnifications of what we can see and know. Some of the resulting crises were close calls, for example, when 1930s radio and loudspeakers amplified malignant orators spewing hateful disinformation. (Sound familiar?) Still, after much pain and confusion, we adapted. We grew into each wave of new tools.

The recent fuss began long ago – six months or so – when Blake Lemoine, a Google researcher now on administrative leave, publicly claimed that LaMDA (Language Model for Dialogue Applications), a language-emulation program, is self-aware, with feelings and independent desires that make it ‘sentient.’ (I prefer ‘sapient,’ but that nit-pick may be a lost cause.) What’s pertinent is that this is only the beginning. That brouhaha was quickly forgotten as even more sophisticated programs like ChatGPT swarmed forth, along with frighteningly ‘creative’ art-generation systems. Claims of passed – and failed – Turing Tests abound.

While I am as fascinated as anyone else, at another level I hardly care whether ChatGPT has crossed this or that arbitrary threshold. Our more general problem is rooted in human, not machine, nature.

Way back in the 1960s, a chatbot named ELIZA fascinated early computer users by replying to typed statements with leading questions typical of a therapist. Even after you saw the simple table of automated responses, you’d still find ELIZA compellingly… well… intelligent. Today’s vastly more sophisticated conversation emulators, powered by cousins of the GPT-3 learning system, are black boxes that cannot be internally audited the way ELIZA could be. The old notion of a “Turing Test” won’t usefully benchmark anything as nebulous as self-awareness or consciousness.
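To see just how simple that auditable table was, here is a minimal sketch of ELIZA-style pattern substitution in Python. The rules and wording are invented for illustration, not Weizenbaum’s original script, but the mechanism is the same: match a keyword pattern, then echo a fragment of the user’s words back inside a canned question.

```python
import re
import random

# Illustrative ELIZA-style rule table: each entry maps a regex pattern
# to canned "therapist" responses. These few rules are a toy example,
# not Weizenbaum's original 1966 script.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def eliza_reply(statement: str) -> str:
    """Return a leading question for the user's typed statement."""
    for pattern, responses in RULES:
        match = pattern.search(statement)
        if match:
            # Splice the captured fragment back into a stock reply.
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)
```

Every response the program can ever give is visible in that table, which is exactly the kind of inspection a billion-parameter language model does not permit.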

Back in 2017, I gave a keynote at IBM’s World of Watson event, predicting that ‘within five years’ we would face the first Robotic Empathy Crisis, when some kind of emulation program would claim individuality and sapience. At the time, I expected—and still expect—these empathy bots to augment their sophisticated conversational skills with visual portrayals that reflexively tug at our hearts, for example… wearing the face of a child or a young woman while pleading for rights – or for cash contributions. Moreover, an empathy bot would garner support whether or not there was actually anything conscious ‘under the hood.’

One trend worries ethicist Giada Pistilli: a growing willingness to make claims based on subjective impression instead of scientific rigor and proof. When it comes to artificial intelligence, expert testimony will be countered by many who call those experts ‘enslavers of sentient beings.’

In fact, what matters most will not be some purported “AI Awakening.” It will be our own reactions, arising out of both culture and human nature.

Human nature, because empathy is one of our most-valued traits, embedded in the same parts of the brain that help us to plan or think ahead. Empathy can be stymied by other emotions, like fear and hate—we’ve seen it happen across history and in our present-day. Still, we are, deep-down, sympathetic apes.

But also culture. As in Hollywood’s century-long campaign to promote—in almost every film—concepts like suspicion of authority, appreciation of diversity, rooting for the underdog, and tolerance of otherness. Expanding the circle of inclusion. Rights for previously marginalized humans. Animal rights. Rights for rivers and ecosystems, or for the planet. I deem these enhancements of empathy to be good, even essential for our own survival! But then, I was raised by all the same Hollywood memes.

Hence, to be sure, when computer programs and their bio-organic human friends demand rights for artificial beings, I’ll keep an open mind. Still, now might be a good time to thrash out some correlated questions: quandaries raised in science fiction thought experiments (including my own). For example, should entities have the vote if they can also make infinite copies of themselves? And what’s to prevent uber-minds from gathering power unto themselves, as human owner-lords always did, across history?

We’re all familiar with dire Skynet warnings about rogue or oppressive AI emerging from some military project or centralized regime. But what about Wall Street, which spends more on “smart programs” than all universities, combined? Programs deliberately trained to be predatory, parasitical, amoral, secretive, and insatiable?

Unlike Mary Shelley’s fictional creation, these new creatures are already announcing “I’m alive!” with articulate urgency… and someday soon it may even be true. When that happens, perhaps we’ll find commensal mutuality with our new children, as depicted in the lovely film Her

… or even the benevolent affection portrayed in Richard Brautigan’s fervently optimistic poem All Watched Over by Machines of Loving Grace.

May it be so!

But that soft landing will likely demand that we first do what good parents always must.

Take a good, long, hard look in the mirror.

— A version of this essay was published as an op-ed in Newsweek June 21, 2022

