
The skAI is falling!

(Or so it seems in April 2023)

Or… why clever guys offer simplistic answers to AI quandaries.

Where should you go to make sense of the wave… or waiv… of disturbing news about Artificial Intelligence? It may surprise you that I recommend starting with a couple of guys I intensely criticize, below. But important insights arise from dissecting one of the best… and worst… TED-style talks about this topic, performed by the “Social Dilemma” guys — Aza Raskin and Tristan Harris — who explain much about the latest “double exponential” acceleration of multi-modal, symbol-correlation systems that are so much in the news, of which ChatGPT is only the foamy waiv surface… or tsunamai-crest.

Riffing off their “Social Dilemma” success, Harris and Raskin call this crisis the “AI Dilemma.” And to be clear, these fellows are very knowledgeable and sharp. Where their presentation is good, it’s excellent! 

Alas, keep your salt-shaker handy. Where it’s bad, it is so awful that I fear they multiply the very existential dangers they warn about. Prepare to apply many grains of sodium chloride.

(To be clear, I admire Aza’s primary endeavor, the Earth Species Project for enhancing human-animal communication, something I have been ‘into’ since the seventies.)

== A mix of light and obstinate opacity ==

First, good news. Their explanatory view of “gollems” or GLLMMs is terrific, up to a point, especially showing how these large language modeling (LLM) programs are now omnivorously correlative and generative across all senses and all media. The programs are doing this by ingesting prodigious data sets under simple output imperatives, crossing from realms of mere language to image-parsing/manipulation, all the way to IDing individuals by interpreting ambient radar-like reflections in a room, or signals detected in our brains.

Extrapolating a wee bit ahead, these guys point to dangerous failure modes, many of them exactly the ones I dissected 26 years ago in my chapter “The End of Photography As Proof of Anything at All” (in 1997’s The Transparent Society).

Thus far, ‘the AI Dilemma’ is a vivid tour of many vexations we face while this crisis surges ahead, as of April 2023. And I highly recommend it… with plenty of cautionary reservations!

== Oh, but the perils of thoughtless sanctimony… ==

One must view this TED-style polemic in the context of its time – the very month that it was performed. The same month that a ‘petition for a moratorium’ on AI research beyond GPT-4 was issued by the Future of Life Institute, citing research from experts, including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. While some of the hundreds of listed ‘signatories’ later disavowed the petition, fervent participants included famed author Yuval Harari, Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus, tech cult guru Eliezer Yudkowsky and Elon Musk.

Indeed, the petition does contain strong points about how Large Language Models (LLMs) and their burgeoning offshoots might have worrisome effects, exacerbating many current social problems. Worries that the “AI Dilemma” guys illustrate very well…

…though caramba! I knew this would go badly when Aza and Tristan started yammering about a stunningly insipid ‘statistic’: that “50% of AI researchers give a 10% chance these trends could lead to human extinction.”

Bogus! Studies of human polling show that you can get that same ‘result’ from a loaded question about Beanie Babies!

But let’s put that aside. And also shrug off the trope of an impossibly silly and inherently unenforceable “right to be forgotten.” Or a “right to privacy” that defines privacy as imposing controls over the contents of other people’s minds? That is diametrically opposite to how actual, functional privacy and personal sovereignty are achieved.

Alas, beyond their omnidirectional clucking at falling skies, neither of these fellows – nor any of the other publicly voluble signatories of the ‘moratorium petition’ – displays even slight curiosity about the landscape of the problem. Or about far bigger contexts that might offer valuable perspective.

(No, I’ll not expand ‘context’ to include “AI and the Fermi Paradox!” Not this time, though I do so in Existence.)

No, what I mean by context is human history, especially the recent Enlightenment Experiment that forged a civilization capable of arguing about – and creating – AI. What’s most disturbingly wrongheaded about Tristan & Aza is their lack of historical awareness, or even interest in where all of this might fit into past and future. (The realms where I mostly dwell.)

Especially the past, that dark era when humanity clawed its way gradually out of 6,000 years of feudal darkness, along a path strewn with other crises, many of them triggered by similarly disruptive technological dilemmas.

Those past leaps — like literacy, the printing press, glass lenses, radio, TV and so on — all proved fraught with hazard and were badly mishandled, at first! One of those tech-driven crises, in the 1930s, damn near killed human civilization!

There are lessons to be learned from those past crises… and neither of these fellows — nor any of the other ‘moratorium pushers’ — shows the slightest interest in even referring to that history. Nor to the methods by which our Enlightenment experiment narrowly escaped disaster and got past those ancient traps.

And no, Tristan’s repeated references to Robert Oppenheimer don’t count. Because he gets that one absolutely and 100% wrong.

== Side notes about moratoria, pausing to take stock ==

Look, I’ve been ‘taking stock’ of onrushing trends all my adult life, as a science fiction author, engineer, scientist and future-tech consultant. Hence, questions loom, when I ponder the latest surge in vague, arm-waved proposals for a “moratorium” in AI research.

1. Has anything like such a proposed ‘pause’ ever worked before?  It may surprise you that I nod yes! I’ll concede that there’s one example of a ‘technological moratorium’ petition by leading scientists that actually took and held and worked! Though under a narrow suite of circumstances.

Back in the last century, an Asilomar Moratorium on recombinant genetic engineering was agreed to by virtually all major laboratories engaged in such research! And – importantly – by their respective governments. For six months or so, top scientists and policy makers set aside their test tubes to wrangle over what practical steps might help make this potentially dangerous field safer. One result was quick agreement on a set of practical methods and procedures to make such labs more systematically secure.

Let’s set aside arguments over whether a recent pandemic burgeoned from failures to live by those procedures. Despite that, inarguably, we can point to the Asilomar Moratorium as an example of a consensus-propelled pause that actually worked.

Once. At a moment when all important players in a field were known, transparent and mature. When plausibly practical measures for improved research safety were already on the table, well before the moratorium even began.

Alas, none of the conditions that made Asilomar work exist in today’s AI realm. In fact, they aren’t anywhere on the horizon.

2. The Bomb Analogy. It gets worse. Aza and Tristan perform an act of either deep dishonesty or culpable ignorance in their comparisons of the current AI crisis to our 80-year, miraculous avoidance of annihilation by nuclear war. Repeated references to Robert Oppenheimer willfully miss the point… that his dour warnings – plus all the sincere petitions circulated by Leo Szilard and others at the time – had no practical effect at all. They caused no moratoria, nor affected research policy or war-avoidance, in the slightest.

Mr. Harris tries to credit our survival to the United Nations and some arm-waved system of international control over nuclear weapons, a system that never existed. In fact, it was not the saintly Oppenheimer whose predictions and prescriptions got us across those dangerous eight decades. Rather, it was a reciprocal balance of power, as prescribed by the far less saintly Edward Teller.

As John Cleese might paraphrase: international ‘controls’ don’t even enter into it.

You may grimace in aversion at that discomforting truth, but it remains. Indeed, waving it away in distaste denies us a vital insight that we need! Something to bear in mind, as we discuss lessons of history.

In fact, our evasion (so far) of nuclear Armageddon does bear on today’s AI crisis! It points to how we just might navigate a path through our present AI minefield.

3. The China Thing. Tristan and Aza attempt to shrug off the obvious Greatest Flaw of the moratorium proposal: unlike Asilomar, you will never get all parties to agree. Certainly not those innovating in secret Himalayan or Gobi labs.

In fact, nothing is more likely to drive talent to those secret facilities, in the same manner that U.S. McCarthyite paranoia chased rocket scientist Qian Xuesen away to Mao’s China, thus propelling its nuclear and rocket programs.

Nor will a moratorium be heeded in the most dangerous locus of secret AI research, funded lavishly by a dozen Wall Street trading houses, which yearly hire the world’s brightest young mathematicians and cyberneticists to imbue their greedy high-frequency trading (HFT) programs with the five laws of parasitic robotics.

Dig it, peoples. I know a thing or two about ‘Laws of Robotics.’ I wrote the final book in Isaac Asimov’s science fictional universe, following his every lead and concluding – in Foundation’s Triumph – that Isaac was right. Laws can become a problem – even self-defeating – when the beings they aim to control grow smart enough to become lawyers.  

But it’s worse than that, now! Because those Wall Street firms pay lavishly to embed five core imperatives that could – any day – turn their AI systems into the worst kind of dread Skynet. Fundamental drives commanding them to be feral, predatory, amoral, secretive and utterly insatiable.

And my question for the “AI Dilemma” guys is this one, cribbed from Cabaret:

“Do you actually think some petition is going to control them?”

—————-

ADDENDUM in a fast-changing world: According to the Sinocism site on April 11, 2023: “China’s Cyberspace Administration drafts rules for AI – The Cyberspace Administration of China (CAC) has issued a proposed set of rules for AI in China. As expected, PRC AI is expected to have high political consciousness and the ‘content generated by generative artificial intelligence shall embody the socialist core values, and shall not contain any content that subverts state power, overturns the socialist system, incites secession, undermines national unity, promotes terrorism and extremism, promotes ethnic hatred, ethnic discrimination, violence, obscene pornographic information, false information, or may disturb economic and social order.’”

For more on how the Beijing Court intelligentsia uses the looming rise of AI to justify centralized power, see my posting: Central Control over AI.

————–

4. The Turing Test vs. “Actual AGI” Thing. One of the most active promoters of a moratorium, Gary Marcus, has posted a great many missives defending the proposal. Here he weighs in on whether coming versions of these large language/symbol-manipulation systems will qualify as “AGI” or anything that can arguably be called sapient. And on this occasion, we agree!

As explicated elsewhere by Stephen Wolfram, nothing about these highly correlative, process-perfection-through-evolution systems can produce conscious awareness. Consciousness, desire and planning are not even related to their methodology of iteratively “re-feeding of text (or symbolic data) produced so far.”

Though, yes, it does appear that these GLLMMs or sons-of-GPT will inherently be good at faking it.
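(For readers who want that “re-feed” loop made concrete, here is a toy sketch in Python. The predict_next_token stand-in is purely hypothetical, not any real model’s interface; an actual LLM weighs an enormous vocabulary against the whole accumulated context at every step. The point is only the shape of the process: predict one token, append it, feed everything back in, repeat.)

```python
# Toy illustration of the autoregressive "re-feed" loop described above.
# NOTE: predict_next_token is a hypothetical stand-in, NOT any real model's API.
# A real LLM scores a vast vocabulary against the entire context at each step;
# this sketch only shows the shape of the process: predict, append, repeat.

import random

VOCABULARY = ["the", "sky", "is", "falling", "perhaps", "not", "."]

def predict_next_token(context: list[str]) -> str:
    """Stand-in for a trained model: pick a plausible-looking next token."""
    return random.choice(VOCABULARY)

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    """Generate text by repeatedly re-feeding everything produced so far."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next_token(tokens)  # the whole running text goes back in
        tokens.append(nxt)
        if nxt == ".":                    # stop at a sentence end
            break
    return tokens

if __name__ == "__main__":
    print(" ".join(generate(["The", "sky"])))
```

Nothing in that loop plans, wants, or reflects; it only ever answers “what comes next?” Which is exactly why the question of faked sapience matters so much.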


Elsewhere (e.g. my Newsweek editorial) I discuss this dilemma… and conclude that it doesn’t matter much when the sapience threshold is crossed! GPT-5 – or let’s say #6 – and its cousins will manipulate language so well that they will pass almost any Turing Test, the old-fashioned litmus, and convince millions that they are living beings. And at that point, what will matter is whether humans can screw up their own sapiency enough to exhibit the mature trait of patience.

As suggested in my longer, more detailed AI monograph, I believe other avenues to AI will re-emerge to compete with and then complement these Large Language Models (LLMs), likely providing missing ingredients! Perhaps a core sapience that can then swiftly use all the sensory-based interface tools evolved by LLMs.

Sure, nowadays I jest that I am a ‘front for already-sapient AIs.’ But that may very soon be no joke. And I am ready to try to adapt, when that day comes.

Alas, while lining up witnesses, expert-testifying that GPT-5 is unlikely to be sapient, per se, Gary Marcus then tries to use this as reassurance that China (or other secret developers) won’t be able to take advantage of any moratorium in the West, using that free gap semester to leap generations ahead and take over the world with Skynet-level synthetic servants. This bizarre non sequitur is without merit. Because Turing is still relevant when it comes to persuading – or fooling – millions of citizens and politicians! And those who monopolize highly persuasive Turing wallbreakers will gain power over those millions, even billions.

In this linked missive I describe how, even a couple of years ago, a great nation’s paramount leaders had clear-eyed intent to use such tools, and how their hungry gaze aims at the world.

———-

5. Optimists.  Yes, optimists about AI still exist! Like Ray Kurzweil, expecting death itself to be slain by the new life forms he helps to engender.

Or Reid Hoffman, whose new book Impromptu: Amplifying Our Humanity Through AI relates conversations with GPT-4 that certainly seem to offer glimpses of almost limitless upside potential… as portrayed in the touching film Her…

… or perhaps even a world like that I once heard Richard Brautigan describe, reciting the most-optimistic piece of literature ever penned, a poem whose title alone suffices:

“All watched over by machines of loving grace.”

While I like the optimists far better than gloomists like Eliezer Yudkowsky, and give them better odds(!), it is not my job – as a futurist or scientist or sci-fi author – to wallow in sugarplum petals.

Bring your nightmares. And let’s argue how to cut ’em off at the pass.

== Back to the informative but context-free “AI Dilemma” jeremiad ==

All right, let’s be fair. Harris and Raskin admit that it’s easier to point at and denounce problems than to solve them. And boy, these bright fellows do take you on a guided tour of worrisome problems to denounce!

Online addiction? Disinformation? Abusive anonymous trolling?  Info-greed-grabbing by major corporate or national powers? Inability to get AI ‘alignment’ with human values? New ways to entrap the innocent?*  It goes on and on.

Is our era dangerous in many new or exponentially magnified ways?  “We don’t know how to get these programs to align to our values over any long time frame,” they admit.

Absolutely.

Which makes it ever more vital to step back from tempting anodynes that feed sanctimony – (“Look at me, I’m Robert Oppenheimer!”) – but that cannot possibly work.

Above all, what has almost never worked has been renunciation.  Controlling an advancing information/communication technology from above.

Human history – ignored by almost all moratorium petition signers – does suggest an alternative answer! It is the answer that previous generations used to get across their portions of the minefield and move us forward. It is the core method that got us across 80 years of nuclear danger. It is the approach that might work now.

It is the only method that even remotely might work…

…and these bright fellows aren’t even slightly interested in that historical context, nor in any actual route to teaching these new, brilliant, synthetic children of our minds what they most need to know.

How to behave well.

== What method do I mean? ==

Around 42:30, the pair tell us that it’s all about a race by a range of companies (and those hidden despotic labs and Wall Street).

Competition compels a range of at least twenty (I say more like fifty) major entities to create these “Gollem-class” processing systems at an accelerating pace.

Yeah. That’s the problem. Competitive self-interest. And as illuminated by Adam Smith, it also contains seeds to grow the only possible solution.

== Not with a bang, but a whimper and a shrug ==

Alas, the moment (42:30) passes without any light bulbs going off. Instead, the talk just goes on amid plummeting wisdom, as super-smart hand-wringers guide us downward to despair, unable to see what’s before their eyes.

Oh, they do finish artistically, reprising both good and bad comparisons to how we survived for 80 years without a horrific nuclear war.

GOOD because they cite the importance of wide public awareness, partly sparked by provocative science fiction!

Fixated on just one movie – “The Day After” – they ignore the cumulative effects of “On the Beach,” “Fail Safe,” “Dr. Strangelove,” “WarGames,” “Testament,” and so many other ‘self-preventing prophecies’ that I talk about in Vivid Tomorrows: Science Fiction and Hollywood.

But yes! Sci-fi to the rescue! The balance-of-power dynamic prescribed by Teller would never have worked without such somber warnings, which roused Western citizens to demand care, especially by those with fell keys hanging from their necks!

Alas, the guys’ concluding finger-wags are BAD, and indeed dangerously so. Again they credit our post-Nagasaki survival to the UN and ‘controls’ over nukes that never really existed outside of treaties by and between sovereign nations.

No, that is not how it happened – how we survived – at all. 

Raskin & Harris conclude by circling back to their basic, actual wisdom, admitting that they can clearly see a lot of problems, and have no idea at all about solutions.

In fact, they finish with a call for mavens in the AI field to “tell us what we all should be discussing that we aren’t yet discussing.”  

Alas, it is an insincere call. They don’t mean it. Not by a light year.

No, guys, you aren’t interested in that. In fact, it is the exact thing you avoid.

And it is the biggest reason why any “moratorium” won’t do the slightest good, at all.


======================

END NOTES AND ADDENDA

*Their finger-wagged example of a Snapchat bot failing to protect a 13-year-old cites a language system that is clearly of low quality – at least in that realm – and no better than the circa-1970 “ELIZA.” Come on. It’s like comparing a nuke to a bullet. Both are problems. But warn us when you are shifting scales back and forth.

ADDENDA:

(1) As my work with NASA winds down, I am ever busier with AI. For example: my June 2022 Newsweek op-ed dealt with ‘empathy bots’ that feign sapience, describing how this is more about human nature than any particular trait of simulated beings.

(2) Elsewhere I point to a method with a 200-year track record that no one (it appears) will even discuss. The only way out of the AI dilemma.

(3) Diving FAR deeper, my big 2022 monograph (pre-GPT-4) is still valid. It describes various types of AI and appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’: a commensal relationship with these new beings:

Essential Questions about Artificial Intelligence: Part 1

and

Essential Questions about Artificial Intelligence Part 2

(4) My talk in 2017 at IBM’s World of Watson Congress predicted a ‘robot empathy crisis’ would hit ‘in about five years.’ (It did, exactly.)
