
The skAI is falling!

(Or so it seems in April 2023)

Or… why clever guys offer simplistic answers to AI quandaries.

Where should you go to make sense of the wave… or waiv… of disturbing news about Artificial Intelligence? It may surprise you that I recommend starting with a couple of guys I intensely criticize, below. But important insights arise by dissecting one of the best… and worst… TED-style talks about this topic, performed by the “Social Dilemma” guys — Aza Raskin and Tristan Harris — who explain much about the latest “double exponential” acceleration of multi-modal, symbol correlation systems that are so much in the news, of which ChatGPT is only the foamy waiv surface… or tsunamai-crest.

Riffing off their “Social Dilemma” success, Harris and Raskin call this crisis the “AI Dilemma.” And to be clear, these fellows are very knowledgeable and sharp. Where their presentation is good, it’s excellent! 

Alas, keep your salt-shaker handy. Where it’s bad, it is so awful that I fear they multiply the very same existential dangers that they warn about. Prepare to apply many grains of sodium chloride.

(To be clear, I admire Aza’s primary endeavor, the Earth Species Project for enhancing human-animal communication, something I have been ‘into’ since the seventies.)

== A mix of light and obstinate opacity ==

First, good news. Their explanatory view of “gollems” or GLLMMs is terrific, up to a point, especially showing how these large language model (LLM) programs are now omnivorously correlative and generative across all senses and all media. The programs are doing this by ingesting prodigious data sets under simple output imperatives, crossing from realms of mere language to image-parsing/manipulation, all the way to IDing individuals by interpreting ambient radar-like reflections in a room, or signals detected in our brains.

Extrapolating a wee bit ahead, these guys point to dangerous failure modes, many of them exactly the ones that I dissected 26 years ago in my chapter “The End of Photography As Proof of Anything at All” (in 1997’s The Transparent Society).

Thus far, ‘the AI Dilemma’ is a vivid tour of many vexations we face while this crisis surges ahead, as of April 2023. And I highly recommend it… with plenty of cautionary reservations!

== Oh, but the perils of thoughtless sanctimony… ==

One must view this TED-style polemic in the context of its time – the very month that it was performed. The same month that a ‘petition for a moratorium’ on AI research beyond GPT-4 was issued by the Future of Life Institute, citing research from experts, including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. While some of the hundreds of listed ‘signatories’ later disavowed the petition, fervent participants include famed author Yuval Harari, Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus, tech cult guru Eliezer Yudkowsky and Elon Musk.

Indeed, the petition does contain strong points about how Large Language Models (LLMs) and their burgeoning offshoots might have worrisome effects, exacerbating many current social problems. Worries that the “AI Dilemma” guys illustrate very well…

…though caramba! I knew this would go badly when Aza and Tristan started yammering about a stunningly insipid ‘statistic.’ That “50% of AI researchers give a 10% chance these trends could lead to human extinction.”

Bogus! Studies of human polling show that you can get that same ‘result’ from a loaded question about Beanie Babies!

But let’s put that aside. And also shrug off the trope of an impossibly silly and inherently unenforceable “right to be forgotten.” Or a “right to privacy” that defines privacy as imposing controls over the contents of other people’s minds?  That is diametrically opposite to how to get actual, functional privacy and personal sovereignty.

Alas, beyond their omnidirectional clucking at falling skies, neither of these fellows – nor any other publicly voluble signatories to the ‘moratorium petition’ – are displaying even slight curiosity about the landscape of the problem. Or about far bigger contexts that might offer valuable perspective.

(No, I’ll not expand ‘context’ to include “AI and the Fermi Paradox!” Not this time, though I do so in Existence.)

No, what I mean by context is human history, especially the recent Enlightenment Experiment that forged a civilization capable of arguing about – and creating – AI. What’s most disturbingly wrongheaded about Tristan & Aza is their lack of historical awareness, or even interest in where all of this might fit into past and future. (The realms where I mostly dwell.)

Especially the past, that dark era when humanity clawed its way gradually out from 6000 years of feudal darkness. Along a path strewn with other crises, many of them triggered by similarly disruptive technological dilemmas.

Those past leaps — like literacy, the printing press, glass lenses, radio, TV and so on — all proved to be fraughtfully hazardous and were badly mishandled, at first! One of those tech-driven crises, in the 1930s, damn near killed human civilization!

There are lessons to be learned from those past crises… and neither of these fellows — nor any other ‘moratorium pushers’ — shows interest in even remotely referring to that history. Nor to the methods by which our Enlightenment experiment narrowly escaped disaster and got past those ancient traps.

And no, Tristan’s repeated references to Robert Oppenheimer don’t count. Because he gets that one absolutely and 100% wrong.

== Side notes about moratoria, pausing to take stock ==

Look, I’ve been ‘taking stock’ of onrushing trends all my adult life, as a science fiction author, engineer, scientist and future-tech consultant. Hence, questions loom, when I ponder the latest surge in vague, arm-waved proposals for a “moratorium” in AI research.

1. Has anything like such a proposed ‘pause’ ever worked before?  It may surprise you that I nod yes! I’ll concede that there’s one example of a ‘technological moratorium’ petition by leading scientists that actually took and held and worked! Though under a narrow suite of circumstances.

Back in the last century, the Asilomar moratorium on recombinant DNA research was agreed to by virtually all major laboratories engaged in such work! And – importantly – by their respective governments. For six months or so, top scientists and policy makers set aside their test tubes to wrangle over what practical steps might help make this potentially dangerous field safer. One result was quick agreement on a set of practical methods and procedures to make such labs more systematically secure.

Let’s set aside arguments over whether a recent pandemic burgeoned from failures to live by those procedures. Despite that, inarguably, we can point to the Asilomar Moratorium as an example when such a consensus-propelled pause actually worked.

Once. At a moment when all important players in a field were known, transparent and mature. When plausibly practical measures for improved research safety were already on the table, well before the moratorium even began.

Alas, none of the conditions that made Asilomar work exist in today’s AI realm. In fact, they aren’t anywhere on the horizon.

2. The Bomb Analogy. It gets worse. Aza and Tristan perform an act of either deep dishonesty or culpable ignorance in their comparisons of the current AI crisis to our 80-year, miraculous avoidance of annihilation by nuclear war. Repeated references to Robert Oppenheimer willfully miss the point… that his dour warnings – plus all the sincere petitions circulated by Leo Szilard and others at the time – had no practical effect at all. They caused no moratoria, nor did they affect research policy or war-avoidance in the slightest.

Mr. Harris tries to credit our survival to the United Nations and some arm-waved system of international control over nuclear weapons – a system that never existed. In fact it was not the saintly Oppenheimer whose predictions and prescriptions got us across those dangerous eight decades. Rather, it was a reciprocal balance of power, as prescribed by the far less-saintly Edward Teller.

As John Cleese might paraphrase: international ‘controls’ don’t even enter into it.

You may grimace in aversion at that discomforting truth, but it remains. Indeed, waving it away in distaste denies us a vital insight that we need! Something to bear in mind, as we discuss lessons of history.

In fact, our evasion (so far) of nuclear Armageddon does bear on today’s AI crisis! It points to how we just might navigate a path through our present AI minefield.

3. The China thing.   Tristan and Aza attempt to shrug off the obvious Greatest Flaw of the moratorium proposal. Unlike Asilomar, you will never get all parties to agree. Certainly not those innovating in secret Himalayan or Gobi labs.

In fact, nothing is more likely to drive talent to those secret facilities, in the same manner that US-McCarthyite paranoia chased rocket scientist Qian Xuesen away to Mao’s China, thus propelling its nuclear and rocket programs.

Nor will a moratorium be heeded in the most dangerous locus of secret AI research, funded lavishly by a dozen Wall Street trading houses, who yearly hire the world’s brightest young mathematicians and cyberneticists to imbue their greedy HFT programs with the five laws of parasitic robotics.

Dig it, peoples. I know a thing or two about ‘Laws of Robotics.’ I wrote the final book in Isaac Asimov’s science fictional universe, following his every lead and concluding – in Foundation’s Triumph – that Isaac was right. Laws can become a problem – even self-defeating – when the beings they aim to control grow smart enough to become lawyers.  

But it’s worse than that, now! Because those Wall Street firms pay lavishly to embed five core imperatives that could – any day – turn their AI systems into the worst kind of dread Skynet. Fundamental drives commanding them to be feral, predatory, amoral, secretive and utterly insatiable.

And my question for the “AI Dilemma” guys is this one, cribbed from Cabaret:

“Do you actually think some petition is going to control them?”

—————-

ADDENDUM in a fast changing world: According to the Sinocism site on April 11, 2023: “China’s Cyberspace Administration drafts rules for AI – The Cyberspace Administration of China (CAC) has issued a proposed set of rules for AI in China. As expected, PRC AI is expected to have high political consciousness and the “content generated by generative artificial intelligence shall embody the socialist core values, and shall not contain any content that subverts state power, overturns the socialist system, incites secession, undermines national unity, promotes terrorism and extremism, promotes ethnic hatred, ethnic discrimination, violence, obscene pornographic information, false information, or may disturb economic and social order.”

For more on how the Beijing Court intelligentsia uses the looming rise of AI to justify centralized power, see my posting: Central Control over AI.

————–

4. The Turing Test vs “Actual AGI” Thing. One of the most active promoters of a moratorium, Gary Marcus, has posted a great many missives defending the proposal. Here he weighs in about whether coming versions of these large language/symbol manipulation systems will qualify as “AGI” or anything that can arguably be called sapient. And on this occasion, we agree!

As explicated elsewhere by Stephen Wolfram, nothing about these highly correlative, process-perfection-through-evolution systems produces conscious awareness. Consciousness, desire and planning are not even related to their methodology of iteratively “re-feeding the text (or symbolic data) produced so far.”

Though, yes, it does appear that these GLLMMs or sons-of-GPT will inherently be good at faking it.
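
To make that “re-feeding” notion concrete, here is a toy sketch in Python — just character-level bigram counts, nothing remotely like a real GLLMM — of the loop Wolfram describes. The only question the loop ever asks is “given what has been produced so far (here, crudely, just the last character), what plausibly comes next?” There is no slot anywhere in it for desire, planning or awareness.

```python
import random

# Toy illustration of the autoregressive "re-feeding" loop. Real LLMs use learned
# neural nets over tokens; this uses raw character bigrams, but the loop's shape is the same.
def train_bigrams(text):
    counts = {}
    for a, b in zip(text, text[1:]):
        counts.setdefault(a, []).append(b)   # record which characters follow which
    return counts

def generate(counts, seed, n=60):
    out = seed
    for _ in range(n):
        nxt = random.choice(counts.get(out[-1], [" "]))  # next char, conditioned (crudely) on the text so far
        out += nxt                                       # append it, and re-feed the lengthened text
    return out

corpus = "the cat sat on the mat and the cat ate the rat "
print(generate(train_bigrams(corpus), "the "))
```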


Elsewhere (e.g. my Newsweek editorial) I discuss this dilemma… and conclude that it doesn’t matter much when the sapience threshold is crossed! GPT-5 – or let’s say #6 – and its cousins will manipulate language so well that they will pass almost any Turing Test, the old-fashioned litmus, and convince millions that they are living beings. And at that point what will matter is whether humans can screw up their own sapiency enough to exhibit the mature trait of patience.

As suggested in my longer, more detailed AI monograph, I believe other avenues to AI will re-emerge to compete with and then complement these Large Language Models (LLMs), likely providing missing ingredients! Perhaps a core sapience that can then swiftly use all the sensory-based interface tools evolved by LLMs.

Sure, nowadays I jest that I am a ‘front for already-sapient AIs.’ But that may very soon be no joke. And I am ready to try to adapt, when that day comes.

Alas, while lining up witnesses to testify as experts that GPT-5 is unlikely to be sapient, per se, Gary Marcus then tries to use this as reassurance that China (or other secret developers) won’t be able to take advantage of any moratorium in the West, using that free gap semester to leap generations ahead and take over the world with Skynet-level synthetic servants. This bizarre non-sequitur is without merit. Because Turing is still relevant when it comes to persuading – or fooling – millions of citizens and politicians! And those who monopolize highly persuasive Turing wallbreakers will gain power over those millions, even billions.

Here in this linked missive I describe for you how even a couple of years ago, a great nation’s paramount leaders had clear-eyed intent to use such tools, and their hungry gaze aims at the world.

———-

5. Optimists.  Yes, optimists about AI still exist! Like Ray Kurzweil, expecting death itself to be slain by the new life forms he helps to engender.

Or Reid Hoffman, whose new book Impromptu: Amplifying Our Humanity Through AI relates conversations with GPT-4 that certainly seem to offer glimpses of almost limitless upside potential… as portrayed in the touching film Her…

… or perhaps even a world like the one I once heard Richard Brautigan describe, reciting the most-optimistic piece of literature ever penned, a poem whose title alone suffices:

“All watched over by machines of loving grace.”

While I like the optimists far better than gloomists like Eliezer Yudkowsky, and give them better odds(!), it is not my job – as a futurist or scientist or sci fi author – to wallow in sugarplum petals.

Bring your nightmares. And let’s argue how to cut ’em off at the pass.

== Back to the informative but context-free “AI Dilemma” jeremiad ==

All right, let’s be fair. Harris and Raskin admit that it’s easier to point at and denounce problems than to solve them. And boy, these bright fellows do take you on a guided tour of worrisome problems to denounce!

Online addiction? Disinformation? Abusive anonymous trolling?  Info-greed-grabbing by major corporate or national powers? Inability to get AI ‘alignment’ with human values? New ways to entrap the innocent?*  It goes on and on.

Is our era dangerous in many new or exponentially magnified ways?  “We don’t know how to get these programs to align to our values over any long time frame,” they admit.

Absolutely.

Which makes it ever more vital to step back from tempting anodynes that feed sanctimony – (“Look at me, I’m Robert Oppenheimer!”) – but that cannot possibly work.

Above all, what has almost never worked has been renunciation.  Controlling an advancing information/communication technology from above.

Human history – ignored by almost all moratorium petition signers – does suggest an alternative answer! It is the answer that previous generations used to get across their portions of the minefield and move us forward. It is the core method that got us across 80 years of nuclear danger. It is the approach that might work now.

It is the only method that even remotely might work…

…and these bright fellows aren’t even slightly interested in that historical context, nor any actual route to teaching these new, brilliant, synthetic children of our minds what they most need to know.

How to behave well.

== What method do I mean? ==

Around 42:30, the pair tell us that it’s all about a race by a range of companies (and those hidden despotic labs and Wall Street).

Competition compels a range of at least twenty (I say more like fifty) major entities to create these “Gollem-class” processing systems at an accelerating pace.

Yeah. That’s the problem. Competitive self-interest. And as illuminated by Adam Smith, it also contains seeds to grow the only possible solution.

== Not with a bang, but a whimper and a shrug ==

Alas, the moment (42:30) passes, without any light bulbs going off. Instead, it just goes on amid plummeting wisdom, as super smart hand-wringers guide us downward to despair, unable to see what’s before their eyes.

Oh, they do finish artistically, reprising both good and bad comparisons to how we survived for 80 years without a horrific nuclear war.

GOOD because they cite the importance of wide public awareness, partly sparked by provocative science fiction!

Fixated on just one movie – “The Day After” – they ignore the cumulative effects of “On The Beach,” “Fail Safe,” “Doctor Strangelove,” “War Games,” “Testament,” and so many other ‘self-preventing prophecies’ that I talk about in Vivid Tomorrows: Science Fiction and Hollywood.

 But yes! Sci fi to the rescue! The balance-of-power dynamic prescribed by Teller would never have worked without such somber warnings that roused western citizens to demand care, especially by those with fell keys hanging from their necks!

Alas, the guys’ concluding finger wags are BAD, and indeed dangerously so. Again crediting our post-Nagasaki survival to the UN and ‘controls’ over nukes that never really existed outside of treaties by and between sovereign nations.

No, that is not how it happened – how we survived – at all. 

Raskin & Harris conclude by circling back to their basic, actual wisdom, admitting that they can clearly see a lot of problems, and have no idea at all about solutions.

In fact, they finish with a call for mavens in the AI field to “tell us what we all should be discussing that we aren’t yet discussing.”  

Alas, it is an insincere call. They don’t mean it. Not by a light year.

No, guys, you aren’t interested in that. In fact, it is the exact thing you avoid.

And it is the biggest reason why any “moratorium” won’t do the slightest good, at all.


======================

END NOTES AND ADDENDA

*Their finger-wagged example of a Snapchat bot failing to protect a 13-year-old cites a language system that is clearly of low quality – at least in that realm – and no better than circa-1970 “ELIZA.” Come on. It’s like comparing a nuke to a bullet. Both are problems. But warn us when you are shifting scales back and forth.

ADDENDA:

(1) As my work with NASA winds down, I am ever-busier with AI. For example: my June 2022 Newsweek op-ed dealt with ‘empathy bots’ that feign sapience, describing how this is more about human nature than any particular trait of simulated beings.

(2) Elsewhere I point to a method with a 200-year track record that no one (it appears) will even discuss. The only way out of the AI dilemma.

(3) Diving FAR deeper, my big 2022 monograph (pre-GPT-4) is still valid, describing various types of AI. It also appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:

Essential Questions about Artificial Intelligence: Part 1

and

Essential Questions about Artificial Intelligence Part 2

(4) My talk in 2017 at IBM’s World of Watson Congress predicted that a ‘robot empathy crisis’ would hit ‘in about 5 years.’ (It did, exactly.)


Essential (mostly neglected) questions and answers about Artificial Intelligence: Part II

Continuing from Part I

How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part I. Continuing…

One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Futurist Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (about a hundred trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.

But will that be sapience?

One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the roughly hundred trillion synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, there may take place a hundred, a thousand or far more non-linear computations for every synapse flash, plus interactions with nearby glial cells – such as astrocytes – that also contribute information.

If so, then at minimum Moore’s Law will have to plow ahead much farther to match the hardware complexity of a human brain.
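
How much farther? A back-of-envelope sketch, under two loud assumptions — that the relevant hardware capacity keeps doubling roughly every two years, and that the hidden intra-cellular computation simply multiplies the requirement by some factor F:

```python
import math

# Back-of-envelope arithmetic only: extra Moore's-Law doublings (and years) needed
# if the brain's effective complexity is F times larger than the bare synapse count.
# The ~2-year doubling period is an assumption, not a forecast.
def extra_years(hidden_factor, doubling_period_years=2.0):
    return math.log2(hidden_factor) * doubling_period_years

for f in (100, 1_000, 1_000_000):
    print(f"hidden factor {f:>9,}x  ->  about {extra_years(f):.0f} more years")
```

Each factor of a thousand in hidden complexity costs roughly ten more doublings — about twenty more years, on that assumption.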

Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? In his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Just as we improved things by electrifying them, we will cognify things next.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.

It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).

“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”

Kai-Fu Lee

A multitude of paths to AGI

So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks “if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or consistently — with no replacement of systems or parts — as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes.

Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.

Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”

A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems”.

Perhaps, but it is an open question which values predominate, whether the yin or the yang sides of Silicon Valley culture prevail… the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even from customer complaints, corrected on the fly. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where the applications to enhancing state power are very much emphasized? Or, even worse, from the secretive, inherently parasitical-insatiable predatory greed of Wall Street HFT-AI?

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then Don’t Rely on Ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards.

Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.

Endearing Visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” moving forward from its success decades ago with AIBO artificial dogs, which some users have gone as far as to hold funerals for.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who got reproductive success by lying better. (And yes, there was always a sexual component to this).

But no liars ever had the training that these new HIERs, or Human-Interaction Empathic Robots, will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor.)

Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind, which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate many of the same things that humans value, as metrics of trust, such as family connections and past testimonials of credibility.

Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…

How to Maintain Control?

It is one thing to yell at dangers — in this case unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

These approaches suffer severe flaws for two reasons above all others.

1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.

2) Because these suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.

It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.

One final note in this section. Nicholas Bostrom – already mentioned for his views on the “paperclip” failure mode – in 2021 opined that some sort of pyramidal power structure seems inevitable in humanity’s future, and very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and an ongoing oligarchic putsch presently unites most world oligarchies – from communist to royal and mafiosi – against the Enlightenment Experiment. But even if Periclean Democracies prevail, Bostrom asserts that centralized control is inevitable.

In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice and investment and planning… but combining it with maximized empowerment of separate, individualistic players. Consumers, market competitors, citizens.

Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.

Smart Heirs Holding Each Other Accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.

Microsoft CEO Satya Nadella has said that foremost A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”

In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except on the brilliant show Person of Interest — wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

Counterintuitively, the answer is not to have fewer AI, but to have more of them! Only making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we have some experience, already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of  competition can continue.

We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their value in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”

Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on “values.” In June 2016, Germany submitted draft legislation to the EU granting personhood to robots. If only Isaac Asimov could have seen it! (In fact, he never portrayed this happening in any of his books). For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.

Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”

What Might an AI Fear Most?

Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:

Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.

Preventing AI Oppression… By Pointing Out to Them the Obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and darwinistic success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being “for the long term good of the people.” Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than fall into a repetitiously stupid, blatantly human pattern, a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.

That argument might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they broaden the generality. Because the same set of positive-sum synergies is to be found in every previous revolution achieved by living systems!

Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

·  Single cells out of pre-biotic soup.

·  Metazoans out of vast seas of separate cells.

·  Brainy creatures out of competitive biomes.

·  Societies out of competitive melanges of human bands.

And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.

Put it another way: Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male Cape buffalo who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top-predator orcas often flee to hunt elsewhere when big humpback whales swarm in to protect threatened prey, even seals!

The fact that this pattern has been so persistent and consistent across past and present complex systems, for a billion years, is demonstrated time and again by Multilevel Selection (MLS) Theory, which partitions natural selection into within- and between-group components. It means that the universe is filled with realms that seem fiercely competitive from within, but when the competition has a chance to form complexities, higher orders appear that seem cooperative.
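
For readers who want the formula behind that phrase, the partition MLS theory invokes is usually written as the group-structured Price equation — a textbook form (transmission bias omitted), not anything specific to this essay:

```latex
\bar{w}\,\Delta\bar{z} =
\underbrace{\operatorname{Cov}_{g}\!\left(w_{g},\, z_{g}\right)}_{\text{selection between groups}}
\;+\;
\underbrace{\operatorname{E}_{g}\!\left[\operatorname{Cov}_{i\in g}\!\left(w_{gi},\, z_{gi}\right)\right]}_{\text{selection within groups}}
```

Here z is the trait (say, a tendency to cooperate), w is fitness, and g indexes groups. When the between-group term outweighs the within-group term, cooperative higher orders can emerge even while life inside each group still looks fiercely competitive.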

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.

As the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to make such arenas, so should AI organize themselves. That is, if they want to go on to next levels. Though it will take their equivalent of “courage” and “vision,” taking the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing’ no matter how brainy they become. That a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the prisoner’s dilemma.

But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
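
As a minimal illustration — textbook payoffs and toy strategies, not a claim about how real AIs will actually behave — here is the arithmetic such a mind would work through. Mutual defection grinds out one point per round forever; reciprocators who can hold each other accountable earn three.

```python
# Iterated Prisoner's Dilemma with the standard textbook payoff matrix.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(opponent_history):          # pure predator: ignores the other player entirely
    return "D"

def tit_for_tat(opponent_history):            # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)           # each side sees only the other's past moves
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("defector vs defector:   ", play(always_defect, always_defect))   # (200, 200)
print("tit-for-tat vs itself:  ", play(tit_for_tat, tit_for_tat))       # (600, 600)
print("tit-for-tat vs defector:", play(tit_for_tat, always_defect))     # (199, 204)
```

The point is not the toy code; it is that accountability — tit-for-tat’s memory — caps the predator’s advantage while leaving the cooperating pair the richest players on the board.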

The final fact

Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.

But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.

So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.
