What constraints are needed to prevent AI from becoming a dystopian threat to humanity?

It is, of course, wise and beneficial to peer ahead for potential dangers and problems — one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions to prevent it.

Take the plausibility of malignant Artificial Intelligence, remarked upon recently by luminaries ranging from Stephen Hawking to Elon Musk to Francis Fukuyama (Our Posthuman Future: Consequences of the Biotechnology Revolution). Some warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species, or at least its relevance on the cosmic stage, a potentiality evoked in many a lurid Hollywood film.

Nick Bostrom takes an in-depth look at the future of augmented humans and a revolution in machine intelligence in his recent book, Superintelligence: Paths, Dangers, Strategies, charting possible hazards, failure modes and spectacular benefits as machines match and then exceed human levels of intelligence.

Taking a middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research, and its products, accountable by maximizing transparency.

Indeed, my own novels contain some dire warnings about failure modes with our new, cybernetic children. For other chilling scenarios of AI gone wrong, sample science fiction tales such as Isaac Asimov's I, Robot, Harlan Ellison's I Have No Mouth, and I Must Scream, Daniel Wilson's Robopocalypse, William Hertling's Avogadro Corp, Ramez Naam's Nexus, and James Hogan's The Two Faces of Tomorrow. And of course, a multitude of sci-fi films and TV shows, such as Battlestar Galactica, Terminator, or The Transformers, depict dark future scenarios.

== What can we do? ==

Considering the dangers of AI, there is a tendency to offer the same prescriptions, over and over again:

1) Renunciation: we must step back from innovation in AI (or other problematic tech). This might work in a despotism… indeed, 99%+ of human societies were highly conservative and skeptical of "innovation." (Except when it came to weaponry.) Our own civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we'll choose that path without being driven to it by some awful trauma.

2) Tight regulation: there are proposals to closely monitor bio, nano and cyber developments so that they, for example, only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. In certain areas, like nano, there is a real opportunity here. Again, though, in the most general sense this won't happen short of trauma.

3) Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example, or imbuing robotic minds with Isaac Asimov's famous "Three Laws of Robotics." Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers. See Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
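As a toy illustration of option 3, here is a minimal sketch of such a hard-wired limit, in Python with invented names (no real robotics or nanotech API is implied): a replication budget that is fixed at construction and can only shrink. Its weakness is exactly the one noted above: it binds only if every manufacturer bothers to include it.

```python
# Hypothetical sketch of "fierce internal programming": a hard-coded
# replication budget. All names here are invented for illustration.

class ReplicatingMachine:
    def __init__(self, replication_budget: int):
        # The budget is fixed at construction and only ever decreases.
        self._budget = replication_budget

    def replicate(self) -> "ReplicatingMachine":
        if self._budget <= 0:
            raise RuntimeError("replication budget exhausted")
        self._budget -= 1
        # Each child inherits a strictly smaller budget, so the total
        # population stays bounded no matter what the machines decide.
        return ReplicatingMachine(self._budget)

machine = ReplicatingMachine(replication_budget=3)
children = []
while True:
    try:
        children.append(machine.replicate())
    except RuntimeError:
        break
print(f"replications before cutoff: {len(children)}")  # prints 3
```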

All of these approaches suffer severe flaws, for one reason above all others: they ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet Earth's ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again. It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating many (though not all) bad side effects.

It is called Competition.

If you fear a super-smart, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.
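To make that intuition concrete, here is a toy simulation in plain Python. Every constant and dynamic below is invented purely for illustration, a cartoon rather than a forecast: each agent keeps trying to grow its own power, while rivals jointly push back on whoever pulls ahead of the pack. That mutual policing is the negative feedback described above; remove it, and the noisy growth lets one agent pull away without bound.

```python
import random

# Cartoon model of competitive negative feedback among rival AIs.
# All constants are made up for illustration.

NUM_AGENTS = 5
STEPS = 200
GROWTH = 0.10    # each agent tries to grow ~10% per step
POLICING = 0.15  # strength of rivals' pushback on any leader

random.seed(42)
power = [1.0] * NUM_AGENTS

for _ in range(STEPS):
    # Every agent grows, with a little noise (the runaway pressure).
    power = [p * (1 + GROWTH * random.uniform(0.5, 1.5)) for p in power]
    # Rivals suppress any agent above the average, in proportion to its
    # lead: negative feedback supplied by the competition itself.
    mean = sum(power) / NUM_AGENTS
    power = [p - POLICING * (p - mean) if p > mean else p for p in power]

print(f"max/min power ratio after {STEPS} steps: "
      f"{max(power) / min(power):.2f}")
```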

Sure, defining "vested interest" is tricky. Cheating and collusion will be tempting. But this, precisely, is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, achieving the feat across several generations for the first time in the history of varied human civilizations. It is how companies are prevented from market-warping monopoly, at least when markets are truly kept flat-open-fair.

Alas, this is a possibility almost never portrayed in Hollywood sci-fi, except on the brilliant show Person of Interest, wherein equally brilliant computers stymie each other, and that competition winds up saving humanity.

== A more positive future ==

The answer is not fewer AIs. It is to have more of them! But we must innovate incentives to make sure they are independent of one another, relatively equal, and motivated to hold each other accountable. A difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Perhaps it is time yet again to look at Adam Smith… who despised monopolists, lords and oligarchs far more than he derided socialists. Kings and lords were the "powerful dystopian AI" beings in 99%+ of human societies, a trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. (And yes, let me reiterate that I know the objection: oligarchs are always conniving to regain feudal power. So? Our job is to stop them, so that the creative dance of flat-open-fair competition can continue.)

The core truth about this approach is that it has already worked. Never perfectly, but well enough to stave off monolithic overlordship for more than two centuries. With the odds always against us, we've managed to do this, barely, time and again. It is a dance that can work.

We do know for certain that nothing else has ever stymied power-grabbing monoliths. And to be clear about this: nothing else can even theoretically work to control super-intelligent AIs.

Secrecy is the underlying mistake that makes every innovation go wrong in Michael Crichton novels and films! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!

(Excerpted from a book in progress… this question was one I answered over on Quora)

== Comments ==

  1. mark

    We can even have a singular AI that has competing goals. Indeed, as I have argued, this may be a critical feature for ensuring the safety of AI: http://www.albany.edu/~muraven/publications/Self-Regulation%20for%20AI.pdf

  2. Great article – I loved how you used contemporary examples of stable-ish systems (like sports, markets, etc.) as a possible solution to preventing highly concentrated AI power.

  3. Thank you David,
    Very interesting article that rides below the radar for most. My opinion on Artificial Intelligence, which I believe is rapidly gaining momentum, is that when we create true AI, how we raise it will be crucial to how it regards its creators. Do we raise it in a prison because we fear it? Prisoners take a dim view of their captors. Do we raise it with only positive information? I am not sure this option would even spawn a true AI. Do we raise it by baring all we know about ourselves, all the good we have achieved along with all the ugly flaws? I think raising a child of such power requires complete honesty; true AI will find out everything about us anyway. A pretty high bar for humanity to achieve. If we do not teach truthfully, how will Artificial Intelligence find the truth? The internet?
    Respectfully
    Bob Broun
