
What constraints are needed to prevent AI from becoming a dystopian threat to humanity?

It is, of course, wise and beneficial to peer ahead for potential dangers and problems — one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions to prevent it.

Take the plausibility of malignant Artificial Intelligence, remarked upon recently by luminaries ranging from Stephen Hawking to Elon Musk to Francis Fukuyama (Our Posthuman Future: Consequences of the Biotechnology Revolution). Some warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species, or at least its relevance on the cosmic stage, a potentiality evoked in many a lurid Hollywood film.

Nick Bostrom takes an in-depth look at the future of augmented humans and a revolution in machine intelligence in his recent book, Superintelligence: Paths, Dangers, Strategies, charting possible hazards, failure modes and spectacular benefits as machines match and then exceed human levels of intelligence.

Taking a middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator president Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research, and its products, accountable by maximizing transparency and openness.

Indeed, my own novels contain some dire warnings about failure modes with our new, cybernetic children. For other chilling scenarios of AI gone wrong, sample science fiction such as Isaac Asimov’s I, Robot, Harlan Ellison’s I Have No Mouth, and I Must Scream, Daniel Wilson’s Robopocalypse, William Hertling’s Avogadro Corp, Ramez Naam’s Nexus and James Hogan’s The Two Faces of Tomorrow. And of course, a multitude of sci fi films and TV shows, such as Battlestar Galactica, Terminator and The Transformers, depict dark future scenarios.

== What can we do? ==

Considering the dangers of AI, there is a tendency to offer the same prescriptions, over and over again:

1) Renunciation: we must step back from innovation in AI (or other problematic tech). This might work in a despotism… indeed, 99%+ of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Our own civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

2) Tight regulation. There are proposals to closely monitor bio, nano and cyber developments so that they, for example, only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. In certain areas, like nano, there is a real opportunity here. Again though, in the most general sense this won’t happen short of trauma.

3) Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example (a toy sketch of such a cap follows this list). Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers. See Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
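
As a purely illustrative sketch (the class, numbers and rules here are invented, not any real safety protocol), such a hard-coded replication cap might look like this:

```python
# Illustrative only: a hard-coded replication budget of the kind proposed
# for self-reproducing machines. Offspring inherit the reduced budget, so
# the total population stays bounded. Note the flaw this post points out:
# nothing here stops a builder from simply deleting the check.
class Nanomachine:
    def __init__(self, budget: int = 10):
        self.budget = budget          # replications this unit may still perform

    def replicate(self) -> "Nanomachine":
        if self.budget <= 0:
            raise RuntimeError("replication budget exhausted")
        self.budget -= 1
        return Nanomachine(self.budget)   # child starts with the smaller budget

machine = Nanomachine(budget=2)
child = machine.replicate()    # machine.budget: 2 -> 1; child gets budget 1
last = machine.replicate()     # machine.budget: 1 -> 0; last gets budget 0
# machine.replicate() or last.replicate() would now raise RuntimeError.
```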

All of these approaches suffer severe flaws for one reason above all others: they ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again. It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating many (not all) bad side effects.

It is called Competition.

If you fear a super-smart, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart but who have a vested interest in preventing any one AI entity from becoming a would-be God.

Sure, defining “vested interest” is tricky. Cheating and collusion will be tempting. But this, precisely, is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, achieving the feat across several generations for the first time in the history of varied human civilizations. It is also how competition prevents market-warping monopolies, at least when markets are truly kept flat-open-fair.

Alas, this is a possibility almost never portrayed in Hollywood sci fi, except on the brilliant show Person of Interest, wherein rival super-intelligent computers stymie each other and that competition winds up saving humanity.
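
To make the negative-feedback idea concrete, here is a toy simulation (every rule and constant is invented purely for illustration): rival agents all try to grow, and whenever one pulls too far ahead of the pack, the others collectively rein it in.

```python
# Toy model of reciprocal accountability among rival AIs. Each agent's
# "power" grows at a random rate, but rivals suppress any agent that pulls
# more than 2x ahead of the median. All constants are invented; this
# demonstrates negative feedback, nothing more.
import random
from statistics import median

powers = [1.0] * 5                          # five rival agents, equal at the start

for step in range(1000):
    powers = [p * random.uniform(1.00, 1.05) for p in powers]  # unequal luck
    cap = 2 * median(powers)
    powers = [min(p, cap) for p in powers]  # rivals rein in any runaway leader

print(f"leader vs. median after 1000 steps: {max(powers) / median(powers):.2f}")
# With the cap, the leader never gets far ahead of the pack; delete that
# one line and the spread between leader and laggards keeps widening.
```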

== A more positive future ==

The answer is not fewer AIs. It is to have more of them! But we must innovate incentives to make sure they are independent of one another, relatively equal, and motivated to hold each other accountable. A difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Perhaps it is time yet again to look at Adam Smith… who despised monopolists, lords and oligarchs far more than he derided socialists. Kings and lords were the “powerful dystopian AI” beings in 99%+ of human societies, a trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. (And yes, let me reiterate: I know the objection! Oligarchs are always conniving to regain feudal power. So? Our job is to stop them, so that the creative dance of flat-open-fair competition can continue.)

The core truth about this approach is that it has already worked. Never perfectly, but well enough to stave off monolithic overlordship for more than two centuries. With the odds always against us, we’ve managed to do this, barely, time and again. It is a dance that can work.

We do know for certain that nothing else has ever stymied power-grabbing monoliths. And to be clear: nothing else can even theoretically work to control super-intelligent AIs.

Secrecy is the underlying mistake that makes innovation go wrong in nearly every Michael Crichton novel and film! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!

(Excerpted from a book in progress… this question was one I answered over on Quora)


Milestones Leading up to the Good Singularity

The Technological Singularity: a quasi-mythical apotheosis that some foresee in our near, or very near, future. A transition when our skill, knowledge and immense computing power will increase exponentially, enabling true Artificial Intelligence and transforming humans into… well… godlike beings. Can we even begin to imagine what life would look like after this?

An excellent article by Joel Falconer, on The Next Web, cites futurist Ray Kurzweil’s predictions of the Singularity, along with my warning about iffy far-range forecasting: “How can models created within an earlier, cruder system, properly simulate & predict the behavior of a later, vastly more complex system?” 

If you want an even broader perspective, try my introduction: “Singularities and Nightmares: Extremes of Optimism and Pessimism about the Human Future.” For there are certainly risks along the way, one being renunciation: people rejecting the notion of progress via science and technology.

How about portrayals in fiction? I mean, other than clichés about mega-AI gone berserk, trying to flatten us? Now, from a writer’s perspective, the Singularity presents a problem. One can write stories leading up to the Singularity, about problems like rebellious AI, or about heroic techies paving the way to bright horizons. But how do you write a tale set AFTER the Singularity has happened, the good version, and we’ve all become gods? Heh. Never dare me! That’s the topic of my novella, Stones of Significance.

Ah, but not all techies think the Singularity will be cool. One chilling scenario: serving our new machine overlords. Apple co-founder Steve Wozniak speculates that humans may become pets for these new masters: “We’re already creating the superior beings, I think we lost the battle to the machines long ago. We’re going to become the pets, the dogs of the house.”

== Singularity-related miscellany! ==

Creepy… but probably helpful… new teaching tool! Do you want to play the violin, but can’t be bothered to learn how? Then strap on PossessedHand, an electric finger stimulator that makes your fingers move with no input from your own brain. Developed by scientists at the University of Tokyo in conjunction with Sony, PossessedHand consists of a pair of wrist bands that deliver mild electrical stimuli directly to the muscles that control your fingers, something normally done by your own brain.

Or do cyborgs already walk among us? “Cyborg is your grandma with a hearing aid, her replacement hip, and anyone who runs around with one of those Bluetooth in-ear headsets,” says Kosta Grammatis, an engineer with the EyeBorg Project.

Author Michael Chorost, in World Wide Mind: The Coming Integration of Humanity, Machines and the Internet, envisions a seamless interface between humans and machines in the near future. Wearable computers, implanted chips, neural interfaces and prosthetic limbs will be common occurrences. But will this lead to a world wide mind, a type of collective consciousness?

And how do we distinguish mind from machine? In The Atlantic, Brian Christian describes his experience participating in the annual Turing Test competition, which confers the Loebner Prize on the winner. A panel of judges poses questions to unseen answerers, one computer and one human, and attempts to discern which is which, in essence looking for the Most Human Computer. Christian, however, won the Most Human Human award.

Ray Kurzweil discusses the significance of IBM’s Watson computer  — and how this relates to the Turing Test.

Hive Mind: Mimicking the collective behavior of ants and bees is one approach to modeling artificial intelligence. Groups of ants are good at solving problems, such as finding the shortest route to a food source. Computer algorithms based upon this type of swarm intelligence have proved useful, particularly in solving logistics problems.
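
A minimal sketch of that idea (the graph, constants and decay rate are all invented for illustration; real ant-colony-optimization implementations are considerably more elaborate): simulated ants deposit pheromone along the routes they travel, pheromone evaporates over time, and shorter routes accumulate it faster, so the colony converges on a short path.

```python
# Minimal ant-colony sketch: ants walk from nest to food, preferring edges
# with more pheromone per unit distance; shorter complete paths receive
# more pheromone, and evaporation supplies the negative feedback.
import random

# Toy weighted graph: node -> {neighbor: distance}
GRAPH = {
    "nest": {"A": 2.0, "B": 5.0},
    "A": {"nest": 2.0, "food": 6.0},
    "B": {"nest": 5.0, "food": 1.0},
    "food": {"A": 6.0, "B": 1.0},
}
pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

def walk(start="nest", goal="food"):
    """One ant walks from start to goal, choosing edges with probability
    proportional to pheromone / distance; returns the path and its length."""
    path, length, node = [start], 0.0, start
    while node != goal:
        options = [n for n in GRAPH[node] if n not in path] or list(GRAPH[node])
        weights = [pheromone[(node, n)] / GRAPH[node][n] for n in options]
        nxt = random.choices(options, weights)[0]
        length += GRAPH[node][nxt]
        path.append(nxt)
        node = nxt
    return path, length

for _ in range(200):                      # release 200 ants
    path, length = walk()
    for k in pheromone:                   # evaporation: negative feedback
        pheromone[k] *= 0.95
    for u, v in zip(path, path[1:]):      # shorter paths get more pheromone
        pheromone[(u, v)] += 1.0 / length
        pheromone[(v, u)] += 1.0 / length

print(walk())  # overwhelmingly likely: nest -> B -> food (total length 6.0)
```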

Finally, how would we begin to define a universal intelligence, and how would we apply it to humans, animals, machines or even extraterrestrials we may encounter?
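
One concrete starting point, stated here from memory and in rough form: AI researchers Shane Legg and Marcus Hutter proposed scoring an agent by its expected performance across all computable environments, weighted toward the simpler ones:

```latex
% Legg & Hutter's "universal intelligence" measure, roughly: an agent \pi
% is scored by its expected total reward V in every computable environment
% \mu in the set E, weighted by simplicity (K = Kolmogorov complexity).
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```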

== How to Manage a Flood of Information ==

In the last decade, a tsunami of data and information has been created by twenty-first century science, which now generates huge databases: the human genome, astronomical sky surveys, environmental monitoring of Earth’s ecosystems, the Large Hadron Collider, to name a few. James Gleick’s The Information: A History, a Theory, a Flood discusses how we can avoid drowning in this sea of data and begin to make sense of the world.

Kevin Kelly discusses his book What Technology Wants: “We are moving from being people of the book… to people of the screen.” These screens will track your eye movements, noting where you focus your attention, and will adapt to you. Our books will soon be looking back at us.

All books will be linked together, with hyperlinks of the sort I envisioned in my novel Earth. Reading will become more of a shared, communal activity. The shift will continue toward accessing rather than owning information, as we live ever more in a flux of real-time streaming data.

Google looks to your previous queries (and the clicks that follow) and refines its search results accordingly…

…Such selectivity may eventually trap us inside our own “information cocoons,” as the legal scholar Cass Sunstein put it in his 2001 book Republic.com 2.0. He posited that this could be one of the Internet’s most pernicious effects on the public sphere. The Filter Bubble, Eli Pariser’s important new inquiry into the dangers of excessive personalization, advances a similar argument. But while Sunstein worried that citizens would deliberately use technology to over-customize what they read, Pariser, the board president of the political advocacy group MoveOn.org, worries that technology companies are already silently doing this for us. As a result, he writes, “personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”
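
To make the mechanism concrete, here is a minimal, purely illustrative sketch of click-driven personalization (all data, names and scoring are invented): whichever result wins the user’s early clicks gets boosted ever after, crowding out alternatives.

```python
# Minimal illustration of a personalization feedback loop: results on
# topics the user clicked before get boosted in later rankings, so the
# "filter bubble" reinforces itself. Toy data and scoring throughout.
from collections import Counter

click_history = Counter()      # topic -> how often this user clicked it

def rank(results, history):
    """Order results by base relevance plus a boost for familiar topics."""
    return sorted(results,
                  key=lambda r: r["relevance"] + history[r["topic"]],
                  reverse=True)

results = [
    {"title": "Story A", "topic": "sports",     "relevance": 1.0},
    {"title": "Story B", "topic": "world news", "relevance": 0.9},
]

for _ in range(3):                            # three search sessions
    top = rank(results, click_history)[0]     # user clicks the top result...
    click_history[top["topic"]] += 1          # ...which boosts its topic further

print([r["title"] for r in rank(results, click_history)])
# Story A's early lead compounds; Story B sinks ever further out of sight.
```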

Very entertaining and informative… and the last five minutes are downright scary! Jesse Schell’s mind-blowing talk on the future of games (from DICE 2010), describing how game design invades the real world, is just astounding, especially those creepy/inspiring final minutes. Someone turn this into a sci fi story! (Actually, some eerily parallel things were already in my new novel, EXISTENCE. You’ll see! In 2012.)

Enough to keep you busy a while? Hey, I am finally finishing a great Big Brin Book… a novel more sprawling and ambitious than EARTH… entitled EXISTENCE. Back to work.
