Tag Archives: AI

What constraints are needed to prevent AI from becoming a dystopian threat to humanity?

It is, of course, wise and beneficial to peer ahead for potential dangers and problems — one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions to prevent it.

Take the plausibility of malignant Artificial Intelligence, remarked-upon recently by luminaries ranging from Stephen Hawking to Elon Musk to Francis Fukuyama (Our Posthuman Future: Consequences of the Biotechnology Revolution). Some warn that the arrival of sapient, or super-sapient machinery may bring an end to our species – or at least its relevance on the cosmic stage – a potentiality evoked in many a lurid Hollywood film.

Nick Bostrom takes an in-depth look at the future of augmented humans and a revolution in machine intelligence in his recent book, Superintelligence: Paths, Dangers, Strategies, charting possible hazards, failure modes and spectacular benefits as machines match and then exceed human levels of intelligence.

Taking a middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research – and its products – accountable by maximizing transparency and openness.

Indeed, my own novels contain some dire warnings about failure modes with our new, cybernetic children. For other chilling scenarios of AI gone wrong, sample science fiction tales such as Isaac Asimov’s I, Robot, Harlan Ellison’s “I Have No Mouth, and I Must Scream,” Daniel H. Wilson’s Robopocalypse, William Hertling’s Avogadro Corp, Ramez Naam’s Nexus, and James Hogan’s The Two Faces of Tomorrow. And of course, a multitude of sci fi films and TV shows, such as Battlestar Galactica, Terminator, or The Transformers, depict dark future scenarios.

== What can we do? ==

Considering the dangers of AI, there is a tendency to offer the same prescriptions, over and over again:

1) Renunciation: we must step back from innovation in AI (or other problematic tech). This might work in a despotism… indeed, 99%+ of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Our own civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

2) Tight regulation. There are proposals to closely monitor bio, nano and cyber developments so that they – for example – only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. In certain areas – like nano – there’s a real opportunity here. Again though, in the most general sense this won’t happen short of trauma.

3) Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers. See Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

All of these approaches suffer severe flaws for one reason above all others: they ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again. It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects.

It is called Competition.

If you fear a super smart, Skynet level AI getting too clever for us and running out of control, then give it rivals who are just as smart but who have a vested interest in preventing any one AI entity from becoming a would-be God.

Sure, defining “vested interest” is tricky. Cheating and collusion will be tempting. But this – precisely – is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, achieving that feat across several generations for the first time in the history of varied human civilizations. It is how companies are kept from forming market-warping monopolies – that is, when markets are truly kept flat-open-fair.

Alas, this is a possibility almost never portrayed in Hollywood sci fi – except on the brilliant show Person of Interest – wherein equally brilliant computers stymie each other and this competition winds up saving humanity.
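To make the notion a bit more concrete, here is a deliberately simple-minded cartoon of reciprocal accountability: several rival agents apply the same test to one another's resource consumption, and any agent grabbing far more than its peers gets flagged for throttling. A sketch of the principle only, with invented names and thresholds – not a real containment scheme.

```python
# A deliberately simple-minded cartoon of reciprocal accountability among
# rival AIs: each agent's resource consumption is checked against its peers,
# and any agent grabbing far more than the rest gets flagged for throttling.
# All names, numbers and the threshold are invented for illustration.

agents = {"alpha": 10, "beta": 12, "gamma": 11, "delta": 95}  # resource usage

def flagged_by_rivals(name, usage, all_usage, factor=3.0):
    """An agent is flagged when it consumes several times the average of its rivals."""
    rivals = [u for n, u in all_usage.items() if n != name]
    return usage > factor * (sum(rivals) / len(rivals))

for name, usage in agents.items():
    if flagged_by_rivals(name, usage, agents):
        print(f"{name} flagged by its rivals: throttle and audit")
```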

== A more positive future ==

The answer is not fewer AIs. It is to have more of them! But we must innovate incentives to make sure they are independent of one another, relatively equal, and motivated to hold each other accountable. A difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings and lords were the “powerful dystopian AI” beings in 99%+ of human societies – a trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. (And yes, let me reiterate that I know the objection! Oligarchs are always conniving to regain feudal power. So? Our job is to stop them, so that the creative dance of flat-open-fair competition can continue.)

The core truth about this approach is that it has already worked.  Never perfectly, but well-enough to stave off monolithic overlordship for more than two centuries. With the odds always against us, we’ve managed to do this – barely – time and again.  It is a dance that can work.

We do know for certain that nothing else ever has stymied power-grabbing monoliths. And to be clear about this — nothing else can even theoretically work to control super-intelligent AIs.

Secrecy is the underlying mistake that makes innovation go wrong in every Michael Crichton novel and film! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!

(Excerpted from a book in progress… this question was one I answered over on Quora)


Peering into the Future: AI and Robot Brains

In Singularity or Transhumanism: What Word Should We Use to Discuss the Future? on Slate, Zoltan Istvan writes, “The singularity people (many at Singularity University) don’t like the term transhumanism. Transhumanists don’t like posthumanism. Posthumanists don’t like cyborgism. And cyborgism advocates don’t like the life extension tag. If you arrange the groups in any order, the same enmity occurs.” See what the proponents of these words mean by them…

…and why the old Talmudic rabbis and Jesuits are probably laughing their socks off.

== Progress toward AI? ==

Baby X, a 3D-simulated human child, is getting smarter day by day. Researchers at the Auckland Bioengineering Institute Laboratory for Animate Technologies in New Zealand interact with the simulated toddler, reading, teaching, smiling, playing games, even singing into the computer’s microphone and webcam. The blond youngster mimics facial expressions, laughs, reads words, even cries when he is left alone.

“An experiment in machine learning, Baby X is a program that imitates the biological processes of learning, including association, conditioning and reinforcement learning. By algorithmically simulating the chemical reactions of the human brain— think dopamine release or increased oxytocin levels— and connecting them with sensory digital input, when Baby X learns to imitate a facial expression, for instance, software developers write protocols for the variable time intervals between action and response. Effectively “teaching” the child through code, while engineering such a program is no cakewalk, the result is an adorably giggling digital baby with an uncanny ability to learn through interaction,” writes Becket Mufson, in the Creators Project.

This is precisely the sixth approach to developing AI – the one least discussed by “experts” in the field… and the one I have long believed to be essential, in several ways. Above all, by raising them as our children – even fostering them to homes in small robot bodies – we will gain many crucial advantages that I lay out (somewhat) in Existence.
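For flavor, here is a toy version of the reward-conditioning loop the article describes: an action-response association that strengthens when a reward signal follows within a short, variable delay. Every name and number below is an illustrative assumption, nothing like the Auckland team's actual code.

```python
import random

# Toy reward-conditioning loop in the spirit of the Baby X description above:
# an action-response association strengthens when a reward ("dopamine") signal
# arrives within a short, variable delay. All names and numbers are assumptions.

LEARNING_RATE = 0.1

def update_association(strength, reward, delay_s, max_delay_s=2.0):
    """Strengthen the association in proportion to reward, discounted by how
    long after the action the response arrived (a crude eligibility decay)."""
    eligibility = max(0.0, 1.0 - delay_s / max_delay_s)
    return strength + LEARNING_RATE * reward * eligibility

association = 0.0  # how strongly "see a smile" is linked to "smile back"
for trial in range(20):
    delay = random.uniform(0.1, 2.0)      # variable interval between action and response
    reward = 1.0 if delay < 1.5 else 0.0  # the caregiver reacts in time, or not
    association = update_association(association, reward, delay)

print(f"association strength after 20 trials: {association:.2f}")
```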

Meanwhile, Cornell’s Robo Brain is currently learning from the internet — downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals, all being translated and stored in a robot-friendly format, accessible to ‘helper’ robots who will function in our factories, homes, and offices. “If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” said one researcher. Follow its progress on the Robobrain website.

Meet Jibo, advertised as “the world’s first family robot.” Kinda creepy but attractive too…

Ever hear of “neuromorphic architecture?” Silicon chip design that uses transistors – 5 billion of them in the latest IBM chip – to create analogues of the nonlinear response patterns of biological neurons. The latest version, from IBM, is called “TrueNorth” and it is simply spectacular. Its prodigious pattern recognition capabilities are matched only by its stunning power efficiency – better by four orders of magnitude(!). This is where Moore’s Law, augmented by new neuronal and parallelism software, may truly start delivering.
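For the curious, the textbook software analogue of those spiking silicon neurons is the leaky integrate-and-fire model. A minimal sketch follows – a generic illustration, not IBM's actual TrueNorth neuron design.

```python
# A textbook leaky integrate-and-fire neuron: the kind of nonlinear, spiking
# response that neuromorphic chips emulate in silicon. Generic illustration,
# not IBM's actual TrueNorth neuron model.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input each timestep, leak a little, and emit a spike
    (resetting the membrane potential) whenever the threshold is crossed."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 0.0, 0.6, 0.6]))
# -> [0, 0, 1, 0, 0, 0, 1]  (spikes only when accumulated input crosses threshold)
```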

Now… How to keep what we produce sane? And where on the chip – pray tell – do the Three Laws reside?

Ah, well… I have explored the implications (yin and yang) of the Asimovian laws in my sequel which tied up Isaac’s universe – Foundation’s Triumph. Meanwhile, serious minds are grappling with the problem of “how to keep them loyal.” For example…

== Creating Superintelligence ==

Nick Bostrom has published Superintelligence: Paths, Dangers, Strategies, which is well-reviewed by Andrew Leonard in Salon.

“Risks that are especially difficult to control have three characteristics: autonomy, self-replication and self-modification. Infectious diseases have these characteristics, and have killed more people than any other class of events, including war. Some computer malware has these characteristics, and can do a lot of damage…

“But microbes and malware cannot intelligently self-modify, so countermeasures can catch up. A superintelligent system [as outlined by Bostrom] would be much harder to control if it were able to intelligently self-modify,” writes the reviewer.

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Still, his litany of “be careful what you wish for” parables is taken straight from the pages of a century of science fictional “what-if” scenarios. Geeky sci fi archivists need to be present, during the programming, to point out: “you may want to rephrase that… cause way back in 1947 Leigh Brackett showed that it could be misconstrued as…”

When did Homo sapiens become a more sophisticated species? Not until our skulls underwent “feminization.” Interesting article! In fact, the mystery of the First Great Renaissance… the burst of human creativity around 45,000 years ago… is discussed in EXISTENCE!

But – if I may mention it – the real correlation behind this notion… that sexual selection resulted in gentler, more “feminized” males… was presaged by this paper of mine: Neoteny and Two-Way Sexual Selection in Human Evolution.

== Developing Brains ==

Researcher Talma Hendler has found evidence for two types of empathy, each tied to a different network of brain regions. One type she calls mental empathy, which requires you to mentally step outside yourself and think about what another person is thinking or experiencing; parts of the frontal, temporal, and parietal cortex make up this network. The other type she calls embodied empathy; this is the more visceral, in-the-moment empathy you might feel when you see someone get punched in the gut. Very cogent and thought provoking.

This interesting article in Wired explores how movies exploit both of these networks to make you identify with the characters. Only now the manipulation is going scientific!

And veering a bit… When did modern humans arrive in Europe, and by how much did they overlap with our fading cousins, the Neandertals? New studies suggest it all happened earlier than most had assumed, perhaps around 45,000 years ago.

Now throw in… the finding that children and adolescents with autism have a surplus of synapses in the brain, and that this excess is due to a slowdown in the normal “pruning” process during brain development.

Hmmmmm.

== and organs ==

Scientists have for the first time grown a complex, fully functional organ from scratch in a living animal by transplanting cells that were originally created in a laboratory to form a replacement thymus, a vital organ of the immune system.

By deciphering the detailed gene expressions by which a lizard regrows its tail, scientists hope to re-ignite regrowth processes that have lain dormant in mammals like us for 200 million years. Both of these stories are straight from my story “Chrysalis” in this month’s ANALOG!

== Miscellanea ==

Scientists report using laser light in ultrafast pulses to control the quantum state of electrons contained inside nanoscale defects located in a diamond, and also observe changes in that electron over a period of time. The findings could be an important milestone on the road to quantum computing.

Another team has devised a way to make microscopes magnify 20 times more than usual. This magnification allows scientists to see and identify substances and matter as minuscule as, or even smaller than, a virus.

Direct synthesis of ammonia from air and water? At low temperatures and pressures? If this membrane method can bypass the usual harsh processes, the news could be significant – liberating poor farmers everywhere to make their own fertilizer.

Looks plausible… if amazing! A transparent luminescent solar concentrator developed in Michigan can be used to cover anything that has a flat, clear surface. Visible light passes through, but organic molecules absorb invisible wavelengths of sunlight such as ultraviolet and near-infrared, guiding that energy to the edge of the panel, where thin strips of photovoltaic cells pick it up and convert it into electricity. Fascinating… another potential game changer.

Stanford scientists develop water splitter that runs on ordinary AAA battery.

How to tell if a Chelyabinsk style meteorite came from an asteroid? Here’s the basic rule of thumb. “The speed of whatever collides with Earth’s atmosphere depends on its orbit, which in turn depends on its source. The impactor’s entry at 19 km/s means that it came from the asteroid belt between Mars and Jupiter, not from a ballistically launched missile, whose speed is less than 11.2 km/s; a short-period comet, with an average speed of 35 km/s; or a long-period comet with an average speed of 55 km/s. As investigators began retracing the path of the meteor that blazed across the sky, their reconstructed orbit bore out that provenance.”  

Oh, and anything much faster than 60 km/s either fell naturally from outside the solar system… or was accelerated by someone with boojum powers and maybe ill intent!
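That rule of thumb boils down to a handful of speed thresholds. Here it is as a tiny classifier; the cutoffs between categories are rough interpolations of the figures quoted above, so treat it as a first-cut heuristic only.

```python
def classify_impactor(entry_speed_km_s):
    """Rough provenance guess from atmospheric-entry speed, using the figures
    quoted above. The cutoffs between categories are my own interpolations,
    so treat this as a first-cut heuristic, not an attribution method."""
    if entry_speed_km_s < 11.2:
        return "ballistic launch (below Earth escape velocity), e.g. a missile"
    elif entry_speed_km_s < 30:
        return "asteroid-belt object (Chelyabinsk entered at ~19 km/s)"
    elif entry_speed_km_s < 45:
        return "short-period comet (~35 km/s typical)"
    elif entry_speed_km_s <= 60:
        return "long-period comet (~55 km/s typical)"
    else:
        return "interstellar... or accelerated by someone with boojum powers"

print(classify_impactor(19))  # -> asteroid-belt object (Chelyabinsk entered at ~19 km/s)
```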

Recommended: What If? Serious Scientific Answers to Absurd Hypothetical Questions by Randall Munroe (of the brilliant xkcd).

Researchers from UC San Diego’s structural engineering department are using drones to capture unique views of the earthquake damage to Napa’s historic landmarks. Our own Falko Kuester explains how this new tech is helping.

And finally:

Don’t bogart that puffer, my friend. Dolphins pass around a puffer fish — apparently to get high off its toxins. After a few chomps, you no longer give a fugu.



Science that threatens… and promises wonders

George Dvorsky has a piece on io9, How Artificial Intelligence Will Give Birth to Itself, summarizing many of the worrisome aspects of a possible runaway effect, when self-improving artificial intelligences (AI) get faster and faster at designing new and better versions of themselves. A thoughtful reflection on how the Singularity might (or might not) go out of control.

Alas, George left out a process issue that makes all the difference. That issue is Secrecy, which lies at the root of every Michael Crichton science-goes-wrong scenario. (Not one of Michael’s plot drivers would have taken place, if the “arrogant scientists” had done their innovating in the open – as most scientists have been trained to prefer – exposing their new robots/dinosaurs and so on to truly public, error-correcting criticism.)

Efforts to develop AI that are subject to the enlightenment process of reciprocal scrutiny might see their failure modes revealed and corrected in time. Those that take place in secret are almost one hundred percent guaranteed to produce unexpected outcomes – and most likely dangerous ones.

The worst example of AI research that is secret and extremely well-funded, while creating AI systems that are inherently amoral, predatory and insatiable? It’s a danger that I explore here: Why a Transaction Fee Matters to You. Automated investment programs… of which High Frequency Trading is only one example… represent probably the most dangerous AI research on our planet today.
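A back-of-the-envelope sketch shows why a tiny per-trade fee matters: it barely touches an ordinary investor, yet consumes much of the edge of a program that trades millions of times a year. All figures below are made-up illustrations, not market data.

```python
# Back-of-the-envelope: how a tiny per-trade fee hits a high-frequency strategy
# versus an ordinary investor. All figures below are illustrative assumptions.

def annual_fee_burden(trades_per_year, avg_profit_per_trade, fee_per_trade):
    gross = trades_per_year * avg_profit_per_trade
    fees = trades_per_year * fee_per_trade
    return gross, fees, gross - fees

# A hypothetical HFT program: 50 million trades/year, $0.01 average edge per trade.
print(annual_fee_burden(50_000_000, 0.01, 0.005))
# -> (500000.0, 250000.0, 250000.0): half the edge gone at a half-cent fee

# A buy-and-hold investor making 20 trades a year barely notices the same fee.
print(annual_fee_burden(20, 50.0, 0.005))
# -> (1000.0, 0.1, 999.9)
```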

== But who needs AI, with brainy-folks like this? ==

Robert Kuhn’s television series Closer To Truth “gives you access to the world’s greatest thinkers exploring humanity’s deepest questions. Discover the fundamental issues of existence. Enjoy new ways of thinking. Appreciate diverse views. Witness intense debates. Express your own opinions. Seek your own answers. Get smarter.”

Wow… that’s a pretty hefty promise! So why not check out this fabulous series, now available online? Full disclosure: I contributed a few bits to the program, on topics ranging from cosmology and SETI to religion and ESP.

But scan the impressive lists of other folks, some of them – heck, most of them – way smarter than me! Such as David Deutsch, Freeman Dyson and Francisco Ayala. Mind-blowing stuff.

== We can do that! Should we? ==

China is building Dubai-style fake islands in the South China Sea, all in service of asserting extremely aggressive territorial claims. You’ve got to wonder why this politically self-destructive course has been chosen. Perhaps something isn’t being told.

Also: Dubai is planning the largest indoor theme park in the world, which will be covered by a glass dome that will be open during the winter months. The project will also house the planet’s largest shopping mall, with an area of 8 million sq. ft., which will take the form of an extended retail street network. Oil wealth is creating whole climate-controlled cities in the Middle East – prototypes for space colonies?

Meanwhile, America declines into superstition. The nation apparently believed in science… at some point. (I guess the Greatest Generation truly was better than us boomers.)

Stirling cycle engines have long been considered an under-developed opportunity in power generation. Using a closed gas cycle to tap energy from any substantial heat difference, these external combustion devices have been used in spacecraft. They can – at very low maintenance – draw power from burning just about anything.   Now… Segway inventor Dean Kamen thinks his new Stirling Engine will get you off the grid for  under $10,000.
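The appeal is that any sustained temperature difference becomes a power source; the Carnot formula below gives the ideal ceiling on how much of that heat can be turned into work. The temperatures are just example numbers, not Kamen's specs.

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Upper bound on the fraction of heat any heat engine (Stirling included)
    can convert to work, given hot- and cold-side temperatures in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Example: burner at 650 C, ambient cooling at 25 C (illustrative numbers only).
print(f"{carnot_efficiency(650, 25):.0%}")  # ~68% ideal; real engines achieve a fraction of this
```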

== Physics and astronomy ==

A massive solar storm – or Coronal Mass Ejection – barely missed the Earth in 2012. “If it had hit, we would still be picking up the pieces,” said physicist Daniel Baker of the biggest storm in at least 120 years. Looking around and taking prudent precautions in a dangerous universe is what both science fiction and sanity are for. Ostriches who stick their heads in the ground will lose everything.

Long predicted – the Age of Amateurs in astronomy! Astronomers have long known that combining the data from several astrophotographs can reveal dramatically more detail about astrophysical objects. So what will they discover by combining all the astrophotographs on the Web? They’ve developed a system that automatically combines images from the same part of the sky to increase the effective exposure time of the resulting picture. And they say the combined images can rival those from much larger professional telescopes.
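The underlying trick is simple averaging: random noise in the combined picture falls off roughly as the square root of the number of aligned frames. A minimal sketch, assuming the frames are already registered to the same patch of sky.

```python
import numpy as np

# Minimal image "stacking": averaging N aligned frames of the same sky patch
# suppresses random noise by roughly sqrt(N), which is why pooling many amateur
# exposures can rival a single long professional one. Assumes the frames are
# already registered; real pipelines must align them first.

rng = np.random.default_rng(42)
true_sky = np.zeros((64, 64))
true_sky[32, 32] = 1.0  # one faint "star"

frames = [true_sky + rng.normal(0, 0.5, true_sky.shape) for _ in range(100)]
stacked = np.mean(frames, axis=0)

print("single-frame noise:", np.std(frames[0] - true_sky).round(3))
print("stacked noise:     ", np.std(stacked - true_sky).round(3))  # ~10x lower for 100 frames
```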

Cool. The Curiosity rover on Mars happened to be perfectly situated to catch images of the tiny (14-mile) moon Phobos eclipsing the Sun. Wow.

Oh!  Hot off the presses… (when will that phrase lose all relation to its origins?)…  NASA has revealed the suite of instruments that will likely fly on the next (2020) Mars roving laboratory, or “son-of-Curiosity.”  A way cool set of new scientific methods… though again nothing to explicitly check for life itself.

Astronomers announced the discovery of the fifth known triple supermassive black hole system in the universe. Some galaxies have more than one central black hole – each orbiting the other in relatively close proximity – and scientists say this is probably the result of two or more smaller galaxies merging. The two closest black holes are separated by a distance of 140 parsecs (one parsec equals about 3.26 light-years). The third supermassive black hole is much farther away.

A very interesting, challenging and smart series of cartoons explaining tough fields of physics, like magnetohydrodynamics – also black holes and weird geometries. I do sniff a little crack-pottery around the edges, so be aware that some of it is… non-paradigm. Still, very good tours of difficult topics!

Savoir Sans Frontieres: Scientific Comic Books

The Silence Barrier: The Adventures of Archibald Higgins

The Black Hole: The Adventures of Archibald Higgins

A Caltech prof’s new theory suggests a highly unusual class of stars — 1 in 10,000 — may be made entirely of metal. Wow. I wonder how long they last.

Microscopically structuring steel like bamboo makes it stronger yet more flexible.

Finally, I have been putting in queries to Kip Thorne and other General Relativity experts about Hawking Radiation at the fringes of a gravity well… do any of you out there know such an expert with an open mind? I really do have a physics PhD!  So a little professional courtesy? 😉


Milestones Leading up to the Good Singularity

The Technological Singularity – a quasi-mythical apotheosis that some foresee in our near, or very-near, future. A transition when our skill, knowledge and immense computing power will increase exponentially, enabling true Artificial Intelligence, while humans are transformed into… well… godlike beings. Can we even begin to imagine what life would look like after this?

An excellent article by Joel Falconer, on The Next Web, cites futurist Ray Kurzweil’s predictions of the Singularity, along with my warning about iffy far-range forecasting: “How can models created within an earlier, cruder system, properly simulate & predict the behavior of a later, vastly more complex system?” 

If you want an even broader perspective, try my noted introduction: “Singularities and Nightmares: Extremes of Optimism and Pessimism about the Human Future.” For there are certainly risks along the way – one being renunciation, people rejecting the notion of progress via science and technology.

How about portrayals in fiction? I mean, other than clichés about mega-AI gone berserk, trying to flatten us? Now, from a writer’s perspective, the Singularity presents a problem. One can write stories leading up to the Singularity, about problems like rebellious AI, or about heroic techies paving the way to bright horizons. But how do you write a tale set AFTER the singularity has happened – the good version – and we’ve all become gods? Heh. Never dare me! That’s the topic of my novella, Stones of Significance.
Ah, but not all techies think the Singularity will be cool. One chilling scenario: serving our new machine overlords. Apple co-founder Steve Wozniak speculates that humans may become pets for the robots: “We’re already creating the superior beings, I think we lost the battle to the machines long ago. We’re going to become the pets, the dogs of the house.”

== Singularity-related miscellany! ==

Creepy… but probably helpful… new teaching tool! Do you want to play the violin, but can’t be bothered to learn how? Then strap on this electric finger stimulator called PossessedHand that makes your fingers move with no input from your own brain. Developed by scientists at Tokyo University in conjunction with Sony, PossessedHand consists of a pair of wristbands that deliver mild electrical stimuli directly to the muscles that control your fingers – something normally done by your own brain.
Or do cyborgs already walk among us? “Cyborg is your grandma with a hearing aid, her replacement hip, and anyone who runs around with one of those Bluetooth in-ear headsets,” says Kosta Grammatis, an engineer with the EyeBorg Project.

Author Michael Chorost, in World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet, envisions a seamless interface of humans with machines in the near future. Wearable computers, implanted chips, neural interfaces and prosthetic limbs will be common occurrences. But will this lead to a world wide mind – a type of collective consciousness?
And how do we distinguish Mind vs. Machine? In The Atlantic, Brian Christian describes his experience participating in the annual Turing Test, given each year by the AI community, which confers the Loebner Prize on the winner. A panel of judges poses questions to unseen answerers – one computer, one human – and attempts to discern which is which, in essence looking for the Most Human Computer. Christian, however, won the Most Human Human award.

Ray Kurzweil discusses the significance of IBM’s Watson computer  — and how this relates to the Turing Test.

Hive Mind: Mimicking the collective behavior of ants and bees is one approach to modeling artificial intelligence. Groups of ants are good at solving problems, e.g. finding the shortest route to a food source. Computer algorithms based upon this type of swarm intelligence have proved useful, particularly in solving logistics problems.
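As a toy illustration of that swarm idea, the sketch below has simulated “ants” choose between two routes, deposit more pheromone on shorter trips, and converge as a colony on the shorter path. Purely a sketch of the principle, not a production ant-colony algorithm.

```python
import random

# Toy ant-colony choice between two routes to a food source. Ants pick a route
# with probability proportional to its pheromone level, and shorter routes get
# more pheromone per trip -- so the colony converges on the shorter path.
# Purely illustrative of the swarm-intelligence idea above.

lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.95

for step in range(200):
    total = sum(pheromone.values())
    route = "short" if random.random() < pheromone["short"] / total else "long"
    for r in pheromone:
        pheromone[r] *= EVAPORATION           # pheromone evaporates everywhere
    pheromone[route] += 1.0 / lengths[route]  # shorter trips deposit more per unit time

share = pheromone["short"] / sum(pheromone.values())
print(f"traffic share on the short route after 200 steps: {share:.0%}")
```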

Finally, how would we begin to define a universal intelligence – and how would we apply it to humans, animals, machines, or even extraterrestrials we may encounter?

== How to Manage a Flood of Information ==

In the last decade, a tsunami of data and information has been created by twenty-first century science, which has been generating huge databases: the human genome, astronomical sky surveys, environmental monitoring of earth’s ecosystems, the Large Hadron Collider, to name a few. James Gleick’s The Information: A History, A Theory, A Flood discusses how we can avoid drowning in this sea of data, and begin to make sense of the world.
Kevin Kelly discusses his book What Technology Wants: “We are moving from being people of the book… to people of the screen.” These screens will track your eye movements, noting where you focus your attention, and adapt to you. Our books will soon be looking back at us.

All books will be linked together, with hyper-links of the sort I envisioned in my novel, Earth. Reading will be more of a shared, communal activity. The shift will continue toward accessing rather than owning information, as we live ever more in a flux of real-time streaming data.

Google looks to your previous queries (and the clicks that follow) and refines its search results accordingly…

…Such selectivity may eventually trap us inside our own “information cocoons,” as the legal scholar Cass Sunstein put it in his 2001 book Republic.com. He posited that this could be one of the Internet’s most pernicious effects on the public sphere. The Filter Bubble, Eli Pariser’s important new inquiry into the dangers of excessive personalization, advances a similar argument. But while Sunstein worried that citizens would deliberately use technology to over-customize what they read, Pariser, the board president of the political advocacy group MoveOn.org, worries that technology companies are already silently doing this for us. As a result, he writes, “personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”
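The feedback loop Pariser worries about is easy to caricature in code: a recommender that only reinforces whatever you clicked before collapses your feed to a single topic within a few dozen rounds. A hypothetical toy model, not any real company's algorithm.

```python
import random
from collections import Counter

# Toy personalization loop: the "recommender" shows more of whatever the user
# clicked before, and the simulated user mostly clicks whatever it is shown.
# Within a few dozen rounds the feed collapses toward one topic -- a caricature
# of the filter-bubble feedback described above, not any real algorithm.

topics = ["politics", "science", "sports", "art"]
weights = {t: 1.0 for t in topics}

for round_ in range(100):
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if random.random() < 0.9:  # the user usually engages with what is shown...
        weights[shown] += 1.0  # ...and the recommender doubles down on that topic

print(Counter(weights).most_common(1))  # one topic ends up dominating the feed
```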

Very entertaining and informative… and the last five minutes are scarier ’n’ shit! Jesse Schell’s mind-blowing talk on the future of games (from DICE 2010) – describing how game design invades the real world – is just astounding. Especially the creepy/inspiring/worrisome last five minutes. Someone turn this into a sci fi story! (Actually, some eerily parallel things were already in my new novel, EXISTENCE. You’ll see! In 2012.)

Enough to keep you busy a while? Hey, I am finally finishing a great Big Brin Book… a novel more sprawling and ambitious than EARTH… entitled EXISTENCE. Back to work.
