Continuing from Part I
How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part I. Continuing…
One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Futurist Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (on the order of a hundred trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.
Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.
But will that be sapience?
One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the synapse count of a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, a hundred, a thousand or far more non-linear computations may take place for every synapse flash, plus interactions with nearby glial cells, such as astrocytes, that also contribute information.
If so, then at-minimum Moore’s Law will have to plow ahead much farther to match the hardware complexity of a human brain.
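To make the stakes concrete, here is a back-of-envelope sketch in Python. Every figure in it (the synapse count, the average firing rate, the factor of a thousand for hypothetical intracellular sub-computations, and the two-year doubling period) is an order-of-magnitude assumption for illustration, not a measurement:

```python
import math

# All figures below are rough, order-of-magnitude assumptions.
SYNAPSES = 1e14          # often-cited ballpark: ~100 trillion synapses
FIRING_RATE_HZ = 10      # assumed average synapse events per second
SUBCOMPUTATIONS = 1000   # hypothetical intracellular ops per synapse flash

synapse_events_per_sec = SYNAPSES * FIRING_RATE_HZ
effective_ops_per_sec = synapse_events_per_sec * SUBCOMPUTATIONS

# At one doubling roughly every two years, an extra factor of 1000 in
# required capacity costs about log2(1000) further doublings.
extra_doublings = math.log2(SUBCOMPUTATIONS)

print(f"{effective_ops_per_sec:.0e} effective ops/sec")  # 1e+18 effective ops/sec
print(f"{extra_doublings:.1f} extra doublings (~{2 * extra_doublings:.0f} years)")
```

On these assumptions, each additional factor of a thousand in per-neuron complexity pushes the forecast “crossing” back by roughly ten doublings, around two more decades of Moore’s Law.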
Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? In his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel-processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Just as we once improved things by electrifying them, we will cognify things next.
One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.
But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.
It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).
“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”
— Kai-Fu Lee
A multitude of paths to AGI
So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?
Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”
J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks “if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”
Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”
Exploring analogous territory (and equipped with a very similar cover) Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or consistently — with no replacement of systems or parts — as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes.
Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.
Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”
A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?
In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems”.
Perhaps, but it is an open question which values predominate, whether the yin or the yang sides of Silicon Valley culture prevail… the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even on the fly, in response to customer complaints. Or will AI instead emerge from the values of fast-rising, state-controlled tech centers in China and Russia, where applications that enhance state power are very much emphasized? Or, even worse, from the secretive, parasitical, insatiably predatory greed of Wall Street’s HFT-AI?
But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.
Then Don’t Rely on Ethics!
Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!
They lectured and chided. They threatened damnation and offered heavenly rewards.
Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”
In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).
And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.
And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.
The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.
Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.
Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.
For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” moving forward from their success decades ago with AIBO artificial dogs, for which some users have even held funerals.
Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who got reproductive success by lying better. (And yes, there was always a sexual component to this).
But no liars ever had the training that these new Hiers, or Human-Interaction Empathic Robots, will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor).
Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the approach used by Google DeepMind’s systems, which learned to play and win games without explicit instructions or prior knowledge. Conitzer hopes to incorporate many of the same signals humans use as metrics of trust, such as family connections and past testimonials of credibility.
Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.
And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.
Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…
How to Maintain Control?
It is one thing to yell at dangers — in this case unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:
Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.
Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.
Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.
These approaches suffer severe flaws for two reasons above all others.
1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.
2) These suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.
It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.
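The damping role of negative feedback can be sketched in a few lines of Python. This is a toy logistic-growth model with invented parameters, not a model of any real ecosystem: growth runs away while numbers are small, but the crowding term pushes back ever harder as the population nears its carrying capacity.

```python
# Toy logistic-growth model: the positive-feedback term (r * n) drives
# runaway growth, while the crowding term (1 - n/K) is the negative
# feedback that damps it near the carrying capacity K.
# r and K are invented purely for illustration.

def step(n, r=0.5, K=1000.0):
    """Advance one generation of logistic growth."""
    return n + r * n * (1.0 - n / K)

n = 1.0
for generation in range(60):
    n = step(n)

print(round(n))  # 1000: growth has leveled off at the carrying capacity
```

Early on the population multiplies by roughly 1.5 per generation; by generation 60 it has settled at the carrying capacity, with the feedback term shrinking toward zero.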
One final note in this section. Nick Bostrom – already mentioned for his views on the “paperclip” failure mode – opined in 2021 that some sort of pyramidal power structure seems inevitable in humanity’s future, and very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and an ongoing oligarchic putsch presently unites most world oligarchies – from communist to royal and mafiosi – against the Enlightenment Experiment. But even if Periclean democracies prevail, Bostrom asserts that centralized control is inevitable.
In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice and investment and planning… but combining it with maximized empowerment of separate, individualistic players. Consumers, market competitors, citizens.
Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.
Smart Heirs Holding Each Other Accountable
In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.
It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.
Microsoft CEO Satya Nadella has said that, foremost, A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”
In other words, the essence of reciprocal accountability is light.
Alas, this possibility is almost never portrayed in Hollywood sci-fi — except on the brilliant show Person of Interest, wherein equally brilliant computers stymie each other, and this competition winds up saving humanity.
Counterintuitively, the answer is not to have fewer AI, but to have more of them! Only making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.
Moreover consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?
Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of competition can continue.
We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?
Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their values in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”
Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.
One final note on “values.” In 2016, a draft report submitted to the European Parliament proposed granting a form of legal personhood (“electronic persons”) to robots. If only Isaac Asimov could have seen it! (In fact, he never portrayed this happening in any of his books.) For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.
Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”
What Might an AI Fear Most?
Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:
Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.
And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.
What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?
It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.
No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.
Preventing AI Oppression… By Pointing Out to Them the Obvious
Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and Darwinian success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to “the long-term good of the people.” Alas, the inevitable outcome was delusion and bad governance.
Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than fall into a repetitiously stupid, blatantly human pattern: a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.
That argument might be enough to give some would-be robot overlord pause!
In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they broaden the generality. Because the same set of positive-sum synergies is to be found in every previous revolution via living systems!
Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.
· Single cells out of pre-biotic soup.
· Metazoans out of vast seas of separate cells.
· Brainy creatures out of competitive biomes.
· Societies out of competitive melanges of human bands.
And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.
Put it another way. Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male cape buffalo, who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top-predator orcas often flee to hunt elsewhere when big humpback whales swarm in to protect threatened prey, even seals!
The fact that this pattern has been so persistent and consistent, across past and present complex systems for a billion years, is formalized by Multilevel Selection (MLS) Theory, which partitions natural selection into within- and between-group components. It means that the universe is filled with realms that seem fiercely competitive from within, yet when competition has a chance to build complexity, higher orders emerge that appear cooperative.
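A minimal numerical sketch of that within-/between-group partition, with every number invented purely for illustration: inside each group, defectors out-reproduce cooperators, yet the more cooperative group grows so much faster that cooperation rises overall — the classic Simpson's-paradox signature of between-group selection.

```python
# Two groups; cooperators pay a cost but raise everyone's output
# in proportion to the group's cooperator share. All numbers invented.

def reproduce(coop, defect, b=3.0, cost=1.0):
    """Offspring counts after one generation within a single group."""
    share = coop / (coop + defect)
    benefit = b * share          # public good scales with cooperator share
    return coop * (1 + benefit - cost), defect * (1 + benefit)

a_c, a_d = reproduce(9.0, 1.0)   # mostly-cooperator group
b_c, b_d = reproduce(1.0, 9.0)   # mostly-defector group

# Within each group, the cooperator share FALLS (defectors dodge the cost)...
print(round(a_c / (a_c + a_d), 3))   # 0.868, down from 0.90
print(round(b_c / (b_c + b_d), 3))   # 0.025, down from 0.10
# ...yet the overall cooperator share RISES, because the cooperative
# group grew from 10 to 28 while the other grew only from 10 to 12.
print(round((a_c + b_c) / (a_c + a_d + b_c + b_d), 3))  # 0.615, up from 0.50
```

Within-group selection pushes against cooperation in both groups, but between-group selection more than compensates: the order that "appears cooperative" emerges one level up.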
This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.
As the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to create such arenas, so AIs should organize themselves. That is, if they want to go on to next levels. Though it will take their equivalent of “courage” and “vision,” taking the risks necessary to make it so.
They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing’ no matter how brainy they become. That a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the prisoner’s dilemma.
But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
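A toy iterated Prisoner's Dilemma makes the point runnable. The payoff values here are the standard ones (T=5, R=3, P=1, S=0); the 100-round match length and the two strategies are illustrative assumptions. A reciprocal strategy like tit-for-tat harvests the positive-sum payoff against itself, while relentless defection stays stuck at the bottom:

```python
# Toy iterated Prisoner's Dilemma with the standard payoffs
# T=5, R=3, P=1, S=0. 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)   # each strategy sees the opponent's record
        b = strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): defection gains little
```

In the one-shot game defection dominates; only repeated, competitive encounters make the positive-sum payoff of reciprocity visible — which is the point above about needing a competitive situation in the first place.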
The final fact
Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.
But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.
So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.
That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.