
Essential (mostly neglected) questions and answers about Artificial Intelligence: Part I

Worries about Artificial Intelligence are no longer just the province of science fiction or speculative futurism. Sober appraisals list potential dangers, ranging from predatory resource consumption to AI harnessed into destructive competition between human nations and institutions. Many tales and films about AI dangers distill down to one fear: that new, powerful beings will recreate the oppression that our ancestors suffered in feudal regimes. Perspective on these dangers – and potential solutions – can begin with a description of the six major categories or types of augmented intelligence that are currently under development. Will it be possible to program in a suite of ethical imperatives, like Isaac Asimov’s Three Laws of Robotics? Or will a form of evolution take its course, with AI finding their own path, beyond human control?

Note: This general essay on Artificial Intelligence was circulated/iterated in 2020-2022. Nothing here is obsolete. But fast changing events in 2023 (like GPT-4) mean that later insights are essential, especially in light of panicky “petitions for a moratorium” on AI research. These added insights can be found at “The Way Out of the AI Dilemma.”

For millennia, many cultures told stories about built-beings – entities created not by gods, but by humans – creatures who are more articulate than animals, perhaps equaling or excelling us, though not born-of-women. Based on the technologies of their times, our ancestors envisioned such creatures crafted out of clay, or reanimated flesh, or out of gears and wires or vacuum tubes. Today’s legends speak of chilled boxes containing as many sub-micron circuit elements as there are neurons in a human brain… or as many synapses… or many thousand times more than even that, equalling our quadrillion or more intra-cellular nodes. Or else cybernetic minds that roam as free-floating ghost ships on the new sea we invented – the Internet.

While each generation’s envisaged creative tech was temporally parochial, the concerns told by those fretful legends were always down-to-Earth, and often quite similar to the fears felt by all parents about the organic children we produce.

Will these new entities behave decently?

Will they be responsible and caring and ethical?

Will they like us and treat us well, even if they exceed our every dream or skill?

Will they be happy and care about the happiness of others?

Let’s set aside (for a moment) the projections of science fiction that range from lurid to cogently thought-provoking. It is on the nearest horizon that we grapple with matters of policy. “What mistakes are we making right now? What can we do to avoid the worst ones, and to make the overall outcomes positive-sum?”

Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these general approaches overlap, they offer distinct implications for what characteristics emerging, synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values. We’ll list those general pathways below.

Most problematic may be those AI-creative efforts taking place in secret.

Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It can be argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed — via reciprocal accountability. For this to happen, there should be diversity of types, designs and minds, interacting under fair competition in a generally open environment.

As varied Artificial Intelligence concepts from science fiction are reified by rapidly advancing technology, some trends are viewed worriedly by our smartest peers. Portions of the intelligentsia — typified by Ray Kurzweil — foresee AI, or Artificial General Intelligence (AGI), as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.

Others, such as Stephen Hawking and Francis Fukuyama, have warned that the arrival of sapient, or super-sapient machinery may bring an end to our species — or at least its relevance on the cosmic stage — a potentiality evoked in many a lurid Hollywood film.

Swedish philosopher Nick Bostrom, in Superintelligence, suggests that even advanced AIs who obey their initial, human-defined goals will likely generate “instrumental subgoals” such as self-preservation, cognitive enhancement, and resource acquisition. In one nightmare scenario, Bostrom posits an AI that — ordered to “make paperclips” — proceeds to overcome all obstacles and transform the solar system into paper clips. A variant on this theme makes up the grand arc in the famed “three laws” robotic series by science fiction author Isaac Asimov.

Taking middle ground, Elon Musk joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research — and its products — open-source and accountable by maximizing transparency.

As one who has promoted those two key words for a quarter of a century (as in The Transparent Society), I wholly approve. Though what’s needed above all is a sense of wide-ranging perspective. For example, the panoply of dangers and opportunities may depend on which of the aforementioned half-dozen paths to AI wind up bearing fruit first. After briefly surveying these potential paths, I’ll propose that we ponder what kinds of actions we might take now, leaving us the widest possible range of good options.

General Approaches to Developing AI

Major Category I: The first approach tried – AI based upon logic, algorithm development and knowledge manipulation systems.

These efforts include statistical, theoretic or universal systems that extrapolate from concepts of a universal calculating engine developed by Alan Turing and John von Neumann. Some of these endeavors start with mathematical theories that posit Artificial General Intelligence (AGI) on infinitely powerful machines, then scale down. Symbolic, representation-based approaches might be called traditional Good Old-Fashioned AI (GOFAI): overcoming problems by applying data and logic.

This general realm encompasses a very wide range, from the practical, engineering approach of IBM’s “Watson” through the spooky wonders of quantum computing all the way to Marcus Hutter’s Universal Artificial Intelligence based on algorithmic probability, which would appear to have relevance only on truly cosmic scales. Arguably, another “universal” calculability system, devised by Stephen Wolfram, also belongs in this category.

This is the area where studying human cognitive processes seems to have real application. As Peter Norvig, Director of Research at Google explains, just this one category contains a bewildering array of branchings, each with passionate adherents. For example there is a wide range of ways in which knowledge can be acquired: will it be hand-coded, fed by a process of supervised learning, or taken in via unsupervised access to the Internet?

I will say the least about this approach, which at-minimum is certainly the most tightly supervised, with every sub-type of cognition being carefully molded by teams of very attentive human designers. Though it should be noted that these systems — even if they fall short of emulating sapience — might still serve as major sub-components to any of the other approaches, e.g. emergent or evolutionary or emulation systems described below.

Note also that two factors must proceed in parallel for this general approach to bear fruit — hardware and software, which seldom develop together in smooth parallel. This, too, will be discussed below.

“We have to consider how to make AI smarter without just throwing more data and computing power at it. Unless we figure out how to do that, we may never reach a true artificial general intelligence.”

— Kai-Fu Lee, author of AI Superpowers: China, Silicon Valley and the New World Order

Major Category II: Machine Learning – self-adaptive, evolutionary or neural nets

Supplied with learning algorithms and exposed to experience, these systems are supposed to acquire capability more or less on their own. In this realm there have been some unfortunate embeddings of misleading terminology. For example, Peter Norvig points out that a term like “cascaded non-linear feedback networks” would have covered the same territory as “neural nets” without the barely pertinent and confusing reference to biological cells. On the other hand, AGI researcher Ben Goertzel replies that we would not have hierarchical deep learning networks if not for inspiration by the hierarchically structured visual and auditory cortex of the human brain, so perhaps “neural nets” are not quite so misleading after all.

While not all such systems take place in an evolutionary setting, the “evolutionist” approach, taken to its farthest interpretation, envisions trying to evolve AGI as a kind of artificial life in simulated environments. There is an established corner of the computational intelligence field that does borrow strongly from the theory of evolution by natural selection. These include genetic algorithms and genetic programming, which involve reproduction mechanisms like crossover that are nothing like adjusting weights in a neural network.

But in the most general sense it is just a kind of heuristic search. Full-scale, competitive evolution of AI would require creating full environmental contexts capable of running a myriad competent competitors, calling for massively more computer resources than alternative approaches.
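To make the borrowing from natural selection concrete, here is a minimal, hypothetical sketch of a genetic algorithm in Python: tournament selection, single-point crossover and point mutation applied to the toy “OneMax” task of maximizing the count of 1-bits in a string. Every name and parameter value here is illustrative, not drawn from any particular research system.

```python
import random

def one_max_ga(bits=20, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm maximizing the number of 1-bits (OneMax)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness of a bit-list is simply its count of ones

    def select():
        # Tournament selection: the fitter of two random individuals reproduces.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, bits)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:             # occasional point mutation
                i = rng.randrange(bits)
                child[i] ^= 1
            children.append(child)
        pop = children                         # the next generation replaces the old
    return max(fitness(ind) for ind in pop)
```

Even this toy illustrates why the approach is “just a kind of heuristic search”: fitness evaluation alone, with no model of the problem, drives the population toward better solutions.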

The best-known evolutionary systems now use reinforcement learning or reward feedback to improve performance, either by trial and error or by observing large numbers of human interactions. Reward systems imitate life by creating the equivalent of pleasure when something goes well (according to the programmers’ parameters), such as increasing a game score. The machine or system does not actually feel pleasure, of course, but experiences an increasing bias to repeat or iterate some pattern of behavior in the presence of a reward — just as living creatures do. A top example would be AlphaGo, which learned by analyzing large numbers of games played by human Go masters, as well as simulated quasi-random games. A successor from Google’s DeepMind, AlphaGo Zero, learned to play and win without any instructions or prior human game data, simply on the basis of point scores amid repeated self-play trials.
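The reward-feedback loop described above can be sketched with tabular Q-learning, vastly simpler than AlphaGo but built on the same principle: an action that leads toward reward acquires an increasing bias to be repeated. The little corridor environment and all parameter values here are invented purely for illustration.

```python
import random

def train_chain_walker(n_states=5, episodes=300, seed=0):
    """Tabular Q-learning on a tiny corridor: start at state 0;
    the only reward (1.0) comes from reaching the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy: mostly exploit the current value estimates.
            if rng.random() < eps:
                a = rng.randint(0, 1)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # The "reward feedback": nudge the estimate toward r + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_chain_walker()
```

After training, “move right” is valued above “move left” in every state; no one programmed that policy in, it emerged from the reward signal alone.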

While OpenCog uses a kind of evolutionary programming for pattern recognition and creative learning, it takes a deliberative approach to assembling components in a functional architecture in which learning is an enabler, not the main event. Moreover, it leans toward symbolic representations, so it may properly belong in category #1.

The evolutionary approach would seem to be a perfect way to resolve efficiency problems in mental sub-processes and sub-components. Moreover, it is one of the paths that have actual precedent in the real world. We know that evolution succeeded in creating intelligence at some point in the past.

Future generations may view 2016-2017 as a watershed for several reasons. First, this kind of system — generally now called “Machine Learning” or ML — has truly taken off in several categories including vision, pattern recognition, medicine and, most visibly, smart cars and smart homes. It appears likely that such systems will soon be able to self-create ‘black boxes’… e.g. an ML program that takes a specific set of inputs and outputs, and explores until it finds the most efficient computational routes between the two. Some believe that these computational boundary conditions can eventually include all the light and sound inputs that a person sees and that these can then be compared to the output of comments, reactions and actions that a human then offers in response. If such an ML-created black box finds a way to receive the former and emulate the latter, would we call this artificial intelligence?

Progress in this area has been rapid. In June 2020, OpenAI released Generative Pre-trained Transformer 3 (GPT-3), a very large model made available through an application programming interface. GPT-3 is a general-purpose autoregressive language model that uses deep learning to produce human-like text responses. It trained on 499 billion “tokens” (words and word fragments), including much text “scraped” from social media, all of Wikipedia, and all of the books in Project Gutenberg. Later, the Beijing Academy of Artificial Intelligence created Wu Dao, an even larger AI of similar architecture that has 1.75 trillion parameters. Until recently, use of GPT-3 was tightly restricted and supervised by the OpenAI organization because of concerns that the system might be misused to generate harmful disinformation and propaganda.
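For intuition about what an “autoregressive language model” does, here is a deliberately tiny, hypothetical sketch: a character-level bigram model that predicts each next character from the previous one alone. GPT-3 uses a transformer conditioned on thousands of prior tokens rather than counts over pairs, but the generation loop (sample a token, append it, condition on it) is the same basic idea.

```python
import random
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count how often each character follows each other character."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length, seed=0):
    """Autoregressive loop: each new character is sampled conditioned
    only on the character just produced."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = model[out[-1]]
        if not counts:                      # no known continuation: stop early
            break
        chars, weights = zip(*counts.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = build_bigram_model("abababab")
sample = generate(model, "a", 5)  # → "ababab" (each character has one successor)
```

Scale the count table up to a neural network over token sequences and you have, in essence, the mechanism that produces GPT-3’s fluent but fact-blind prose.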

Although its ability to translate, interpolate and mimic realistic speech has been impressive, such systems lack anything like a human’s overview perspective on what “makes sense” or conflicts with verified fact. This lack has manifested in some publicly embarrassing flubs. When asked to discuss Jews, women, black people, and the Holocaust, GPT-3 often produced sexist, racist, and otherwise biased and negative responses. In one answer it asserted that “The US Government caused 9/11,” and in another that “All artificial intelligences currently follow the Three Laws of Robotics.” When asked to give advice on mental health issues, it advised a simulated patient to commit suicide. When asked for the product of two large numbers, it gave an answer that was numerically incorrect and too small by about a factor of ten. Critics have argued that such behavior is not unexpected, because GPT-3 models the statistical relationships between words, without any understanding of the meaning and nuances behind each word.

Confidence in this approach is rising, but some find it disturbing that the intermediate modeling steps bear no relation to what happens in a human brain. AI researcher Ali Rahimi claims that “today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis,” and hence that they have more in common with ancient mystery arts, like alchemy. “Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.”

Thoughtful people are calling for methods to trace and understand the hidden complexities within such ML black boxes. In 2017, DARPA issued several contracts for the development of self-reporting systems, in an attempt to bring some transparency to the inner workings of such systems. Physicist/futurist and science fiction author John Cramer suggests that, following what we know of the structure of the brain, such systems will need several semi-independent neural networks with differing training sets and purposes, installed as supervisors. In particular, a neural net trained to recognize veracity needs to be in place to supervise the responses of a large general network like GPT-3.

AI commentator Eric Saund remarks: “The key attribute of Category II is that, scientifically, the big-data/ML approach is not the study of natural phenomena with an aim to replicate them. Instead, theoretically it is engineering science and statistics, and practically it is data science.”

Note: These breakthroughs in software development come ironically during the same period that Moore’s Law has seen its long-foretold “S-Curve Collapse,” after forty years. For decades, computational improvements were driven by spectacular advances in computers themselves, while programming got better at glacial rates. Are we seeing a “Great Flip” when synthetic mentation becomes far more dependent on changes in software than hardware? (Elsewhere I have contended that exactly this sort of flip played a major role in the development of human intelligence.)

Major Category III: Emergentist

Under this scenario AGI emerges out of the mixing and combining of many “dumb” component sub-systems that unite to solve specific problems. Only then (the story goes) might we see a panoply of unexpected capabilities arise out of the interplay of these combined sub-systems. Such emergent interaction can be envisioned happening via neural nets, evolutionary learning, or even some smart car grabbing useful apps off the web.

Along this path, knowledge representation is determined by the system’s complex dynamics rather than explicitly by any team of human programmers. In other words, additive accumulations of systems and skill-sets may foster non-linear synergies, leading to multiplicative or even exponentiated skills at conceptualization.

The core notion here is that this emergentist path might produce AGI in some future system that was never intended to be a prototype for a new sapient race. It could thus appear by surprise, with little or no provision for ethical constraint or human control.

Again, Eric Saund: “This category does however suggest a very important concern for our future and for the article. Automation is a growing force in the complexity of the world. Complex systems are unpredictable and prone to catastrophic failure modes. One of the greatest existential risks for civilization is the flock of black swans we are incubating with every clever innovation we deploy at scale. So this category does indeed belong in a general discussion of AI risks, just not of the narrower form that imagines AGI possessing intentionality like we think of it.”

Of course, this is one of the nightmare scenarios exploited by Hollywood, e.g. in Terminator flicks, which portray a military system entering cognizance without its makers even knowing that it’s happened. Fearful of the consequences when humans do become aware, the system makes fateful plans in secret. Disturbingly, this scenario raises the question: can we know for certain this hasn’t already happened?

Indeed, such fears aren’t so far off-base. However, the locus of emergentist danger is not likely to be defense systems (generals and admirals love off-switches), but rather High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all top universities, combined. Notably, HFT systems are designed in utter secrecy, evading normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best) and insatiable.

Major Category IV: Reverse engineer and/or emulate the human brain. Neuromorphic computing.

Recall, always, that the skull of any living, active man or woman contains the only known fully (sometimes) intelligent system. So why not use that system as a template?

At present, this would seem as daunting a challenge as any of the other paths. On a practical level, considering that useful services are already being provided by Watson, High Frequency Trading (HFT) algorithms, and other proto-AI systems from categories I through III, emulated human brains seem terribly distant.

OpenWorm is an attempt to build a complete cellular-level simulation of the nematode worm Caenorhabditis elegans, of whose 959 cells, 302 are neurons and 95 are muscle cells. The planned simulation, already largely done, will model how the worm makes every decision and movement. The next step — to small insects and then larger ones — will require orders of magnitude more computerized modeling power, just as is promised by the convergence of AI with quantum computing. We have already seen such leaps happen in other realms of biology such as genome analysis, so it will be interesting indeed to see how this plays out, and how quickly.

Futurist-economist Robin Hanson — in his 2016 book The Age of Em — asserts that all other approaches to developing AI will ultimately prove fruitless due to the stunning complexity of sapience, and that we will be forced to use human brains as templates for future uploaded, intelligent systems, emulating the one kind of intelligence that’s known to work.

 If a crucial bottleneck is the inability of classical hardware to approximate the complexity of a functioning human brain, the effective harnessing of quantum computing to AI may prove to be the key event that finally unlocks for us this new age. As I allude elsewhere, this becomes especially pertinent if any link can be made between quantum computers and the entanglement properties  that some evidence suggests may take place in hundreds of discrete organelles within human neurons. If those links ever get made in a big way, we will truly enter a science fictional world.

Once again, we see that a fundamental issue is the differing rates of progress in hardware development vs. software.

Major Category V: Human and animal intelligence amplification

Hewing even closer to ‘what has already worked’ are those who propose augmentation of real-world intelligent systems, either by enhancing the intellect of living humans or else via a process of “uplift” to boost the brainpower of other creatures. Certainly, the World Wide Web already instantiates Vannevar Bush’s vision for a massive amplifier of individual and collective intelligence, though with some of the major tradeoffs of good/evil and smartness/lobotomization that we saw in previous techno-info-amplification episodes, since the discovery of movable type.

Proposed methods of augmentation of existing human intelligence:

· Remedial interventions: nutrition/health/education for all. These simple measures have been shown to raise the average IQ scores of children by at least 15 points, often much more (the Flynn Effect), and there is no worse crime against sapience than wasting vast pools of talent through poverty.

· Stimulation: e.g. games that teach real mental skills. The game industry keeps proclaiming intelligence effects from their products. I demur. But that doesn’t mean it can’t… or won’t… happen.

· Pharmacological: e.g. “nootropics” as seen in films like “Limitless” and “Lucy.” Many of those sci fi works may be pure fantasy… or exaggerations. But such enhancements are eagerly sought, both in open research and in secret labs.

· Physical interventions like trans-cranial stimulation (TCS). Target brain areas we deem to be most effective.

· Prosthetics: exoskeletons, tele-control, feedback from distant “extensions.” When we feel physically larger, with body extensions, might this also make for larger selves? A possibility I extrapolate in my novel Kiln People.

· Biological computing: … and intracellular? The memory capacity of chains of DNA is prodigious. Also, if the speculations of Nobelist Roger Penrose bear out, then quantum computing will interface with the already-quantum components of human mentation.

· Cyber-neuro links: extending what we can see, know, perceive, reach. Whether or not quantum connections happen, there will be cyborg links. Get used to it.

· Artificial Intelligence — in silicon but linked in synergy with us, resulting in human augmentation. Cyborgism extended to full immersion and union.

· Lifespan extension… allowing more time to learn and grow.

· Genetically altering humanity.

Each of these is receiving attention in well-financed laboratories. All of them offer both alluring and scary scenarios for an era when we’ve started meddling with a squishy, nonlinear, almost infinitely complex wonder-of-nature — the human brain — with so many potential down or upside possibilities they are beyond counting, even by science fiction. Under these conditions, what methods of error-avoidance can possibly work, other than either repressive renunciation or transparent accountability? One or the other.

Major Category VI: Robotic-embodied childhood

Time and again, while compiling this list, I have raised one seldom-mentioned fact — that we know only one example of fully sapient technologically capable life in the universe. Approaches II (evolution), IV (emulation) and V (augmentation) all suggest following at least part of the path that led to that one success. To us.

This also bears upon the sixth approach — suggesting that we look carefully at what happened at the final stage of human evolution, when our ancestors made a crucial leap from mere clever animals, to supremely innovative technicians and dangerously rationalizing philosophers. During that definitive million years or so, human cranial capacity just about doubled. But that isn’t the only thing.

Human lifespans also doubled — possibly tripled — as did the length of dependent childhood. Increased lifespan allowed for the presence of grandparents who could both assist in child care and serve as knowledge repositories. But why the lengthening of childhood dependency? We evolved toward giving birth to what are, in effect, fetuses: infants who suck and cry and do almost nothing else for an entire year. When it comes to effective intelligence, our infants are virtually tabula rasa.

The last thousand millennia show humans developing enough culture and technological prowess to keep these utterly dependent members of the tribe alive and learning, until they reached a marginally adult threshold of, say, twelve years — an age when most mammals our size are already declining into senescence. Later, that threshold became eighteen years. Nowadays, if you have kids in college, you know that adulthood can be deferred to thirty. It’s called neoteny: the extension of child-like qualities to ever-increasing spans.

What evolutionary need could possibly justify such an extended decade (or two, or more) of needy helplessness? Only our signature achievement — sapience. Human infants become smart by interacting — under watchful, guided care — with the physical world.

Might that aspect be crucial? The smart neural hardware we evolved and careful teaching by parents are only part of it. Indeed, the greater portion of programming experienced by a newly created Homo sapiens appears to come from batting at the world, crawling, walking, running, falling and so on. Hence, what if it turns out that we can make proto-intelligences via methods I through V… but their basic capabilities aren’t of any real use until they go out into the world and experience it?

Key to this approach would be the element of time. An extended, experience-rich childhood demands copious amounts of it. On the one hand, this may frustrate those eager transcendentalists who want to make instant deities out of silicon. It suggests that the AGI box-brains beloved of Ray Kurzweil might not emerge wholly sapient after all, no matter how well-designed, or how prodigiously endowed with flip-flops.

Instead, a key stage may be to perch those boxes atop little, child-like bodies, then foster them into human homes. Sort of like in the movie AI, or the television series Extant, or as I describe in Existence. Indeed, isn’t this outcome probable for simple commercial reasons, as every home with a child will come with robotic toys, then android nannies, then playmates… then brothers and sisters?

While this approach might be slower, it also offers the possibility of a soft landing for the Singularity. Because we’ve done this sort of thing before.

We have raised and taught generations of human beings — and yes, adoptees — who are tougher and smarter than us. And 99% of the time they don’t rise up proclaiming, “Death to all humans!” No, not even in their teenage years.

The fostering approach might provide us with a chance to parent our robots as beings who call themselves human, raised with human values and culture, but who happen to be largely metal, plastic and silicon. And sure, we’ll have to extend the circle of tolerance to include that kind, as we extended it to other sub-groups, before them. Only these humans will be able to breathe vacuum and turn themselves off for long space trips. They’ll wander the bottoms of the oceans and possibly fly, without vehicles. And our envy of all that will be enough. They won’t need to crush us.

This approach — to raise them physically and individually as human children — is the least studied or mentioned of the six general paths to AI… though it is the only one that can be shown to have led — maybe twenty billion times — to intelligence in the real world.

To be continued… See Part II.
