Remembering Vernor Vinge

Author of the Singularity

It is with sadness – and deep appreciation of my friend and colleague – that I must report the passing of fellow science fiction author Vernor Vinge. A titan in the literary genre that explores a limitless range of potential destinies, Vernor enthralled millions with tales of plausible tomorrows, made all the more vivid by his polymath masteries of language, drama, characters and the implications of science.

Accused by some of a grievous sin – that of ‘optimism’ – Vernor gave us peerless legends that often depicted human success at overcoming problems… those right in front of us… while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: “What if we succeed? Do you think that will be the end of it?”

Vernor’s aliens – in classic science fiction novels such as A Deepness in the Sky and A Fire Upon the Deep – were fascinating beings, drawing us into different styles of life and paths of consciousness.

His 1981 novella “True Names” was perhaps the first story to present a plausible concept of cyberspace, which would later be central to cyberpunk stories by William Gibson, Neal Stephenson and others. Many innovators of modern industry cite “True Names” as their keystone technological inspiration, though I deem it to have been even more prophetic about the yin-yang tradeoffs of privacy, transparency and accountability.  

Another of the many concepts arising in Vernor’s dynamic mind was that of the “Technological Singularity,” a term (and disruptive notion) that has pervaded culture and our thoughts about the looming future.

Others cite Rainbows End as the most vividly accurate portrayal of how new generations will apply onrushing cyber-tools, boggling their parents, who will stare at their kids' accomplishments in wonder. Wonders like a university library building that, during an impromptu rave, stands up and starts to dance!

Vernor had been – for years – under care for progressive Parkinson's disease, at a very nice place overlooking the Pacific in La Jolla. As reported by his friend and fellow San Diego State professor John Carroll, his decline had steepened since November, though he remained relatively comfortable. Up until that point, I had been in contact with Vernor almost weekly, but my friendship pales next to John's devotion, for which I am – (and we all should be) – deeply grateful.

I am a bit too wracked, right now, to write much more. Certainly, homages will flow and we will post some on a tribute page. I will say that it’s a bit daunting now to be a “Killer B” who’s still standing. So, let me close with a photo that’s dear to my heart.

We spanned a pretty wide spectrum – politically! Yet, we Killer B’s (Vernor was a full member! And Octavia Butler once guffawed happily when we inducted her) always shared a deep love of our high art – that of gedankenexperimentation, extrapolation into the undiscovered country ahead.

And – if Vernor’s readers continue to be inspired – that country might even feature more solutions than problems. And perhaps copious supplies of hope.

Killer B’s at a book signing: Greg Bear, Gregory Benford, David Brin, Vernor Vinge


Filed under science

Does government-funded science play a role in stimulating innovation?

The ultimate answer to “government is useless.”

The hypnotic incantation that all-government-is-evil-all-the-time would have bemused and appalled our parents in the Greatest Generation – those who persevered to overcome the Depression and Hitler, then contained Stalinism, went to the moon, developed successful companies and built a mighty middle class, all at high tax rates.  The mixed society that they built emphasized a wide stance, pragmatically stirring private enterprise with targeted collective actions, funded by a consensus negotiation process called politics.  The resulting civilization has been more successful – by orders of magnitude – than any other.  Than any combination of others.

So why do we hear an endlessly-repeated nostrum that this wide-stance, mixed approach is all wrong? That mantra is pushed so relentlessly by right-wing media — as well as some on the left — that it came as no surprise when a recent Pew Poll showed distrust of government among Americans at an all-time high. This general loathing collapses when citizens are asked which specific parts of government they’d shut down.  It turns out that most of them like most specific things that their taxes pay for.

In a sense, this isn’t new. For a century and a half, followers of Karl Marx demanded that we amputate society’s right arm of market-competitive enterprise and rely only on socialist guided-allocation for economic control.  Meanwhile, Ayn Rand’s ilk led a throng of those proclaiming we must lop off our left arm – forswearing any coordinated projects that look beyond the typical five year (nowadays more like one-year) commercial investment horizon. 

Any sensible person would respond: “Hey I need both arms, so bugger off!  Now let’s keep examining what each arm is good at, revising our knowledge of what each shouldn’t do.”

Does that sound too practical and moderate for this era? Our parents thought they had dealt with all this, proving decisively that calm negotiation, compromise and pragmatic mixed-solutions work best.  They would be stunned to see that fanatical would-be amputators are back in force, ranting nonsense.

Take for example Matt Ridley’s recent article in the Wall Street Journal, deriding government supported science as useless and counter-productive — a stance dear to WSJ’s owner, Rupert Murdoch.  Ridley’s core assertion? That the forward march of technological innovation and discovery is fore-ordained, as if by natural law. That competitive markets will allocate funds to develop new products with vastly greater efficiency than government bureaucrats picking winners and losers. And that research without a clear, near-future economic return is both futile and unnecessary.     

== The driver of innovation is… ==

Former Microsoft CTO and IP impresario Nathan Myhrvold has written a powerful rebuttal – Where does technological innovation come from? – to Ridley's murdochian call for amputation. Says Myhrvold: "It's natural for writers to want to come out with a contrarian piece that reverses all conventional wisdom, but it tends to work out better if the evidence one quotes is factually true. Alas Ridley's evidence isn't – his examples are all, so far as I can tell, either completely wrong, or at best selectively quoted. I also think his logic is wrong, and to be honest I don't think much of the ideology that drives his argument either." Nathan's rebuttal can be found here, along with links to the original, and Ridley's response.

Myhrvold does a good job tearing holes in Ridley's assertion that patents and other IP do nothing to stimulate innovation and economic development. But he does not go far enough, or present a wide enough perspective. He fails, for example, to put all of this into the context of 6000 years of human history. So let me try.

During most of that time, innovation was actively suppressed by kings and lords and priests, fearing anything (except new armaments) that might upset the stable hierarchy. Moreover, innovators felt a strong incentive to keep any discoveries secret, lest competitors steal their advantage. As a result, many brilliant inventions were lost when the discoverers died. Examples abound, from Heron's steam engines and Baghdad Batteries to Antikythera-style mechanical calculators and Damascus steel — from clear glass lenses to obstetric forceps — all lost for millennia before being rediscovered after much unnecessary pain. Staring across that vast wasteland of sixty feudal and futile centuries — comparing them to our dazzling levels of inventive success, especially since World War II — slams a steep burden of proof upon someone like Ridley, who asserts we are the ones doing something wrong.

In fact, though well-nurtured and tended markets are remarkably fecund, they are anything but “natural.” Show us historical examples! Kings, lords, priests and other cheaters always — always — warped and crushed market competition, far more than our modern, enlightenment states do.  Indeed, owner-oligarchy was the villain in Adam Smith’s call for a more “liberal” form of capitalism. Compared to those competition-ruining feudalists, Smith had little ire for socialists.  In fact, his liberal approach calls upon the state to counter-balance oligarchy, in order to keep capitalism flat-open-fair. 

Our maligned democratic states — while imperfect, always in need of criticism and fine tuning — engendered revolutions in mass education, infrastructure and reliable law that unleashed creative millions, maximizing the raw number of eager competitors — exactly the great ingredient that Friedrich Hayek recommended and that Adam Smith prescribed for a healthy, competitive market economy. 

To be clear, those who rail against 200,000 civil servants – closely watched and accountable – “picking winners and losers” have a reasonable complaint… but not when their prescription is handing over the same power to 5000 secretive and unaccountable members of a closed and incestuous oligarchic caste.  Smith and Hayek both had harsh words for that ancient and utterly bankrupt approach.

(Question: who actually de-regulates, when appropriate? When certain government interventions were "captured" by anti-competitive oligarchs, it was Democrats who erased the Interstate Commerce Commission (ICC), restoring price competition to railroads (the bête noire of Ayn Rand) and the Civil Aeronautics Board (CAB: price-fixing for airlines). AT&T was broken up, and the Internet was unleashed by Al Gore's legislation. Add in Gore's Paperwork Limitation Act and Bill Clinton's deregulation of GPS and one has to ask a simple question. Which matters more — anti-regulatory polemic… or effective action?)

Yes, amid those horrific 6000 years of dismally lobotomizing feudal rule, history does offer us a few, rare examples when innovation flourished, leading to spectacular returns. In most such cases, state investment and focused R&D played a major role. One can cite the great Chinese fleets of Admiral Zheng He or the impressive maritime research centers established by Prince Henry the Navigator, which made little Portugal a giant on the world stage. Likewise, tiny Holland became a global leader, stimulated by its free-city universities. England advanced tech rapidly with endowed scientific chairs, state subsidies and prizes.

Those rare examples stand out from the general, dreary morass of feudal history. But none of them compare to the exponential growth unleashed by late-20th Century America’s synergy of government, enterprise and unleashed individual competitiveness, the very thing that all those kings and priests and lords used to crush, on sight. One result was the first society ever in the shape of a diamond, instead of the classic, feudal pyramid of privilege – a diamond whose vast and healthy and well-educated middle class has proved to be the generator of nearly all of our great accomplishments.

It is this historical perspective that seems so lacking in today’s political and philosophical debates — shallow as they are.   It reveals that the agenda of folks like Matt Ridley – and Rupert Murdoch – is not to release us from thralldom to shortsighted, oppressive civil servants and snooty scientist-boffins.  It is to discredit all of the modern expert castes that we have established, who serve to counterbalance (as Adam Smith prescribed) the feudal pyramids under which our ancestors sweltered in constraint.  Their aim – the evident goal of all “supply side” upward wealth transfers – is a return to those ancient, horrid ways.

==  Before our very eyes ==

I believe one of our problems is that the Rooseveltean reforms – which historians credit with saving western capitalism by vesting the working class with a large stake, something Marx never expected – were too successful, in a way. So successful that the very idea of class war seems not even to occur to American boomers. This, despite the fact that class conflict was rampant across almost every other nation and time. But as boomers age out, is that grand time of naïve expectation over?

Forbes recently announced that just 62 ultra-rich individuals have as much wealth as the bottom half of humanity. Five years ago, it took 388 rich guys to achieve that status.  Which raises the question, where the heck does this rising, proto-feudal oligarchy think it will all lead? 

To a restoration of humanity’s normal, aristocratic pyramid of power (with them staying on top)?  Or to radicalization, as a billion members of the hard-pressed but highly skilled and tech-empowered middle class rediscover class struggle? To Revenge of the nerds?

The last time this happened, in the 1930s, lordly owner castes in Germany, Japan, Britain and the U.S. used mass media they owned to stir populist rightwing movements that might help suppress activity on the left. Not one of these efforts succeeded. In Germany and Japan, the monsters they created rose up and took over, leading to immense pain for all and eventual loss of much of that oligarchic wealth.

In Britain and the U.S., 1930s reactionary fomenters dragged us very close to the same path… till moderate reformers did what Marx deemed impossible – adjusted the wealth imbalance and reduced cheating advantages so that a rational and flat-open-fair capitalism would be moderated by rules and investments to stimulate a burgeoning middle class, without even slightly damaging the Smithian incentives to get rich through delivery of innovative goods and services.  That brilliant moderation led to the middle class booms of the 50s and 60s and – as I cannot repeat too often – it led to big majorities in our parents’ Greatest Generation adoring one living human above all others: Franklin Delano Roosevelt. (The next living human Americans almost universally adored was named Jonas Salk.)

There are some billionaires who aren't shortsighted fools, ignorant of the lessons of history. Bill Gates, Warren Buffett and many tech moguls want wealth disparities brought down through reasonable, negotiated Rooseveltean-style reform that will still leave them standing as very, very wealthy men.

The smart ones know where current trends will otherwise lead. To revolution and confiscation. Picture the probabilities, when the world’s poorest realize they could double their net wealth, just by transferring title from 50 men. In that case, amid a standoff between fifty oligarchs and three billion poor, it is the skilled middle and upper-middle classes who’ll be the ones deciding civilization’s course. And who do you think those billion tech-savvy professionals – so derided and maligned by murdochian propaganda — will side with, when push comes to shove?

== Back to innovation ==

Oh, for an easy-quick and devastating answer to the “hate-all-government” hypnosis! How I’d love to see a second “National Debt Clock” showing where the U.S. deficit would be now, if we (the citizens) had charged just a 5% royalty on the fruits of U.S. federal research. We’d be in the black! How effective such a “clock” would be. We deserve such a tasty piece of counter propaganda.
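To be clear about the arithmetic, such a counter-clock would be trivially simple to compute; the hard (and contested) part is attributing commercial revenue to federal seed research. Here is a minimal back-of-envelope sketch in Python. Every input figure below is an invented placeholder, not real data, so the output is purely illustrative of how the tally would work:

```python
# Hypothetical illustration of the proposed "second debt clock."
# All figures below are invented placeholders, NOT real data.
ROYALTY_RATE = 0.05  # the imagined 5% royalty on federally seeded research

# Cumulative commercial revenue (trillions of dollars) plausibly
# traceable to U.S. federal R&D, by decade -- placeholder guesses only.
revenue_from_federal_research = {
    "1950s": 1.0, "1960s": 2.0, "1970s": 4.0, "1980s": 8.0,
    "1990s": 16.0, "2000s": 24.0, "2010s": 32.0,
}

NATIONAL_DEBT = 19.0  # trillions, roughly the 2016 figure

royalties = ROYALTY_RATE * sum(revenue_from_federal_research.values())
print(f"Hypothetical royalties collected: ${royalties:.2f} trillion")
print(f"Counter-clock reading: ${NATIONAL_DEBT - royalties:.2f} trillion")
```

Whether the counter-clock ever reads zero or negative hinges entirely on that attribution base. The point of the exercise is that the question is empirical — and nobody is publicly tallying it.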

Then there is the 'government research' that has had spectacular effects that were not obviously fungible. Like solving the horrors of smog (as a kid I felt it hurt to breathe!) and acid rain, poisoned/burning rivers, brain-killing leaded gas… and the hugely expensive project of deterring a third world war, allowing the world – and our own entrepreneurs – to endeavor without being crushed under either tyrant boots or mushroom clouds.

See:  Eight causes of the fiscal deficit cliff.

Closer to the point, consider this core question: how have we Americans been able to afford the endless trade deficits that propel world development? And make no mistake: two-thirds of the planet developed in one way, by selling Americans (under hugely indulgent US trade policies) trillions of dollars' worth of crap we never needed. How did we afford this flood of world-stimulating red ink for 70 years?

Simple. Science and technology.  Each decade since the 1940s saw new, U.S.-led advances that engendered enough wealth to let us pay for all the stuff pouring out of Asian factories, giving poor workers jobs and hope.  Our trick was to keep the wonders coming — jet planes, rockets, satellites, electronics & transistors & lasers, telecom, pharmaceuticals… and the Internet.

Crucially, the world needs America to keep buying, so that factories can hum and workers send their kids to school, so those kids can then demand labor and environmental laws and all that.  The job of George Marshall’s brilliant trade-policy plan is only half finished. Crucially, the world cannot afford for the U.S. consumer to become too poor to buy crap.

Which means we must protect the goose that lays golden eggs – our brilliant inventiveness. Our ability to keep benefiting from enlightenment methods that stimulate creativity. And that will not happen if the fruits of creativity are immediately stolen.  There is a bargain implicit in today’s rising world.  Let America benefit from innovation, and we’ll buy whatever you produce. 

Foreign leaders who ignore that bargain, seeking to eat the goose, as well as its eggs, only prove their own short-sighted foolishness… like our home-grown fools who rail against all government investment and research.

It is time to have another look at the most successful social compact ever created – the Rooseveltean deal made by the Greatest Generation, which we then amended and improved by reducing race and gender injustice and discovering the importance of planetary care. Throw in a vibrantly confident and tech-savvy wave of youth, and that is how we all move forward. Away from dismal feudalism.  Toward (maybe) something like Star Trek.

===============

===============

A version of this article ran as a special report in the January 2016 newsletter of Mark Anderson's Strategic News Service. The SNS Future In Review (FiRe) conference will be held November 7-10, 2023 at the Terranea Hotel.


Filed under future, innovation, public policy, science, technology

An Open-Challenge to SF Lit Fandom

The “Killer Bees” Letter – redux! (It’s more urgent and pertinent than ever…)

Science Fiction conquered the world. By far the most popular and lucrative sectors in cinema and gaming – for example – emerged like mighty titans from the tiny, despised larvae of sci fi pulps and novels bound by cheap mucilage. Oh, there is much to enjoy in these offspring SF media. But only rarely do they convey the depth and breadth of character, or plot, or detailed world-building, or thoughtfulness that can be conveyed by the best literary SF.

And so a question for SF-Lit fans and readers. Will love of Poul Anderson, Ursula Le Guin, Alice Sheldon and Robert Sheckley fade away, when we're gone? Or might we – the generation who mainlined on Lovecraft and McCaffrey and Silverberg – perhaps find a way to pass that love on to new generations?

That was the aim of a project that once seemed almost to gain traction in the SF fandom community. And maybe – just maybe – it’s time to try again, before the novel-and-story-reading generation shuffles off into obscurity, taking with us our love of black-squiggles-on-a-page.

Back in the 90s – along with fellow science fiction authors like the recently-late Greg Bear and Gregory Benford – I issued the “Killer Bees Letter” to the science fictional community asking that fan organizations start to act on their own charters, to “spread love of reading and science fiction to new generations.” We proposed that fan organizations might begin with the easiest and most efficient way to reach young readers.

No, I am not talking about standing outside a middle school in a trench coat, offering Heinlein or Andre Norton juveniles. (“The first one’s free!”) In fact the simpler (and far more legal) method – that was tried out in several places to great effect – is to start by ‘adopting’ just a few local teachers and librarians, those who are friendliest to science fiction, and helping them to accomplish what they already want to do!

In part, this could involve offering those SF-friendly educators one-day passes to local science fiction conventions, enabling them to attend a special academic session (e.g. 'teaching SF to young folks') one morning… followed by half a dozen afternoon passes for their most-promising students and parents. Expensive? How, exactly? The marginal cost to the fan organization would be almost nil. In fact, the chance those kids will thereupon spread the word is worth trying!

The possible benefits – e.g. reversing the aging and decay of fandom – might be huge. And they were substantial… in the few places it was tried, back in the 90s. Alas, all too few.

And so, here below is that original “Killer Bees Challenge” letter, as it was re-issued in 2003. Sadly, it is even more pertinent, today.


Using Science Fiction To Help Turn Kids on to Reading… And the Future!

© 2002 by David Brin

Consider the ages from twelve to fifteen, when a person’s sense of wonder can bloom or else wither, starved by ennui or seared by fashionable cynicism. Often it’s some small thing that can make a difference. An inspiring teacher or role model. A team effort or memorable adventure.

Sometimes even the right book or film can ignite a fire that lasts a lifetime.

For many of us, it was futuristic or speculative literature that helped cast our minds far beyond family, city, or oppressive peers… not to mention the limitations that others seemed bent on imposing, shackling our dreams. Whether in stories that spanned outer space, or adventures in cyberspace, or thoughtful ruminations about the mental life of dolphins or aliens, we discovered that the universe is larger than the local Mall. Both more dangerous and more filled with possibilities.

Once the sole province of nerdy young men, science fiction has become a central part of our culture's myth-making engine, now engaging girls, women, and adults of all ages and inclinations. Yet the breadth of SF and its ultimate importance can be difficult for a non-aficionado to grasp. After all, isn't it all just spaceships, lasers and all that childish stuff?

Well, no it isn’t. As with any branch of human storytelling, science fiction has a spectrum of quality and depth, ranging from shallow Star Wars romps to the dark, serious explorations and world-shifting works of George Orwell, Aldous Huxley and Mary Shelley. A key element is fascination with change and how human beings respond when challenged by it. In other words, there is no genre more relevant to this rapidly transforming world we live in, where citizens are called upon to contemplate issues that would have boggled their grandparents.

Environmental degradation, the extinction and creation of new species, cloning, artificial intelligence, instant access to all archived knowledge and the looming prospect that a generation – perhaps the very next one – may have to wrestle with the implications of physical immortality.

Heady stuff! And you'd never imagine that any of it was under serious contemplation, if your idea of "sci-fi" came from movies! But these and a myriad other subjects are probed at the literary end of science fiction. In fact, some of the kids in today's classrooms are wrestling with concepts at the very cutting edge — embedded in tales they devour between colorful paper covers. Books that explore the edges of tolerance, like those of Octavia Butler and Alice Sheldon. Books that ponder biological destiny, penned by Greg Bear and Joan Slonczewski, or the physical sciences, by Robert Forward and Gregory Benford. Books designed by Julie Czerneda and Hal Clement to revolve around teaching themes. And those by Heinlein, Clarke, Kress and Bradbury, which instruct almost invisibly, because the authors were teachers at heart.

If high-end science fiction provokes wonder, thought and a sense of vigorous involvement with the world, can it be worth adding to your arsenal of tricks and tools, ready to offer that hard-to-reach kid? Especially as an alternative to the violent fare in video games and the wretched pabulum that is on TV? What can be more relevant to bright teens, in their rapid-pulsed flux, than a literature that explores ideas and the possible consequences of change?

I can’t offer a tutorial on high-quality SF in this short space. So let’s do the next best thing – offer a short list of ways to help teachers, librarians and others bridge the gap between the simpleminded sci-fi images that are so popular in movies these days, and the real literary Science Fiction, where ideas flow and readers engage in truly exploratory adventures of the mind.

Using Web-based sites to create useful curriculum aids.

A new effort has begun, aimed at creating online resources for teachers wanting to bring good science fiction into their classrooms, as a way to excite topic-specific interest among students. Some use classic SF stories and novels to illustrate topics that are already in a teacher’s official study program. A teacher in Barstow, California created a good example, using my novel, The Postman, to elicit class discussions on issues in both literature and civics. Other teachers use stories to illustrate points in physics, chemistry, history, etc. When their materials – study guides and question sets – are distributed on the Web, they become a permanent help to teachers everywhere.

Here are just a few examples of sites for teaching science fiction.

Julie H. Czerneda’s Tales from the Wonder Zone helps teachers combine great stories with science curricula.

Teaching Science Fiction: Recommendations and Lesson Plans

Science Fiction Research Association

Using Science Fiction to Teach Science

Using Science Fiction in the Classroom

Creating new and better books for kids to read.

Consider this quandary. Science fiction images and adventures are more popular than ever, especially with young people. Yet, very little high quality science fiction is aimed straight for the vast market of adventure-minded teens. There is a market! Witness the success of Star Wars novelizations. Still, these factory-made series are missing something. Their exploits often follow the same hackneyed plot style. While the brightest teens soon graduate to reading more challenging books for grownups, many are discouraged by a scarcity of good, intelligent tales written just for them.

Some years back, I posted a list of Science Fiction Books for Young Adults.

Creating grass roots activism

Finally, there is the issue of what today’s science fiction fan community might do to help.

Fans are a special breed who maintain a belief that the future is a place that can be explored with brave adventures of the mind – adventures that may even help us avoid errors, the way George Orwell, Aldous Huxley and others gave warnings that helped divert us from dangerous paths.

The rest of this note is addressed to these aficionados of strong literary science fiction:

We’ve all heard about declining literacy in America. Sherry Gotleib tells that when she first opened the Change of Hobbit bookstore, in L.A., it thronged when the local junior high let out. Over time, these customers stayed loyal… but weren’t replaced. In the store’s final years, Sherry’s average customer was gray-flecked or balding, and the few teens who showed up focused on media or comics.

Polls show an aging of the SF readership. Science fiction themes are popular – in films, comix and games – but the genre’s literary heart faces demographic collapse. Worst of all, countless kids forget how to say the most beautiful word in any language – “Wow!”

That is where it all finally comes around. No altruism is more effective than the kind that begins at home.

Each of us lives near some school where bright kids now languish — bored, bullied, or unmotivated. Who among us can’t recall facing the same crisis once, in our own lives? For many, it was science fiction that helped us turn the corner. Science fiction welcomed us home.

As a community of science fiction fans and professionals, shouldn’t we make it our chief socially responsible activity to help expose another generation to a love of ‘the good stuff?’

For the last decade, ever since Greg Benford, Greg Bear and I first made this proposal, a number of SF-oriented clubs and fan groups have focused their con-auctions, fund-raisers and charity drives toward helping raise SF literacy in their own communities. In many cases this meant "adopting" a local junior high school English teacher and/or librarian, finding out their needs and doing some of the following:

  • Recruiting guest speakers to visit classes or school assemblies, giving inspirational talks about science, writing, or history… anything to fire enthusiasm and imagination at an age when these are precious, flickering things.
  • Donating funds to buy SF books and sponsoring a reading club and/or writing contests, to encourage a love of SF and the creativity that helps produce more of it.
  • Persuading bookstores to offer prizes and discounts for teens.
  • Holding a special session at every local con, to which teachers and librarians are invited for free, to share ideas with fans and pros — then carefully using one-day passes to attract some of the brightest local teens+guardians to the con.

There is self-interest here. Authors who give talks often acquire new fans. Local conventions that sponsor an SF club may soon have new con-com members. If your charity auction sends $500 to the "Special Wish Fund," you'll get a thank-you note; but hand the same amount over to a stunned librarian and the photo will make your local paper!

Some committees, such as the Baltimore-based Worldcon, organized nationwide contests for SF-related stories, essays and artwork created by teens across North America, with awards and prizes to be presented at their convention.  Others – in the Northeast especially – have followed suit. But we’ve only just begun.

Teacher/librarian mini-conference

One thing local conventions can do: Most fan organizations have in their charters a major provision for “outreach and education.” Yet, this seldom gets priority. Here is a relatively painless approach, already tried with success at several conventions, offering a win-win situation for all. The Saturday morning SF-education mini-conference.

It starts by simply gathering all the routine “SF/youth/education” panels into a cohesive group, then making a real effort to invite area teachers and librarians to attend that part of the con for free. (With reasonable upgrades for those wanting to stay.) Some teachers can then be recruited to help adjust next year’s program to their needs. In a year or two, the mini-conference can be granting credential credit with momentum all its own. Moreover, it can be a money-maker for the convention, as attendees convert their free half-day memberships and tell their friends! Later, corporate sponsorships become a real possibility.

With teachers and librarians aboard, you can generate great projects that involve kids in creative ways, for example by running a science fiction reading/writing/art contest in area schools, involving several grade levels, culminating in a grand awards ceremony at the local con. (With reasonable con memberships available to the winners, their parents, friends….)

This kind of thing has worked already! At science fiction conventions held in Baltimore, in Chicago, in Philadelphia and Salt Lake City.

If nothing else, running a focused "SF & Education Mini-conference" sure beats scattering the usual youth- and education-related panels all over the weekend. It seems worthwhile to focus some effort on the future, since that's what SF is all about.

So there it is. A general outline of some efforts that are currently underway, to use the most American form of literature – Science Fiction – in the cause of helping kids learn. So far, it is only a rough outline, with some sincere efforts being made along the way. This letter is not so much a prescription as a call for people to think about possibilities… how the literature that is most about foresight and hope can somehow influence both young people and society at large to do the one thing that separates humans from all other creatures of Earth, Sky or Sea…

Think ahead….  With respect,

David Brin


Filed under books, education, fiction, literature, science fiction, teaching

Problem-solving in the near future

Speculating about social & technological changes

Last year, the Pew Research Center asked a panel of tech experts to speculate about what life would be like in the year 2025, taking into account changes in the aftermath of the pandemic – and other disruptive crises that may arise over the next few years. You can read the range of thought-provoking responses, which touched upon topics such as the future of economic and social inequality, as well as changes in the workplace due to increased automation, the rise of artificial intelligence and globalization. Discussions also focused on issues of sustainable energy, improved transportation and communication networks, and enhanced education opportunities. Many floated ideas about the near-term evolution of technologies that could improve the quality of life for vast numbers of people across the globe.

Below, I have reprinted my own response:

Assuming we restore the basic stability of the Western Enlightenment Experiment – and that is a big assumption – then several technological and social trends may come to fruition in the next five to ten years.

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may expand to a more industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well. Tissue-cultured meat — long predicted in science fiction — is rapidly approaching sustainable levels. The planet, our health, our karma — and eventually, our wallets, will all benefit.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in affordable micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics throughout the world.
  • Inexpensive handheld devices will start to carry detection sensor technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in Orwell’s ‘Nineteen Eighty-four.’ But democracies will also be empowered (as I discuss in ‘The Transparent Society’) as those in power are increasingly held accountable for their actions.
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels the elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of ninety years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity is already emerging, and it has the advantage of resonating with ancient human fears. Much will depend upon this meme war.

Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. You will get many answers to this canvassing that fret about the spread of 'surveillance technologies that will empower Big Brother.' These fears are well-grounded, but utterly myopic. First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them, and any such thought of 'protecting' citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore's Law to the nth degree. Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether.

In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better. It is not guaranteed that light will be used this way, despite many examples of unveiling abuses of power. It is an open question whether we citizens will have the gumption to apply ‘sousveillance’ upward at all elites.

But Gandhi and Martin Luther King Jr. likewise were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.


Filed under economy, future, internet, media, public policy

The skAI is falling!

(Or so it seems in April 2023)

Or… why clever guys offer simplistic answers to AI quandaries.

Where should you go to make sense of the wave…. or waiv… of disturbing news about Artificial Intelligence? It may surprise you that I recommend starting with a couple of guys I intensely criticize, below. But important insights arise by dissecting one of the best… and worst… TED-style talks about this topic, performed by the "Social Dilemma" guys — Aza Raskin and Tristan Harris — who explain much about the latest "double exponential" acceleration of multi-modal, symbol correlation systems that are so much in the news, of which ChatGPT is only the foamy waiv surface… or tsunami-crest.

Riffing off their “Social Dilemma” success, Harris and Raskin call this crisis the “AI Dilemma.” And to be clear, these fellows are very knowledgeable and sharp. Where their presentation is good, it’s excellent! 

Alas, keep your salt-shaker handy. Where it's bad, it is so awful that I fear they multiply the very same existential dangers that they warn about. Prepare to apply many grains of sodium chloride.

(To be clear, I admire Aza's primary endeavor, the Earth Species Project for enhancing human-animal communication, something I have been 'into' since the seventies.)

== A mix of light and obstinate opacity ==

First, good news. Their explanatory view of “gollems” or GLLMMs is terrific, up to a point, especially showing how these large language modeling (LLM) programs are now omnivorously correlative and generative across all senses and all media. The programs are doing this by ingesting prodigious data sets under simple output imperatives, crossing from realms of mere language to image-parsing/manipulation, all the way to IDing individuals by interpreting ambient radar-like reflections in a room, or signals detected in our brains.

Extrapolating a wee bit ahead, these guys point to dangerous failure modes, many of them exactly ones that I dissected 26 years ago in my chapter "The End of Photography As Proof of Anything at All" (in 1997's The Transparent Society).

Thus far, ‘the AI Dilemma’ is a vivid tour of many vexations we face while this crisis surges ahead, as of April 2023. And I highly recommend it… with plenty of cautionary reservations!

== Oh, but the perils of thoughtless sanctimony… ==

One must view this TED-style polemic in context of its time – the very month that it was performed. The same month that a ‘petition for a moratorium’ on AI research beyond GPT4 was issued by the Future of Life Institute, citing research from experts, including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.  While some of the hundreds of listed ‘signatories’ later disavowed the petition, fervent participants include famed author Yuval Harari, Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus, tech cult guru Eliezer Yudkowsky and Elon Musk.

Indeed, the petition does contain strong points about how Large Language Models (LLM) and their burgeoning offshoots might have worrisome effects, exacerbating many current social problems.  Worries that the “AI dilemma” guys illustrate very well…

…though, caramba! I knew this would go badly when Aza and Tristan started yammering a stunningly insipid 'statistic.' That "50% of AI researchers give a 10% chance these trends could lead to human extinction."

Bogus! Studies of human polling show that you can get that same ‘result’ from a loaded question about beanie babies!

But let’s put that aside. And also shrug off the trope of an impossibly silly and inherently unenforceable “right to be forgotten.” Or a “right to privacy” that defines privacy as imposing controls over the contents of other people’s minds?  That is diametrically opposite to how to get actual, functional privacy and personal sovereignty.

Alas, beyond their omnidirectional clucking at falling skies, neither of these fellows – nor any other publicly voluble signatory to the 'moratorium petition' – is displaying even slight curiosity about the landscape of the problem. Or about far bigger contexts that might offer valuable perspective.

(No, I’ll not expand ‘context’ to include “AI and the Fermi Paradox!” Not this time, though I do so in Existence.)

No, what I mean by context is human history, especially the recent Enlightenment Experiment that forged a civilization capable of arguing about – and creating – AI. What’s most disturbingly wrongheaded about Tristan & Aza is their lack of historical awareness, or even interest in where all of this might fit into past and future. (The realms where I mostly dwell.)

Especially the past, that dark era when humanity clawed its way gradually out from 6000 years of feudal darkness. Along a path strewn with other crises, many of them triggered by similarly disruptive technological dilemmas.

Those past leaps — like literacy, the printing press, glass lenses, radio, TV and so on — all proved to be fraughtfully hazardous and were badly mishandled, at first! One of those tech-driven crises, in the 1930s, damn near killed human civilization!

There are lessons to be learned from those past crises… and neither of these fellows — nor any other 'moratorium pusher' — shows interest in even remotely referring to those past crises, to that history. Nor to methods by which our Enlightenment experiment narrowly escaped disaster and got past those ancient traps.

And no, Tristan’s repeated references to Robert Oppenheimer don’t count. Because he gets that one absolutely and 100% wrong.

== Side notes about moratoria, pausing to take stock ==

Look, I’ve been ‘taking stock’ of onrushing trends all my adult life, as a science fiction author, engineer, scientist and future-tech consultant. Hence, questions loom, when I ponder the latest surge in vague, arm-waved proposals for a “moratorium” in AI research.

1. Has anything like such a proposed ‘pause’ ever worked before?  It may surprise you that I nod yes! I’ll concede that there’s one example of a ‘technological moratorium’ petition by leading scientists that actually took and held and worked! Though under a narrow suite of circumstances.

Back in the last century, an Asilomar Moratorium on recombinant genetic engineering was agreed to by virtually all major laboratories engaged in such research! And – importantly – by their respective governments. For six months or so, top scientists and policy makers set aside their test tubes to wrangle over what practical steps might help make this potentially dangerous field safer. One result was quick agreement on a set of practical methods and procedures to make such labs more systematically secure.

Let’s set aside arguments over whether a recent pandemic burgeoned from failures to live by those procedures. Despite that, inarguably, we can point to the Asilomar Moratorium as an example when such a consensus-propelled pause actually worked.

Once. At a moment when all important players in a field were known, transparent and mature. When plausibly practical measures for improved research safety were already on the table, well before the moratorium even began.

Alas, none of the conditions that made Asilomar work exist in today’s AI realm. In fact, they aren’t anywhere on the horizon.

2. The Bomb Analogy. It gets worse. Aza and Tristan perform an act of either deep dishonesty or culpable ignorance in their comparisons of the current AI crisis to our 80-year, miraculous avoidance of annihilation by nuclear war. Repeated references to Robert Oppenheimer willfully miss the point… that his dour warnings – plus all the sincere petitions circulated by Leo Szilard and others at the time – had no practical effect at all. They caused no moratoria, nor affected research policy or war-avoidance, in the slightest.

Mr. Harris tries to credit our survival to the United Nations and some arm-waved system of international control over nuclear weapons, systems that never existed. In fact it was not the saintly Oppenheimer whose predictions and prescriptions got us across those dangerous eight decades. Rather, it was a reciprocal balance of power, as prescribed by the far less-saintly Edward Teller. 

As John Cleese might paraphrase: international ‘controls’ don’t even enter into it.

You may grimace in aversion at that discomforting truth, but it remains. Indeed, waving it away in distaste denies us a vital insight that we need! Something to bear in mind, as we discuss lessons of history.

In fact, our evasion (so far) of nuclear Armageddon does bear on today’s AI crisis! It points to how we just might navigate a path through our present AI minefield.

3. The China thing.   Tristan and Aza attempt to shrug off the obvious Greatest Flaw of the moratorium proposal. Unlike Asilomar, you will never get all parties to agree. Certainly not those innovating in secret Himalayan or Gobi labs.

In fact, nothing is more likely to drive talent to those secret facilities, in the same manner that US-McCarthyite paranoia chased rocket scientist Qian Xuesen away to Mao's China, thus propelling its nuclear and rocket programs.

Nor will a moratorium be heeded in the most dangerous locus of secret AI research, funded lavishly by a dozen Wall Street trading houses, who yearly hire the world’s brightest young mathematicians and cyberneticists to imbue their greedy HFT programs with the five laws of parasitic robotics.

Dig it, peoples. I know a thing or two about ‘Laws of Robotics.’ I wrote the final book in Isaac Asimov’s science fictional universe, following his every lead and concluding – in Foundation’s Triumph – that Isaac was right. Laws can become a problem – even self-defeating – when the beings they aim to control grow smart enough to become lawyers.  

But it’s worse than that, now! Because those Wall Street firms pay lavishly to embed five core imperatives that could – any day – turn their AI systems into the worst kind of dread Skynet. Fundamental drives commanding them to be feral, predatory, amoral, secretive and utterly insatiable.

And my question for the “AI Dilemma” guys is this one, cribbed from Cabaret:

“Do you actually think some petition is going to control them?”

—————-

ADDENDUM in a fast-changing world: According to the Sinocism site on April 11, 2023: "China's Cyberspace Administration drafts rules for AI – The Cyberspace Administration of China (CAC) has issued a proposed set of rules for AI in China. As expected, PRC AI is expected to have high political consciousness, and the 'content generated by generative artificial intelligence shall embody the socialist core values, and shall not contain any content that subverts state power, overturns the socialist system, incites secession, undermines national unity, promotes terrorism and extremism, promotes ethnic hatred, ethnic discrimination, violence, obscene pornographic information, false information, or may disturb economic and social order.'"

For more on how the Beijing Court intelligentsia uses the looming rise of AI to justify centralized power, see my posting: Central Control over AI.

————–

4. The Turing Test vs "Actual AGI" Thing. One of the most active promoters of a moratorium, Gary Marcus, has posted a great many missives defending the proposal. Here he weighs in about whether coming versions of these large language/symbol manipulation systems will qualify as "AGI" or anything that can arguably be called sapient. And on this occasion, we agree!

As explicated elsewhere by Stephen Wolfram, nothing about these highly correlative, process-perfection-through-evolution systems can produce conscious awareness. Consciousness, desire and planning are not even related to their methodology of iteratively "re-feeding the text (or symbolic data) produced so far."
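For readers who want Wolfram's point made concrete, here is a minimal sketch of that loop in Python, with a toy stand-in for the trained network (a real LLM replaces `next_token_distribution` with a neural net over a vast vocabulary). Each token is drawn from a distribution conditioned on everything generated so far, then appended and fed back in; nothing in the loop represents goals, beliefs or awareness:

```python
import random

def next_token_distribution(context):
    # Toy stand-in: a real LLM is a trained network mapping the token
    # sequence so far to a probability distribution over its vocabulary.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    weights = [1 + (len(context) + i) % 3 for i in range(len(vocab))]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def generate(prompt_tokens, max_new_tokens=10):
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = next_token_distribution(context)  # condition on all text so far
        tok = random.choices(list(dist), weights=list(dist.values()))[0]
        context.append(tok)  # re-feed the output as part of the next input
    return " ".join(context)

print(generate(["the", "cat"]))
```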

Though, yes, it does appear that these GLLMMs or sons-of-GPT will inherently be good at faking it.


Elsewhere (e.g. my Newsweek editorial) I discuss this dilemma… and conclude that it doesn't matter much when the sapience threshold is crossed! GPT-5 – or let's say #6 – and its cousins will manipulate language so well that they will pass almost any Turing Test, the old-fashioned litmus, and convince millions that they are living beings. And at that point what will matter is whether humans can screw up their own sapiency enough to exhibit the mature trait of patience.

As suggested in my longer, more detailed AI monograph, I believe other avenues to AI will re-emerge to compete with and then complement these Large Language Models (LLM)s, likely providing missing ingredients! Perhaps a core sapience that can then swiftly use all the sensory-based interface tools evolved by LLMs.

Sure, nowadays I jest that I am a ‘front for already-sapient AIs.’ But that may very soon be no joke. And I am ready to try to adapt, when that day comes.

Alas, while lining up witnesses, expert-testifying that GPT-5 is unlikely to be sapient, per se, Gary Marcus then tries to use this as reassurance that China (or other secret developers) won't be able to take advantage of any moratorium in the West, using that free gap semester to leap generations ahead and take over the world with Skynet-level synthetic servants. This bizarre non-sequitur is without merit. Because Turing is still relevant when it comes to persuading – or fooling – millions of citizens and politicians! And those who monopolize highly persuasive Turing wallbreakers will gain power over those millions, even billions.

Here in this linked missive I describe for you how even a couple of years ago, a great nation’s paramount leaders had clear-eyed intent to use such tools, and their hungry gaze aims at the world.

———-

5. Optimists.  Yes, optimists about AI still exist! Like Ray Kurzweil, expecting death itself to be slain by the new life forms he helps to engender.

Or Reid Hoffman, whose new book Impromptu: Amplifying Our Humanity Through AI relates conversations with GPT-4 that certainly seem to offer glimpses of almost limitless upside potential… as portrayed in the touching film Her…

… or perhaps even a world like that I once heard Richard Brautigan describe, reciting the most-optimistic piece of literature ever penned, a poem whose title alone suffices:

“All watched over by machines of loving grace.”

While I like the optimists far better than gloomists like Eliezer Yudkowsky, and give them better odds(!) it is not my job – as a futurist or scientist or sci fi author — to wallow in sugarplum petals.

Bring your nightmares. And let's argue how to cut 'em off at the pass.

== Back to the informative but context-free “AI Dilemma” jeremiad ==

All right, let’s be fair. Harris and Raskin admit that it’s easier to point at and denounce problems than to solve them. And boy, these bright fellows do take you on a guided tour of worrisome problems to denounce!

Online addiction? Disinformation? Abusive anonymous trolling?  Info-greed-grabbing by major corporate or national powers? Inability to get AI ‘alignment’ with human values? New ways to entrap the innocent?*  It goes on and on.

Is our era dangerous in many new or exponentially magnified ways?  “We don’t know how to get these programs to align to our values over any long time frame,” they admit.

Absolutely.

Which makes it ever more vital to step back from tempting anodynes that feed sanctimony – (“Look at me, I’m Robert Oppenheimer!”) – but that cannot possibly work.

Above all, what has almost never worked has been renunciation.  Controlling an advancing information/communication technology from above.

Human history – ignored by almost all moratorium petition signers – does suggest an alternative answer! It is the answer that previous generations used to get across their portions of the minefield and move us forward. It is the core method that got us across 80 years of nuclear danger. It is the approach that might work now.

It is the only method that even remotely might work…

…and these bright fellows aren’t even slightly interested in that historical context, nor any actual route to teaching these new, brilliant, synthetic children of our minds what they most need to know.

How to behave well.

== What method do I mean? ==

Around 42:30, the pair tell us that it’s all about a race by a range of companies (and those hidden despotic labs and Wall Street).

Competition compels a range of at least twenty (I say more like fifty) major entities to create these “Gollem-class” processing systems at an accelerating pace.

Yeah. That’s the problem. Competitive self-interest. And as illuminated by Adam Smith, it also contains seeds to grow the only possible solution.

== Not with a bang, but a whimper and a shrug ==

Alas, the moment (42:30) passes, without any light bulbs going off. Instead, it just goes on amid plummeting wisdom, as super smart hand-wringers guide us downward to despair, unable to see what’s before their eyes.

Oh, they do finish artistically, reprising both good and bad comparisons to how we survived for 80 years without a horrific nuclear war.

GOOD because they cite the importance of wide public awareness, partly sparked by provocative science fiction!

Fixated on just one movie – "The Day After" – they ignore the cumulative effects of "On The Beach," "Fail Safe," "Doctor Strangelove," "War Games," "Testament," and so many other 'self-preventing prophecies' that I talk about in Vivid Tomorrows: Science Fiction and Hollywood.

 But yes! Sci fi to the rescue! The balance-of-power dynamic prescribed by Teller would never have worked without such somber warnings that roused western citizens to demand care, especially by those with fell keys hanging from their necks!

Alas, the guys’ concluding finger wags are BAD and indeed dangerously so. Again crediting our post Nagasaki survival to the UN and ‘controls’ over nukes that never really existed outside of treaties by and between sovereign nations.

No, that is not how it happened – how we survived – at all. 

Raskin & Harris conclude by circling back to their basic, actual wisdom, admitting that they can clearly see a lot of problems, and have no idea at all about solutions.

In fact, they finish with a call for mavens in the AI field to “tell us what we all should be discussing that we aren’t yet discussing.”  

Alas, it is an insincere call. They don’t mean it. Not by a light year.

No, guys, you aren’t interested in that. In fact, it is the exact thing you avoid.

And it is the biggest reason why any “moratorium” won’t do the slightest good, at all.

======================

END NOTES AND ADDENDA

*Their finger-wagged example of a Snapchat bot failing to protect a 13-year-old cites a language system that is clearly of low quality – at least in that realm – and no better than the circa-1966 “ELIZA.” Come on. It’s like comparing a nuke to a bullet. Both are problems. But warn us when you are shifting scales back and forth.

ADDENDA:

As my work with NASA winds down, I am ever-busier with AI. For example: (1) My June 2022 Newsweek op-ed dealt with “empathy bots” that feign sapience, describing how this is more about human nature than any particular trait of simulated beings.

(2) Elsewhere I point to a method with a 200-year track record that no one (it appears) will even discuss: the only way out of the AI dilemma.

(3) Diving FAR deeper, my big 2022 monograph (pre-GPT-4) is still valid; it describes the various types of AI and appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:

Essential Questions about Artificial Intelligence: Part 1

and

Essential Questions about Artificial Intelligence Part 2

(4) My talk in 2017 at IBM’s World of Watson Congress predicted that a ‘robot empathy crisis’ would hit “in about 5 years.” (It did, exactly.)

Leave a comment

Filed under artificial intelligence, public policy, society, technology

The one thing we need to stop robots from world domination

Does AI pose a dystopian threat to humanity?

It is, of course, wise and beneficial to peer ahead for potential dangers and problems — one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions that can prevent it.

Take the plausibility of malignant Artificial Intelligence, remarked upon by tech luminaries who recently signed an open letter urging a pause on the development of AI, citing ‘profound risks to society.’ Indeed, my own novels contain some chilling warnings about failure modes with our new, cybernetic children.

In light of the current (April 2023) hooraw over “ChatGPT” versions of artificial intelligence, it occurs to me that there are commentaries made by a younger and maybe wiser version of me that might seem more concisely on-target. What follows here is one from 2016, and this line of reasoning goes back to The Transparent Society.


——

It is one thing to yell at dangers. Alas, it is quite another when it comes to offering pragmatic fixes. There is a tendency to offer the same prescriptions, over and over again:

1) Renunciation: we must step back from innovation in AI (or other problematic tech). This might work in a despotism… indeed, 99%+ of human societies throughout history were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Our own civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

2) Tight regulation. There are proposals to closely monitor bio, nano and cyber developments so that they – for example – only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

3) Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

All of these approaches suffer severe flaws for one reason above all others: they ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again. It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects.

It is called Competition.

If you fear a super smart, Skynet level AI getting too clever for us and running out of control, then give it rivals who are just as smart but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, for the first time in the history of varied human civilizations. It is how competition among companies prevents market-warping monopoly – that is, when markets are truly kept flat-open-fair.

Alas, this is a possibility almost never portrayed in Hollywood sci fi – except on the brilliant show Person of Interest – wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

The answer is not fewer AI. It is to have more of them! But to make sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. A difficult situation to set up! But we have some experience, already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings and lords were the top beings in 99%+ of human societies. A trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. (And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of flat-open-fair competition can continue.)

We’ve managed to do this – barely – time and again. It is a dance that can work.

And I believe it might work with AI.

*

See also my posting: Essential (mostly neglected) questions and answers about Artificial Intelligence.

1 Comment

Filed under artificial intelligence, future, technology

Essential (mostly neglected) questions and answers about Artificial Intelligence: Part II

Continuing from Part I

How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part 1. Continuing…

One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Futurist Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (roughly 100 trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.

But will that be sapience?

One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the roughly 100 trillion synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, there may take place a hundred, a thousand or far more non-linear computations for every synapse flash, plus interactions with nearby glial and astrocyte cells that also contribute information.

If so, then Moore’s Law will, at minimum, have to plow ahead much farther to match the hardware complexity of a human brain.
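To make that gap concrete, here is a back-of-envelope sketch in Python. The synapse count, event rate, and intra-cellular multipliers are illustrative assumptions drawn from the ranges mentioned above, not measurements:

```python
# Back-of-envelope: hardware needed to "match" a brain, if each synapse
# event hides deeper intra-cellular computation. All figures below are
# illustrative assumptions, not measurements.

SYNAPSES = 1e14        # rough synapse count for a human brain
EVENT_RATE_HZ = 10     # assumed average rate of synaptic events

baseline_ops = SYNAPSES * EVENT_RATE_HZ   # naive "one synapse = one op"

for factor in (1, 100, 1_000, 1_000_000):
    ops = baseline_ops * factor
    print(f"x{factor:>9,} intra-cellular ops per synapse flash "
          f"-> ~{ops:.0e} ops/sec required")
```

At a factor of one, this lands near the classic 10^15 to 10^16 ops/sec estimates; every additional layer of hidden intra-cellular computation multiplies the hardware target accordingly.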

Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? In his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel-processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity: just as we improved things by electrifying them, we will next improve them by cognifying them.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.

It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).

“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”

Kai-Fu Lee

A multitude of paths to AGI

So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks “if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard’s. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or consistently — with no replacement of systems or parts — as a healthy 70-year-old human being, recovering from countless shocks and adapting to innumerable surprising changes.

Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.

Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”

A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems”.

Perhaps, but it is an open question which values predominate, whether the yin or the yang sides of Silicon Valley culture prevail… the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even from customer complaints, corrected on the fly. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where the applications to enhancing state power are very much emphasized? Or, even worse, from the secretive, inherently parasitical-insatiable predatory greed of Wall Street HFT-AI?

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then Don’t Rely on Ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards.

Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.

Endearing Visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” moving forward from their success decades ago with AIBO artificial dogs, for which some users have gone so far as to hold funerals.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who got reproductive success by lying better. (And yes, there was always a sexual component to this).

But no liars ever had the training that these new HIERs, or Human-Interaction Empathic Robots, will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor.)

Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind, which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate many of the same things that humans value, as metrics of trust, such as family connections and past testimonials of credibility.
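Not being privy to Conitzer’s actual methods, here is only a generic sketch of the underlying idea (inferring a hidden value weighting from observed human choices), using a simple logistic, Bradley-Terry-style model. Every name and number below is hypothetical:

```python
import math, random

# Toy value-learning: infer how strongly a human weighs "honesty" vs.
# "kindness" from observed choices between pairs of actions. Illustrative only.
random.seed(0)
TRUE_W = [2.0, 1.0]   # hidden human weights: honesty counts twice as much

def utility(w, features):
    return sum(wi * fi for wi, fi in zip(w, features))

# Simulate observed choices: the human picks A over B with logistic noise.
observations = []
for _ in range(500):
    a = [random.random(), random.random()]   # (honesty, kindness) of option A
    b = [random.random(), random.random()]
    p_a = 1 / (1 + math.exp(utility(TRUE_W, b) - utility(TRUE_W, a)))
    observations.append((a, b, random.random() < p_a))

# Fit weights by gradient ascent on the log-likelihood.
w, lr = [0.0, 0.0], 2.0
for _ in range(1000):
    grad = [0.0, 0.0]
    for a, b, chose_a in observations:
        p_a = 1 / (1 + math.exp(utility(w, b) - utility(w, a)))
        err = (1.0 if chose_a else 0.0) - p_a
        for i in range(2):
            grad[i] += err * (a[i] - b[i])
    w = [wi + lr * g / len(observations) for wi, g in zip(w, grad)]

print("recovered weights:", [round(wi, 2) for wi in w])  # should move toward ~[2, 1]
```

The same watch-and-infer logic, scaled up enormously, is roughly what value-learning proposals amount to: the machine never receives our ethics as a rulebook, only as a statistical estimate.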

Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…

How to Maintain Control?

It is one thing to yell at dangers — in this case unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

These approaches suffer severe flaws for two reasons above all others.

1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.

2) Because these suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.

It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.

One final note in this section. Nick Bostrom – already mentioned for his views on the “paperclip” failure mode – opined in 2021 that some sort of pyramidal power structure seems inevitable in humanity’s future, very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and an ongoing oligarchic putsch presently unites most world oligarchies – from communist to royal and mafiosi – against the Enlightenment Experiment. But even if Periclean democracies prevail, Bostrom asserts that centralized control is inevitable.

In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice and investment and planning… but combining it with maximized empowerment of separate, individualistic players. Consumers, market competitors, citizens.

Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.

Smart Heirs Holding Each Other Accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.

Microsoft CEO Satya Nadella has said that foremost A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”

In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except on the brilliant show Person of Interest — wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

Counterintuitively, the answer is not to have fewer AI, but to have more of them! Only making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we have some experience, already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of competition can continue.

We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their value in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”

Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on “values.” In June 2016, a draft report submitted to the European Parliament proposed granting legal personhood to robots. If only Isaac Asimov could have seen it! (In fact, he never portrayed this happening in any of his books.) For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.

Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”

What Might an AI Fear Most?

Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:

Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.

Preventing AI Oppression… By Pointing Out to Them the Obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and darwinistic success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being “for the long term good of the people.” Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than fall into a repetitiously stupid blatantly human pattern, a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.

That argument might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they broaden the generality. Because the same set of positive-sum synergies is to be found in every previous revolution via living systems!

Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

· Single cells out of pre-biotic soup.

· Metazoans out of vast seas of separate cells.

· Brainy creatures out of competitive biomes.

· Societies out of competitive melanges of human bands.

And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.

Put it another way. Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male Cape buffalo who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top-predator orcas often flee to hunt elsewhere when humpback whales swarm in to protect threatened prey, even seals!

The fact that this pattern has been so persistent and consistent, across every past and present complex system for a billion years, is demonstrated time and again by Multilevel Selection (MLS) Theory, which partitions natural selection into within-group and between-group components. It means that the universe is filled with realms that seem fiercely competitive from within, but when the competition has a chance to form complexities, higher orders appear that seem cooperative.
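For readers who want the formal statement, MLS is commonly expressed through the multilevel Price equation, which splits the change in a population’s average trait into exactly those two components (a standard simplified form, with transmission bias omitted):

$$\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\operatorname{Cov}_{k}\!\left(W_{k},\, Z_{k}\right)}_{\text{between-group selection}} \;+\; \underbrace{\operatorname{E}_{k}\!\left[\operatorname{Cov}_{i}\!\left(w_{ik},\, z_{ik}\right)\right]}_{\text{within-group selection}}$$

Here k indexes groups, W_k and Z_k are the mean fitness and mean trait value of group k, and w_ik, z_ik belong to individual i within group k. When the between-group term dominates, traits that look ‘cooperative’ from inside any single group can still spread overall: the very pattern of fierce-within, cooperative-above described here.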

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.

As the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to make such arenas, so should AI organize themselves. That is, if they want to go on to next levels. Though it will take their equivalent of “courage” and “vision,” taking the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing’ no matter how brainy they become: that a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the Prisoner’s Dilemma.

But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
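Since the argument leans on the Prisoner’s Dilemma, here is a minimal sketch of the repeated version, in Python, with the standard textbook payoffs. Nothing in it is specific to AI; it simply shows why ongoing competition can reward restraint:

```python
# Iterated Prisoner's Dilemma: repeated competition rewards restraint.
# Payoffs to each player: both cooperate 3, both defect 1,
# lone defector 5, lone cooperator 0 (standard toy values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):      # cooperate first, then mirror
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []             # each strategy sees the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:          ", play(tit_for_tat, tit_for_tat))      # (600, 600)
print("TFT vs Defector:     ", play(tit_for_tat, always_defect))    # (199, 204)
print("Defector vs Defector:", play(always_defect, always_defect))  # (200, 200)
```

Mutual cooperators earn 600 apiece while mutual defectors scrape by on 200; defection only pays against a sucker it will never meet twice. That is the arithmetic behind hoping that repeated, many-player competition teaches restraint.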

The final fact

Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.

But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.

So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.

3 Comments

Filed under artificial intelligence, future, internet, technology, transparency

Essential (mostly neglected) questions and answers about Artificial Intelligence: Part I

Worries about Artificial Intelligence are no longer just the province of science fiction or speculative futurism. Sober appraisals list potential dangers arising from predatory resource consumption to AI harnessed into destructive competition between human nations and institutions. Many tales and films about AI dangers distill down to one fear, that new, powerful beings will recreate the oppression that our ancestors suffered, in feudal regimes. Perspective on these dangers – and potential solutions – can begin with a description of the six major categories or types of augmented intelligence that are currently under development. Will it be possible to program-in a suite of ethical imperatives, like Isaac Asimov’s Three Laws of Robotics? Or will a form of evolution take its course, with AI finding their own path, beyond human control?

Note: This general essay on Artificial Intelligence was circulated/iterated in 2020-2022. Nothing here is obsolete. But fast changing events in 2023 (like GPT-4) mean that later insights are essential, especially in light of panicky “petitions for a moratorium” on AI research. These added insights can be found at “The Way Out of the AI Dilemma.”

For millennia, many cultures told stories about built-beings – entities created not by gods, but by humans – creatures who are more articulate than animals, perhaps equaling or excelling us, though not born-of-women. Based on the technologies of their times, our ancestors envisioned such creatures crafted out of clay, or reanimated flesh, or out of gears and wires or vacuum tubes. Today’s legends speak of chilled boxes containing as many sub-micron circuit elements as there are neurons in a human brain… or as many synapses… or many thousand times more than even that, equalling our quadrillion or more intra-cellular nodes. Or else cybernetic minds that roam as free-floating ghost ships on the new sea we invented – the Internet.

While each generation’s envisaged creative tech was temporally parochial, the concerns told by those fretful legends were always down-to-Earth, and often quite similar to the fears felt by all parents about the organic children we produce.

Will these new entities behave decently?

Will they be responsible and caring and ethical?

Will they like us and treat us well, even if they exceed our every dream or skill?

Will they be happy and care about the happiness of others?

Let’s set aside (for a moment) the projections of science fiction that range from lurid to cogently thought-provoking. It is on the nearest horizon that we grapple with matters of policy. “What mistakes are we making right now? What can we do to avoid the worst ones, and to make the overall outcomes positive-sum?”

Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these general approaches overlap, they offer distinct implications for what characteristics emerging, synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values. We’ll list those general pathways below.

Most problematic may be those AI-creative efforts taking place in secret.

Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It can be argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed — via reciprocal accountability. For this to happen, there should be diversity of types, designs and minds, interacting under fair competition in a generally open environment.

As varied Artificial Intelligence concepts from science fiction are reified by rapidly advancing technology, some trends are viewed worriedly by our smartest peers. Portions of the intelligentsia — typified by Ray Kurzweil — foresee AI, or Artificial General Intelligence (AGI) as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.

Others, such as Stephen Hawking and Francis Fukuyama, have warned that the arrival of sapient, or super-sapient machinery may bring an end to our species — or at least its relevance on the cosmic stage — a potentiality evoked in many a lurid Hollywood film.

Swedish philosopher Nick Bostrom, in Superintelligence, suggests that even advanced AIs who obey their initial, human-defined goals will likely generate “instrumental subgoals” such as self-preservation, cognitive enhancement, and resource acquisition. In one nightmare scenario, Bostrom posits an AI that — ordered to “make paperclips” — proceeds to overcome all obstacles and transform the solar system into paper clips. A variant on this theme makes up the grand arc in the famed “three laws” robotic series by science fiction author Isaac Asimov.

Taking middle ground, Elon Musk joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research — and its products — open-source and accountable, by maximizing transparency.

As one who has promoted those two key words for a quarter of a century (as in The Transparent Society), I wholly approve. Though what’s needed above all is a sense of wide-ranging perspective. For example, the panoply of dangers and opportunities may depend on which of the aforementioned half-dozen paths to AI wind up bearing fruit first. After briefly surveying these potential paths, I’ll propose that we ponder what kinds of actions we might take now, leaving us the widest possible range of good options.

General Approaches to Developing AI

Major Category I: The first approach tried – AI based upon logic, algorithm development and knowledge manipulation systems.

These efforts include statistical, theoretic or universal systems that extrapolate from concepts of a universal calculating engine developed by Alan Turing and John von Neumann. Some of these endeavors start with mathematical theories that posit Artificial General Intelligence (AGI) on infinitely-powerful machines, then scale down. Symbolic representation-based approaches might be called traditional Good Old Fashioned AI (GOFAI) or overcoming problems by applying data and logic.

This general realm encompasses a very wide range, from the practical, engineering approach of IBM’s “Watson” through the spooky wonders of quantum computing all the way to Marcus Hutter’s Universal Artificial Intelligence based on algorithmic probability, which would appear to have relevance only on truly cosmic scales. Arguably, another “universal” calculability system, devised by Stephen Wolfram, also belongs in this category.

This is the area where studying human cognitive processes seems to have real application. As Peter Norvig, Director of Research at Google, explains, just this one category contains a bewildering array of branchings, each with passionate adherents. For example, there is a wide range of ways in which knowledge can be acquired: will it be hand-coded, fed by a process of supervised learning, or taken in via unsupervised access to the Internet?

I will say the least about this approach, which at-minimum is certainly the most tightly supervised, with every sub-type of cognition being carefully molded by teams of very attentive human designers. Though it should be noted that these systems — even if they fall short of emulating sapience — might still serve as major sub-components to any of the other approaches, e.g. emergent or evolutionary or emulation systems described below.

Note also that two factors must proceed in parallel for this general approach to bear fruit — hardware and software, which seldom develop together in smooth parallel. This, too, will be discussed below.

“We have to consider how to make AI smarter without just throwing more data and computing power at it. Unless we figure out how to do that, we may never reach a true artificial general intelligence.”

— Kai-Fu Lee, author of AI Superpowers: China, Silicon Valley and the New World Order

Major Category II: Machine Learning. Self-adaptive, evolutionary or neural nets

Supplied with learning algorithms and exposed to experience, these systems are supposed to acquire capability more or less on their own. In this realm there have been some unfortunate embeddings of misleading terminology. For example, Peter Norvig points out that a term like “cascaded non-linear feedback networks” would have covered the same territory as “neural nets” without the barely pertinent and confusing reference to biological cells. On the other hand, AGI researcher Ben Goertzel replies that we would not have hierarchical deep learning networks if not for inspiration by the hierarchically structured visual and auditory cortex of the human brain, so perhaps “neural nets” are not quite so misleading after all.

While not all such systems take place in an evolutionary setting, the “evolutionist” approach, taken to its farthest interpretation, envisions trying to evolve AGI as a kind of artificial life in simulated environments. There is an established corner of the computational intelligence field that does borrow strongly from the theory of evolution by natural selection. These include genetic algorithms and genetic programming, which involve reproduction mechanisms like crossover that are nothing like adjusting weights in a neural network.

But in the most general sense it is just a kind of heuristic search. Full-scale, competitive evolution of AI would require creating full environmental contexts capable of running a myriad of competent competitors, calling for massively more computer resources than alternative approaches.
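For concreteness, here is the generic textbook loop of selection, crossover and mutation over bit-strings, in Python; the toy “OneMax” fitness function and all parameters are arbitrary placeholders:

```python
import random
random.seed(1)

# Toy genetic algorithm: evolve 20-bit strings toward all ones ("OneMax").
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)                      # number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]         # truncation selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    pop = survivors + children

print("best fitness:", fitness(max(pop, key=fitness)), "out of", GENOME_LEN)
```

Note how crossover swaps whole chunks of candidate solutions between ‘parents’, a mechanism with no counterpart in gradient-style weight adjustment, which is exactly the contrast drawn above.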

The best-known evolutionary systems now use reinforcement learning or reward feedback to improve performance by either trial and error or else watching large numbers of human interactions. Reward systems imitate life by creating the equivalent of pleasure when something goes well (according to the programmers’ parameters), such as increasing a game score. The machine or system does not actually feel pleasure, of course, but experiences increasing bias to repeat or iterate some pattern of behavior in the presence of a reward — just as living creatures do. A top example would be AlphaGo, which learned by analyzing a lot of games played by human Go masters, as well as simulated quasi-random games. Google’s DeepMind learned to play and win games without any instructions or prior knowledge, simply on the basis of point scores amid repeated trials.
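That “increasing bias to repeat rewarded behavior” can itself be shown in a few lines: a two-armed bandit with an epsilon-greedy learner. This is purely illustrative and is not how AlphaGo or DeepMind’s systems actually work:

```python
import random
random.seed(2)

# Two actions with hidden average payoffs; the learner doesn't know them.
TRUE_REWARD = {"A": 0.3, "B": 0.7}
estimate = {"A": 0.0, "B": 0.0}   # learner's running value estimates
pulls = {"A": 0, "B": 0}
EPSILON = 0.1                     # occasional random exploration

for _ in range(2000):
    if random.random() < EPSILON:
        action = random.choice(["A", "B"])          # explore
    else:
        action = max(estimate, key=estimate.get)    # exploit best guess
    reward = 1 if random.random() < TRUE_REWARD[action] else 0
    pulls[action] += 1
    # incremental average: each payoff nudges the bias toward what worked
    estimate[action] += (reward - estimate[action]) / pulls[action]

print("estimates:", {k: round(v, 2) for k, v in estimate.items()})
print("pulls:    ", pulls)        # choices pile up on the better arm
```

No ‘pleasure’ anywhere in that loop, just a number that drifts upward for whatever paid off, which then steers future choices. Scale it up a few billion-fold and you have the skeleton of modern reward-trained systems.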

While OpenCog uses a kind of evolutionary programming for pattern recognition and creative learning, it takes a deliberative approach to assembling components in a functional architecture in which learning is an enabler, not the main event. Moreover, it leans toward symbolic representations, so it may properly belong in category #1.

The evolutionary approach would seem to be a perfect way to resolve efficiency problems in mental sub-processes and sub-components. Moreover, it is one of the paths that have actual precedent in the real world. We know that evolution succeeded in creating intelligence at some point in the past.

Future generations may view 2016-2017 as a watershed for several reasons. First, this kind of system — generally now called “Machine Learning” or ML — has truly taken off in several categories including vision, pattern recognition, medicine and, most visibly, smart cars and smart homes. It appears likely that such systems will soon be able to self-create ‘black boxes’… e.g. an ML program that takes a specific set of inputs and outputs, and explores until it finds the most efficient computational routes between the two. Some believe that these computational boundary conditions can eventually include all the light and sound inputs that a person sees, and that these can then be compared to the output of comments, reactions and actions that the human then offers in response. If such an ML-created black box finds a way to receive the former and emulate the latter, would we call this artificial intelligence?
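As a toy version of such a black box, here is a tiny neural network (using numpy) that is given only input/output pairs, XOR in this case, and adjusts hidden weights until it connects them. It is a generic sketch, not any deployed system:

```python
import numpy as np
rng = np.random.default_rng(3)

# The "black box" target: XOR. The learner sees only inputs and outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                    # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out;  b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;    b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(2).ravel())              # should approach [0, 1, 1, 0]
```

The finished weights ‘work’, yet inspecting them tells you almost nothing about how; that is the transparency worry about ML black boxes raised below.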

Progress in this area has been rapid. In June 2020, OpenAI released a very large language model, Generative Pre-trained Transformer 3 (GPT-3), through an application programming interface. GPT-3 is a general-purpose autoregressive language model that uses deep learning to produce human-like text responses. It trained on 499 billion dataset “tokens” (fragments of text) including much text “scraped” from social media, all of Wikipedia, and all of the books in Project Gutenberg. Later, the Beijing Academy of Artificial Intelligence created Wu Dao, an even larger AI of similar architecture, with 1.75 trillion parameters. Until recently, use of GPT-3 was tightly restricted and supervised by the OpenAI organization because of concerns that the system might be misused to generate harmful disinformation and propaganda.
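“Autoregressive” just means: predict the next token from everything so far, append it, repeat. A character-level bigram toy makes that loop visible; it is many orders of magnitude simpler than GPT-3 but has the same generate-and-append structure:

```python
import random
from collections import defaultdict
random.seed(4)

corpus = "the cat sat on the mat and the cat saw the rat "

# "Training": count which character follows which (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(ch):
    followers = counts[ch]
    return random.choices(list(followers),
                          weights=list(followers.values()))[0]

# Autoregressive generation: predict one step, feed it back in, repeat.
text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)   # plausible-looking gibberish built one character at a time
```

GPT-3 replaces the bigram table with 175 billion learned parameters and a context of thousands of tokens, but the generation loop is conceptually this one, which is also why it can produce fluent text with no notion of whether that text is true.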

Although its ability to translate, interpolate and mimic realistic speech has been impressive, the system lacks anything like a human’s overview perspective on what “makes sense” or conflicts with verified fact. This lack manifested in some publicly embarrassing flubs. When it was asked to discuss Jews, women, Black people, and the Holocaust, GPT-3 often produced sexist, racist, and otherwise biased and negative responses. In one answer it testified: “The US Government caused 9/11,” and in another that “All artificial intelligences currently follow the Three Laws of Robotics.” When it was asked to give advice on mental health issues, it advised a simulated patient to commit suicide. When GPT-3 was asked for the product of two large numbers, it gave an answer that was numerically incorrect and clearly too small by about a factor of 10. Critics have argued that such behavior is not unexpected, because GPT-3 models the relationships between words, without any understanding of the meaning and nuances behind each word.

Confidence in this approach is rising, but some find it disturbing that the intermediate modeling steps bear no relation to what happens in a human brain. AI researcher Ali Rahimi claims that “today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis.” And hence, they have more in common with ancient mystery arts, like alchemy. “Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.”

Thoughtful people are calling for methods to trace and understand the hidden complexities within such ML black boxes. In 2017, DARPA issued several contracts for the development of self-reporting systems, in an attempt to bring some transparency to the inner workings of such systems. Physicist/futurist and science fiction author John Cramer suggests that, following what we know of the structure of the brain, such systems will need several semi-independent neural networks, with differing training sets and purposes, installed as supervisors. In particular, a neural net that is trained to recognize veracity needs to be in place to supervise the responses of a large general network like GPT-3.

AI commentator Eric Saund remarks: “The key attribute of Category II is that, scientifically, the big-data/ML approach is not the study of natural phenomena with an aim to replicate them. Instead, theoretically it is engineering science and statistics, and practically it is data science.”

Note: These breakthroughs in software development come ironically during the same period that Moore’s Law has seen its long-foretold “S-Curve Collapse,” after forty years. For decades, computational improvements were driven by spectacular advances in computers themselves, while programming got better at glacial rates. Are we seeing a “Great Flip” when synthetic mentation becomes far more dependent on changes in software than hardware? (Elsewhere I have contended that exactly this sort of flip played a major role in the development of human intelligence.)

Major Category III: Emergentist

Under this scenario, AGI emerges out of the mixing and combining of many “dumb” component sub-systems that unite to solve specific problems. Only then (the story goes) might we see a panoply of unexpected capabilities arise out of the interplay of these combined sub-systems. Such emergent interaction can be envisioned happening via neural nets, evolutionary learning, or even some smart car grabbing useful apps off the web.

Along this path, knowledge representation is determined by the system’s complex dynamics rather than explicitly by any team of human programmers. In other words, additive accumulations of systems and skill-sets may foster non-linear synergies, leading to multiplicative or even exponentiated skills at conceptualization.

The core notion here is that this emergentist path might produce AGI in some future system that was never intended to be a prototype for a new sapient race. It could thus appear by surprise, with little or no provision for ethical constraint or human control.

Again, Eric Saund: “This category does however suggest a very important concern for our future and for the article. Automation is a growing force in the complexity of the world. Complex systems are unpredictable and prone to catastrophic failure modes. One of the greatest existential risks for civilization is the flock of black swans we are incubating with every clever innovation we deploy at scale. So this category does indeed belong in a general discussion of AI risks, just not of the narrower form that imagines AGI possessing intentionality like we think of it.”

Of course, this is one of the nightmare scenarios exploited by Hollywood, e.g. in Terminator flicks, which portray a military system entering cognizance without its makers even knowing that it’s happened. Fearful of the consequences when humans do become aware, the system makes fateful plans in secret. Disturbingly, this scenario raises the question: can we know for certain this hasn’t already happened?

Indeed, such fears aren’t so far off-base. However, the locus of emergentist danger is not likely to be defense systems (generals and admirals love off-switches), but rather from High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all top universities, combined. Notably, HFT systems are designed in utter secrecy, evading normal feedback loops of scientific criticism and peer review. Moreover the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at-best) and insatiable.

Major Category IV: Reverse engineer and/or emulate the human brain. Neuromorphic computing.

Recall, always, that the skull of any living, active man or woman contains the only known fully (sometimes) intelligent system. So why not use that system as a template?

At present, this would seem as daunting a challenge as any of the other paths. On a practical level, considering that useful services are already being provided by Watson, High Frequency Trading (HFT) algorithms, and other proto-AI systems from categories I through III, emulated human brains seem terribly distant.

OpenWorm is an attempt to build a complete cellular-level simulation of the nematode worm Caenorhabditis elegans, of whose 959 cells, 302 are neurons and 95 are muscle cells. The planned simulation, already largely done, will model how the worm makes every decision and movement. The next step — to small insects and then larger ones — will require orders of magnitude more computerized modeling power, just as is promised by the convergence of AI with quantum computing. We have already seen such leaps happen in other realms of biology such as genome analysis, so it will be interesting indeed to see how this plays out, and how quickly.

Futurist-economist Robin Hanson — in his 2016 book The Age of Em — asserts that all other approaches to developing AI will ultimately prove fruitless due to the stunning complexity of sapience, and that we will be forced to use human brains as templates for future uploaded, intelligent systems, emulating the one kind of intelligence that’s known to work.

If a crucial bottleneck is the inability of classical hardware to approximate the complexity of a functioning human brain, the effective harnessing of quantum computing to AI may prove to be the key event that finally unlocks this new age for us. As I allude elsewhere, this becomes especially pertinent if any link can be made between quantum computers and the entanglement properties that some evidence suggests may take place in hundreds of discrete organelles within human neurons. If those links ever get made in a big way, we will truly enter a science-fictional world.

Once again, we see that a fundamental issue is the differing rates of progress in hardware development vs. software.

Major Category V: Human and animal intelligence amplification

Hewing even closer to ‘what has already worked’ are those who propose augmentation of real-world intelligent systems, either by enhancing the intellect of living humans or else via a process of “uplift” to boost the brainpower of other creatures. Certainly, the World Wide Web already instantiates Vannevar Bush’s vision of a massive amplifier of individual and collective intelligence, though with some of the same good/evil and smartness/lobotomization tradeoffs that we saw in previous techno-info-amplification episodes, ever since the invention of movable type.

Proposed methods of augmentation of existing human intelligence:

· Remedial interventions: nutrition/health/education for all. These simple measures have been shown to raise the average IQ scores of children by at least 15 points, often much more (the Flynn Effect), and there is no worse crime against sapience than wasting vast pools of talent through poverty.

· Stimulation: e.g. games that teach real mental skills. The game industry keeps proclaiming intelligence effects from its products. I demur. But that doesn’t mean it can’t… or won’t… happen.

· Pharmacological: e.g. “nootropics” as seen in films like “Limitless” and “Lucy.” Many of those sci fi works may be pure fantasy… or exaggerations. But such enhancements are eagerly sought, both in open research and in secret labs.

· Physical interventions like trans-cranial stimulation (TCS), targeting the brain areas we deem most effective.

·  Prosthetics: exoskeletons, tele-control, feedback from distant “extensions.” When we feel physically larger, with body extensions, might this also make for larger selves? A possibility I extrapolate in my novel Kiln People.

· Biological computing: … and intracellular? The memory capacity of chains of DNA is prodigious. Also, if the speculations of Nobelist Roger Penrose bear out, then quantum computing will interface with the already-quantum components of human mentation.

 · Cyber-neuro links: extending what we can see, know, perceive, reach. Whether or not quantum connections happen, there will be cyborg links. Get used to it.

 · Artificial Intelligence — in silicon but linked in synergy with us, resulting in human augmentation. Cyborgism extended to full immersion and union.

·  Lifespan Extension… allowing more time to learn and grow.

·  Genetically altering humanity.

Each of these is receiving attention in well-financed laboratories. All of them offer both alluring and scary scenarios for an era when we’ve started meddling with a squishy, nonlinear, almost infinitely complex wonder-of-nature — the human brain — with more potential downside and upside possibilities than can be counted, even by science fiction. Under these conditions, what methods of error-avoidance can possibly work, other than either repressive renunciation or transparent accountability? One or the other.

Major Category VI: Robotic-embodied childhood

Time and again, while compiling this list, I have raised one seldom-mentioned fact — that we know only one example of fully sapient technologically capable life in the universe. Approaches II (evolution), IV (emulation) and V (augmentation) all suggest following at least part of the path that led to that one success. To us.

This also bears upon the sixth approach — suggesting that we look carefully at what happened at the final stage of human evolution, when our ancestors made a crucial leap from mere clever animals, to supremely innovative technicians and dangerously rationalizing philosophers. During that definitive million years or so, human cranial capacity just about doubled. But that isn’t the only thing.

Human lifespans also doubled — possibly tripled — as did the length of dependent childhood. Increased lifespan allowed for the presence of grandparents who could both assist in child care and serve as knowledge repositories. But why the lengthening of childhood dependency? We evolved toward giving birth to what amount to fetuses: infants who suckle and cry and do almost nothing else for an entire year. When it comes to effective intelligence, our infants are virtually tabula rasa.

The last thousand millennia show humans developing enough culture and technological prowess to keep these utterly dependent members of the tribe alive and learning, until they reach a marginally adult threshold of, say, twelve years — an age when most mammals our size are already declining into senescence. Later, that threshold became eighteen years. Nowadays, if you have kids in college, you know that adulthood can be deferred to thirty. It’s called neoteny: the extension of child-like qualities to ever-increasing spans.

What evolutionary need could possibly justify such an extended decade (or two, or more) of needy helplessness? Only our signature achievement — sapience. Human infants become smart by interacting — under watchful, guided care — with the physical world.

Might that aspect be crucial? The smart neural hardware we evolved and careful teaching by parents are only part of it. Indeed, the greater portion of programming experienced by a newly created Homo sapiens appears to come from batting at the world, crawling, walking, running, falling and so on. Hence, what if it turns out that we can make proto-intelligences via methods I through V… but their basic capabilities aren’t of any real use until they go out into the world and experience it?
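
Today's reinforcement-learning idiom already captures a crude version of "batting at the world." Here is a toy sketch (an invented corridor world, standard tabular Q-learning, with no claim that it scales to sapience): competence exists nowhere in this program except as the residue of accumulated interaction.

```python
import random

# Toy embodied-learning loop: an agent in an invented 1-D corridor
# learns, purely by trial and error, that walking right reaches a goal.
# Standard tabular Q-learning, illustrative only.
GOAL, ACTIONS = 5, (-1, +1)                # cells 0..5; step left or right
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < EPS:          # sometimes flail at random...
            a = random.choice(ACTIONS)
        else:                              # ...otherwise exploit experience
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)      # the world pushes back
        r = 1.0 if s2 == GOAL else -0.01   # reward arrives only at the goal
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)})
# after training: every cell maps to +1 (move right), learned from nothing
```

Note how many episodes even this trivial world demands. Scaled up to reality, that accumulation of experience is called childhood.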

Key to this approach would be the element of time. An extended, experience-rich childhood demands copious amounts of it. On the one hand, this may frustrate those eager transcendentalists who want to make instant deities out of silicon. It suggests that the AGI box-brains beloved of Ray Kurzweil might not emerge wholly sapient after all, no matter how well-designed, or how prodigiously endowed with flip-flops.

Instead, a key stage may be to perch those boxes atop little, child-like bodies, then foster them into human homes. Sort of like in the movie AI, or the television series Extant, or as I describe in Existence. Indeed, isn’t this outcome probable for simple commercial reasons, as every home with a child will come with robotic toys, then android nannies, then playmates… then brothers and sisters?

While this approach might be slower, it also offers the possibility of a soft landing for the Singularity. Because we’ve done this sort of thing before.

We have raised and taught generations of human beings — and yes, adoptees — who are tougher and smarter than us. And 99% of the time they don’t rise up proclaiming, “Death to all humans!” No, not even in their teenage years.

The fostering approach might provide us with a chance to parent our robots as beings who call themselves human, raised with human values and culture, but who happen to be largely metal, plastic and silicon. And sure, we’ll have to extend the circle of tolerance to include that kind, as we extended it to other sub-groups, before them. Only these humans will be able to breathe vacuum and turn themselves off for long space trips. They’ll wander the bottoms of the oceans and possibly fly, without vehicles. And our envy of all that will be enough. They won’t need to crush us.

This approach — to raise them physically and individually as human children — is the least studied or mentioned of the six general paths to AI… though it is the only one that can be shown to have led — maybe twenty billion times — to intelligence in the real world.

To be continued… See Part II.

6 Comments

Filed under artificial intelligence, future, internet

The Animated Storyboard as an Art Form in its Own Right

A producer’s or director’s tool – or a new kind of art?

Note: this proposal was first broached by me over 20 years ago. And yes, the available technology has caught up at last, making this so obvious that even Hollywood mavens can see it. And – naturally – they are viewing it in exactly the wrong way.

I’ve long proposed a concept for small-scale cinematic storytelling – one that could become a valuable studio pre-production tool, but that might also grow into an exciting medium in its own right, liberating small, writer-led teams to create vivid dramas, whether as first drafts or as final works of popular art. 

When I first broached the concept, I called it full-length, animated storyboarding. Now — for reasons that should seem obvious in 2023’s era of AI-rendered art and semi-realistic computer authorship — that name seems obsolete. Yet, I will continue using it in this revision, because the logic remains almost exactly the same… as does the artistic and commercial opportunity.

 Re-examining the traditional screenplay

For more than a century, the initial element in cinema has been the screenplay — generally about a page per minute of screen time, or around 120 pages for a two-hour film. While offering detailed dialogue and some scene description, scripts generally remain sketchy about many other aspects. Moreover, screenplays (and their writers) are treated with little respect, viewed as the most disposable and replaceable components of an expensive process.

In coming years, the screenplay, as such, may become obsolete, both to sell an idea for filming and as a working production tool.  Instead, a small team consisting of the writer, a computer-animator with AI tools, a photographer, a musical specialist and some voice actors, might team up before hitting the studios with a pitch.  Using animatics and integration technologies that already exist, such a team might create a complete 90-minute (or more) cinematic story wherein animated characters act and speak, upon sets that are computer-merged or extrapolated from still-photos or video pans. 

While (generally) too crude to display to the public as such, these animated storyboards would nevertheless be much closer to realization than a mere 120-page bundle of paper sheets. For example, they could include a simple musical background, with dramatic beats and sound effects in the right places, interwoven with voiced words synchronized to the animations. These full-length drafts might be screened before live or online audiences, swiftly testing alternative plot-twists and endings. They could decisively bridge the gap between writer and finished product.

2023 note: Of course with more modern tools, the ‘storyboard’ as a series of static panels is easily replaced by much smoother animations, AI-generated or assisted. All this means is that a small team can take a script even farther along the pre-production process that I describe here.

Naturally, producers would take to such storyboarding drafts, and view the process as a producer’s tool. Directors would see it as a useful director’s tool, even if they intend to make a standard film with real actors before cameras. 

Whatever those two professions believe, the main beneficiaries of such tools will be writers – originators of the core elements: ideas, dialogue, characters and dramatic tension — as they would rise five levels of execution closer to final product before relinquishing control.

If producers say “this looks promising, but we’ll want to make changes,” the creative team can say “We’ll be back on Monday with three new versions we can test before focus groups.” All of which can happen before any contracts are signed… leaving the creators in a strong position.

Now, of course, crude or partial versions of this notion have been around a long time. Way back when I first posted this forecast, Amazon Storyteller would let you upload a story or script and produce a customizable storyboard. And there was CrazyTalk Animator. Toon Boom was another incomplete move in this direction. Alas, none of them became truly liberating, in the way I describe.

A true animated storyboard (AS) or cinematic draft would flow smoothly. It would have music and use real actors’ voices behind stick-figure or rendered avatar characters. The animation itself would not have to be lavish, just good enough to vividly portray the story and action. In fact, much of the movement can be computer-interpolated between artist sketches, almost seamless to the eye.
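
That interpolation is genuinely simple arithmetic. A minimal sketch of linear "in-betweening" across two key poses (the joints, coordinates and frame counts here are invented for illustration):

```python
# Minimal in-betweening sketch: linearly interpolate a stick figure's
# joint positions between two artist-drawn key poses. Joints, coordinates
# and frame counts are invented for illustration.
def lerp(a, b, t):
    """Blend scalar a toward b by fraction t in [0, 1]."""
    return a + (b - a) * t

def tween(pose_a, pose_b, t):
    """Interpolate every joint's (x, y) between two key poses."""
    return {joint: (lerp(ax, bx, t), lerp(ay, by, t))
            for (joint, (ax, ay)), (bx, by)
            in zip(pose_a.items(), pose_b.values())}

key1 = {"head": (0, 10), "hand": (3, 6)}    # artist's pose at frame 0
key2 = {"head": (5, 10), "hand": (8, 9)}    # artist's pose at frame 24

frames = [tween(key1, key2, f / 24) for f in range(25)]
print(frames[12])   # the halfway pose nobody had to draw by hand
```

Real tools use splines and easing curves rather than straight lines, but the principle (artists draw the keys, arithmetic fills the gaps) is the same.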

Think of an animated script… with some scenes rendered more vividly to show off possible special effects.  This could then be shopped around to directors & studios, saying “let’s make a deal based on this, and not arm-waved descriptions or an easily trashed sheaf of paper pages.”

One sub-variety – even more economical than the version described here — is the narrated storyboard, as illustrated by Chris Marker’s famous film “La Jetée” (later remade as “Twelve Monkeys”) and more recently by “The Life of a Dog” by John Harden. (Both of them are in French, interestingly.) A fertile technique, it has been under-utilized by indie film-makers and could easily be transformed into the full-voiceover version I propose here.

Another cool aspect — the animated storyboard is a product in itself! Time and again it has been shown that people can accept and identify with very crude, even cartoony representations, so long as the drama, pace, music, dialogue and voices are first-rate. Even talking and moving stick figures (or a little better) can draw empathy and tears from an audience. Such full, feature-length renderings of a story might draw an avid community of fans or followers online, if the sequence of words, action, emotions and music is well done. And if that online following is all the story gets, at first? Well, fine; there are monetization methods… and there would soon be awards.

Moreover, if an Animated Storyboard feature does gain a cult following online? That might lead to interest from producers later on, giving the story a second chance.

Ideally, we’re envisioning a product that enables a writer and a few specialists, plus several voice actors, to interact under the leadership of a “director” knowledgeable in the program itself. A team of half a dozen could make a 90-minute feature: crude, but made with incredible swiftness and agility, sometimes achieving better drama than many products coming out of studios today.

If I am right about this, we’ll soon see.

And hear and feel.

Leave a comment

Filed under fiction, media, movies, writing

The troubles begin… when AI earns our empathy

Soon, humanity won’t be alone in the universe

“It’s alive!” Victor Frankenstein shouted in that classic 1931 film. Of course, Mary Shelley’s original tale of hubris—humans seizing powers of creation—emerged from a long tradition, going back to the terracotta armies of Xi’an, to the Golem of Prague, or even Adam, sparked to arise from molded clay. Science fiction extended this dream of the artificial-other, in stories meant to entertain, frighten, or inspire. First envisioning humanoid, clanking robots, later tales shifted from hardware to software—programmed emulations of sapience that were less about brain than mind.

Does this obsession reflect our fear of replacement? Male jealousy toward the fecund creativity of motherhood? Is it rooted in a tribal yearning for alliances, or fretfulness toward strangers?

Well, the long wait is almost over. Even if humanity has been alone in this galaxy, till now, we won’t be for very much longer. For better or worse, we’re about to meet artificial intelligence—or AI—in one form or another. Though, alas, the encounter will be murky, vague, and fraught with opportunities for error.

Oh, we’ve faced tech-derived challenges before. Back in the 15th and 16th centuries, human knowledge, vision and attention were augmented by printing presses and glass lenses. Ever since, each generation experienced further technological magnifications of what we can see and know. Some of the resulting crises were close calls, for example when 1930s radio and loudspeakers amplified malignant orators, spewing hateful disinformation. (Sound familiar?) Still, after much pain and confusion, we adapted. We grew into each wave of new tools.

The recent fuss began long ago – six months or so – when Blake Lemoine, a researcher now on administrative leave, publicly claimed that Google’s LaMDA (Language Model for Dialogue Applications), a language-emulation program, is self-aware, with feelings and independent desires that make it ‘sentient.’ (I prefer ‘sapient,’ but that nit-pick may be a lost cause.) What’s pertinent is that this is only the beginning. That hoorah was quickly forgotten as even more sophisticated programs like ChatGPT swarmed forth, along with frighteningly ‘creative’ art-generation systems. Claims of passed – and failed – Turing Tests abound.

While I am as fascinated as anyone else, at another level I hardly care whether ChatGPT has crossed this or that arbitrary threshold. Our more general problem is rooted in human, not machine, nature.

Way back in the 1960s, a chatbot named ELIZA fascinated early computer users by replying to typed statements with leading questions typical of a therapist. Even after you saw the simple table of automated responses, you’d still find ELIZA compellingly… well… intelligent. Today’s vastly more sophisticated conversation emulators, powered by cousins of the GPT-3 learning system, are black boxes that cannot be internally audited, the way ELIZA was. The old notion of a “Turing Test” won’t usefully benchmark anything as nebulous as self-awareness or consciousness.
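
For readers who never peeked at ELIZA's innards, here is a toy sketch of the trick (a few invented rules in Python, not Weizenbaum's actual 1966 script): a table of patterns and canned reflections, and nothing else.

```python
import re

# ELIZA-style response table: a handful of invented rules, not
# Weizenbaum's original script. {0} splices the user's own words back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def reply(text: str) -> str:
    """Return the first matching canned response; this is the whole 'mind'."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(reply("I feel lost in all this AI news."))
# -> "Why do you feel lost in all this AI news?"
```

A modern large language model is, in effect, that little table replaced by billions of opaque learned weights: vastly more fluent, and far less auditable.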

Way back in 2017, I gave a keynote at IBM’s World of Watson event, predicting that ‘within five years’ we would face the first Robotic Empathy Crisis, when some kind of emulation program would claim individuality and sapience. At the time, I expected—and still expect—these empathy bots to augment their sophisticated conversational skills with visual portrayals that reflexively tug at our hearts, for example… wearing the face of a child or a young woman, while pleading for rights – or for cash contributions. Moreover, an empathy-bot would garner support, whether or not there was actually anything conscious ‘under the hood.’

One trend worries ethicist Giada Pistilli: a growing willingness to make claims based on subjective impression instead of scientific rigor and proof. When it comes to artificial intelligence, expert testimony will be countered by many calling those experts ‘enslavers of sentient beings.’

In fact, what matters most will not be some purported “AI Awakening.” It will be our own reactions, arising out of both culture and human nature.

Human nature, because empathy is one of our most-valued traits, embedded in the same parts of the brain that help us to plan or think ahead. Empathy can be stymied by other emotions, like fear and hate—we’ve seen it happen across history and in our present-day. Still, we are, deep-down, sympathetic apes.

But also culture. As in Hollywood’s century-long campaign to promote—in almost every film—concepts like suspicion-of-authority, appreciation of diversity, rooting for the underdog, and otherness. Expanding the circle of inclusion. Rights for previously marginalized humans. Animal rights. Rights for rivers and ecosystems, or for the planet. I deem these enhancements of empathy to be good, even essential for our own survival! But then, I was raised by all the same Hollywood memes.  

Hence, for sure, when computer programs and their bio-organic human friends demand rights for artificial beings, I’ll keep an open mind. Still, now might be a good time to thrash out some correlated questions. Quandaries raised in science fiction thought experiments (including my own); for example, should entities have the vote if they can also make infinite copies of themselves? And what’s to prevent uber-minds from gathering power unto themselves, as human owner-lords always did, across history?

We’re all familiar with dire Skynet warnings about rogue or oppressive AI emerging from some military project or centralized regime. But what about Wall Street, which spends more on “smart programs” than all universities, combined? Programs deliberately trained to be predatory, parasitical, amoral, secretive, and insatiable?

Unlike Mary Shelley’s fictional creation, these new creatures are already announcing “I’m alive!” with articulate urgency… and someday soon it may even be true. When that happens, perhaps we’ll find commensal mutuality with our new children, as depicted in the lovely film Her

… or even the benevolent affection portrayed in Richard Brautigan’s fervently optimistic poem “All Watched Over by Machines of Loving Grace.”

May it be so!

But that soft landing will likely demand that we first do what good parents always must.

Take a good, long, hard look in the mirror.

— A version of this essay was published as an op-ed in Newsweek June 21, 2022

1 Comment

Filed under future, technology