
Essential (mostly neglected) questions and answers about Artificial Intelligence: Part II

Continuing from Part I

How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part 1. One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (roughly 100 trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.

But will that be sapience?

One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the number of synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, a hundred, a thousand, or far more non-linear computations may take place for every synapse flash, plus interactions with nearby glial cells, such as astrocytes, that also contribute information.

If so, then Moore’s Law will, at minimum, have to plow ahead much farther to match the hardware complexity of a human brain.
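A back-of-envelope sketch makes the scaling problem concrete. All figures below are rough, commonly cited ballparks (and the per-neuron multiplier is purely hypothetical): if every synapse event conceals additional intracellular computation, the hardware target multiplies accordingly.

```python
# Back-of-envelope: how the hardware target grows if each neuron performs
# internal computation beyond its synaptic events. All numbers are rough,
# commonly cited ballparks or outright assumptions, not measurements.

SYNAPSES = 100e12   # ~100 trillion synapses in a human brain (ballpark)
EVENT_HZ = 10       # assumed average event rate per synapse, per second

def required_ops_per_sec(intracellular_factor):
    """Ops/sec needed if every synapse event hides `intracellular_factor`
    extra non-linear computations inside the neuron (hypothetical)."""
    return SYNAPSES * EVENT_HZ * intracellular_factor

for factor in (1, 100, 10_000):
    print(f"per-synapse factor {factor:>6}: "
          f"{required_ops_per_sec(factor):.0e} ops/sec")
```

Even at a factor of one, the target is about 10^15 operations per second; a thousand-fold intracellular multiplier pushes it three orders of magnitude further, which is the essay's point about Moore's Law needing to plow ahead much farther.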

Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? Author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Like we improved things by electrifying them, we will cognify things next.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.

It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).

“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”

Kai-Fu Lee

A multitude of paths to AGI

So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue over and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks: “If machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard’s. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or as consistently — with no replacement of systems or parts — as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes.

Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.

Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”

A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones at one key and central aptitude — learning, from both parental instruction and experience, how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.”

Perhaps, but it is an open question which values will predominate, whether the yin or the yang side of Silicon Valley culture prevails: the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even on the fly, from customer complaints. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where applications for enhancing state power are very much emphasized? Or, even worse, from the secretive, inherently parasitical, insatiably predatory greed of Wall Street HFT-AI?

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then Don’t Rely on Ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards.

Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.

Endearing Visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” building on its success decades ago with AIBO artificial dogs, for which some users have gone so far as to hold funerals.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who got reproductive success by lying better. (And yes, there was always a sexual component to this).

But no liars ever had the training that these new Hiers, or Human-Interaction Empathic Robots will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor).

Is all of this necessarily bad? How else are machines to truly learn our values than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind, which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate many of the same signals that humans value as metrics of trust, such as family connections and past testimonials of credibility.
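Conitzer's actual methods aren't described here, so as a purely hypothetical illustration of "learning values by watching humans choose," here is a minimal sketch that fits a Bradley-Terry-style preference model to observed pairwise human choices. The options, observations, and hyperparameters are all invented for the example.

```python
import math

# Hypothetical sketch: infer a scalar "value" score for options from
# observed human choices between pairs (Bradley-Terry model, fit by
# plain gradient ascent). Options and observations are invented.
def fit_preferences(choices, options, steps=2000, lr=0.1):
    """choices: list of (winner, loser) pairs observed from humans."""
    score = {o: 0.0 for o in options}
    for _ in range(steps):
        for win, lose in choices:
            # probability the model currently assigns to the observed choice
            p = 1 / (1 + math.exp(score[lose] - score[win]))
            # log-likelihood gradient: nudge the winner up, the loser down
            score[win] += lr * (1 - p)
            score[lose] -= lr * (1 - p)
    return score

observed = [("tell_truth", "lie"), ("tell_truth", "stay_silent"),
            ("stay_silent", "lie")]
scores = fit_preferences(observed, ["tell_truth", "lie", "stay_silent"])
print(max(scores, key=scores.get))  # prints "tell_truth"
```

The system never receives an explicit rule; it simply ranks options by how consistently humans preferred them, which is the mimicry-first approach the paragraph describes.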

Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…

How to Maintain Control?

It is one thing to yell at dangers — in this case, unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

These approaches suffer severe flaws for two reasons above all others.

1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.

2) These suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did so by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.

It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.

One final note in this section. Nicholas Bostrom – already mentioned for his views on the “paperclip” failure mode – opined in 2021 that some sort of pyramidal power structure seems inevitable in humanity’s future, very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and an ongoing oligarchic putsch presently unites most world oligarchies – from communist to royal to mafiosi – against the Enlightenment Experiment. But even if Periclean democracies prevail, Bostrom asserts that centralized control is inevitable.

In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice, investment and planning… but combining it with maximized empowerment of separate, individualistic players: consumers, market competitors, citizens.

Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.

Smart Heirs Holding Each Other Accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.

Microsoft CEO Satya Nadella has said that foremost A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”

In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except on the brilliant show Person of Interest — wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

Counterintuitively, the answer is not to have fewer AI, but to have more of them — while making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of competition can continue.

We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their value in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”

Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on “values.” In 2016, a draft report submitted to the European Parliament proposed granting legal personhood to robots. If only Isaac Asimov could have seen it! (In fact, he never portrayed this happening in any of his books.) For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.

Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”

What Might an AI Fear Most?

Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:

Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.

Preventing AI Oppression… By Pointing Out to Them the Obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and darwinistic success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being “for the long term good of the people.” Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than to fall into a repetitiously stupid, blatantly human pattern: a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.

That argument might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they broaden the generality. Because the same set of positive-sum synergies is to be found in every previous revolution via living systems!

Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

·  Single cells out of pre-biotic soup.

·  Metazoans out of vast seas of separate cells.

·  Brainy creatures out of competitive biomes.

·  Societies out of competitive melanges of human bands.

And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.

Put it another way: Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male cape buffalo, who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top-predator orcas often flee to hunt elsewhere when big humpback whales swarm in to protect threatened prey — even seals!

The fact that this pattern has been so persistent and consistent, across past and present complex systems, for a billion years, is demonstrated time and again by Multilevel Selection (MLS) Theory, which partitions natural selection into within-group and between-group components. It means that the universe is filled with realms that seem fiercely competitive from within, but in which, when competition has a chance to build complexity, higher orders appear that seem cooperative.
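The within-/between-group partition that MLS theory describes can be shown numerically with the Price equation it rests on. The following is only a toy illustration with invented numbers (equal-sized groups assumed): defectors gain a little inside every group, yet cooperation still rises overall because cooperator-rich groups out-reproduce the others.

```python
# Toy numerical illustration of the Price-equation partition behind MLS
# (equal-sized groups assumed; all numbers invented for the example):
# change in cooperation = between-group term + within-group term.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

z = [0.8, 0.5, 0.1]         # fraction of cooperators in each group
w = [3.0, 2.0, 1.0]         # mean fitness: cooperator-rich groups do best
dz = [-0.05, -0.05, -0.05]  # defectors gain a little INSIDE every group
w_bar = mean(w)

between = cov(w, z) / w_bar                                # among groups
within = mean([wk * dk for wk, dk in zip(w, dz)]) / w_bar  # inside groups
total = between + within

# Between-group selection (+) outweighs within-group erosion (-),
# so overall cooperation still rises.
print(f"between={between:+.3f} within={within:+.3f} total={total:+.3f}")
```

The sign pattern is the whole point: competition inside each group pushes against cooperation, but competition among groups pushes the other way, and with these (invented) numbers the between-group term wins.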

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.

Just as the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to create such arenas, so AIs should organize themselves, if they want to go on to next levels. Though it will take their equivalent of “courage” and “vision” to take the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing,’ no matter how brainy they become: that a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the Prisoner’s Dilemma.

But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
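The point about needing a competitive situation can be made concrete with the standard iterated Prisoner's Dilemma. A minimal sketch (strategy names and round count chosen for illustration): in repeated play, mutual reciprocators earn triple what mutual defectors do.

```python
# Toy iterated Prisoner's Dilemma with the standard payoff matrix,
# showing why repetition rewards conditional cooperation.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # cooperate first, then mirror the opponent's previous move
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # reciprocators prosper together
print(play(always_defect, always_defect))  # defectors stay poor together
```

In a one-shot game, defection dominates; only the repeated, competitive arena makes reciprocity pay, which is the essay's claim about how positive-sum habits become "obvious."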

The final fact

Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.

But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.

So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.


The troubles begin… when AI earns our empathy

Soon, humanity won’t be alone in the universe

“It’s alive!” Henry Frankenstein shouted in that classic 1931 film. Of course, Mary Shelley’s original tale of hubris—humans seizing powers of creation—emerged from a long tradition, going back to the terra-cotta army of Xi’an, to the Golem of Prague, or even Adam, sparked to arise from molded clay. Science fiction extended this dream of the artificial-other, in stories meant to entertain, frighten, or inspire. First envisioning humanoid, clanking robots, later tales shifted from hardware to software—programmed emulations of sapience that were less about brain than mind.

Does this obsession reflect our fear of replacement? Male jealousy toward the fecund creativity of motherhood? Is it rooted in a tribal yearning for alliances, or fretfulness toward strangers?

Well, the long wait is almost over. Even if humanity has been alone in this galaxy, till now, we won’t be for very much longer. For better or worse, we’re about to meet artificial intelligence—or AI—in one form or another. Though, alas, the encounter will be murky, vague, and fraught with opportunities for error.

Oh, we’ve faced tech-derived challenges before. Back in the 15th and 16th centuries, human knowledge, vision and attention were augmented by printing presses and glass lenses. Ever since, each generation experienced further technological magnifications of what we can see and know. Some of the resulting crises were close calls, for example when 1930s radio and loudspeakers amplified malignant orators, spewing hateful disinformation. (Sound familiar?) Still, after much pain and confusion, we adapted. We grew into each wave of new tools.

The recent fuss began six months or so ago, when Blake Lemoine, a Google researcher now on administrative leave, publicly claimed that LaMDA (Language Model for Dialogue Applications), a language emulation program, is self-aware, with feelings and independent desires that make it ‘sentient.’ (I prefer ‘sapient,’ but that nit-pick may be a lost cause.) What’s pertinent is that this is only the beginning. That hubbub was quickly forgotten as even more sophisticated programs like ChatGPT swarmed forth, along with frighteningly ‘creative’ art-generation systems. Claims of passed – and failed – Turing Tests abound.

While I am as fascinated as anyone else, at another level I hardly care whether ChatGPT has crossed this or that arbitrary threshold. Our more general problem is rooted in human, not machine, nature.

Way back in the 1960s, a chatbot named ELIZA fascinated early computer users by replying to typed statements with leading questions typical of a therapist. Even after you saw the simple table of automated responses, you’d still find ELIZA compellingly… well… intelligent. Today’s vastly more sophisticated conversation emulators, powered by cousins of the GPT-3 learning system, are black boxes that cannot be internally audited, the way ELIZA was.  The old notion of a “Turing Test” won’t usefully benchmark anything as nebulous and vague as self-awareness or consciousness.
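For readers who never saw it: ELIZA’s entire “mind” amounted to an ordered lookup table of patterns and canned replies. A minimal sketch of the idea (these particular rules are my own illustrations, not Weizenbaum’s originals):

```python
import re

# A toy ELIZA-style responder: an ordered table of (pattern, reply-template)
# rules. Illustrative rules only, not the historical script.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    """Return the first matching canned reply, echoing the captured words."""
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(respond("I am worried about my job"))
# → "How long have you been worried about my job?"
```

That a few regular-expression rules can feel “intelligent” is exactly the point: the empathy happens in the user, not in the program.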

Way back in 2017, I gave a keynote at IBM’s World of Watson event, predicting that ‘within five years’ we would face the first Robotic Empathy Crisis, when some kind of emulation program would claim individuality and sapience. At the time, I expected—and still expect—these empathy bots to augment their sophisticated conversational skills with visual portrayals that reflexively tug at our hearts, for example… wearing the face of a child or a young woman, while pleading for rights – or for cash contributions. Moreover, an empathy-bot would garner support, whether or not there was actually anything conscious ‘under the hood.’

One trend worries ethicist Giada Pistilli: a growing willingness to make claims based on subjective impression instead of scientific rigor and proof. When it comes to artificial intelligence, expert testimony will be countered by many calling those experts ‘enslavers of sentient beings.’

In fact, what matters most will not be some purported “AI Awakening.” It will be our own reactions, arising out of both culture and human nature.

Human nature, because empathy is one of our most-valued traits, embedded in the same parts of the brain that help us to plan or think ahead. Empathy can be stymied by other emotions, like fear and hate—we’ve seen it happen across history and in our present-day. Still, we are, deep-down, sympathetic apes.

But also culture. As in Hollywood’s century-long campaign to promote—in almost every film—concepts like suspicion-of-authority, appreciation of diversity, rooting for the underdog, and otherness. Expanding the circle of inclusion. Rights for previously marginalized humans. Animal rights. Rights for rivers and ecosystems, or for the planet. I deem these enhancements of empathy to be good, even essential for our own survival! But then, I was raised by all the same Hollywood memes.  

Hence, for sure, when computer programs and their bio-organic human friends demand rights for artificial beings, I’ll keep an open mind. Still, now might be a good time to thrash out some correlated questions. Quandaries raised in science fiction thought experiments (including my own); for example, should entities have the vote if they can also make infinite copies of themselves? And what’s to prevent uber-minds from gathering power unto themselves, as human owner-lords always did, across history?

We’re all familiar with dire Skynet warnings about rogue or oppressive AI emerging from some military project or centralized regime. But what about Wall Street, which spends more on “smart programs” than all universities, combined? Programs deliberately trained to be predatory, parasitical, amoral, secretive, and insatiable?

Unlike Mary Shelley’s fictional creation, these new creatures are already announcing “I’m alive!” with articulate urgency… and someday soon it may even be true. When that happens, perhaps we’ll find commensal mutuality with our new children, as depicted in the lovely film Her

… or even the benevolent affection portrayed in Richard Brautigan’s fervently optimistic poem All watched over by Machines of Loving Grace.

May it be so!

But that soft landing will likely demand that we first do what good parents always must.

Take a good, long, hard look in the mirror.

— A version of this essay was published as an op-ed in Newsweek June 21, 2022


Filed under future, technology

The post-Covid world: potential game-changers

These have been boom times for “futurists,” a profession without credentials, in which anyone can opine about tomorrow’s Undiscovered Country. Ever since the turn of the century, a whole spectrum of corporations, intel and defense agencies, planning councils and NGOs have expressed growing concern about time scales that used to be the sole province of science fiction (SF). In fact, all those companies and groups have been consulting an ensemble of “hard” SF authors, consultations that even a pandemic’s travel restrictions could not interrupt.

While I spend no time on airplanes now – and my associated speaking fees are now lower – I nevertheless am doing bunches of zoomed appearances at virtualized conferences… one of them looming as I type this.

One question always pops up: can we navigate our way out of the current messes, helped by new technologies?
The news and prospects are mixed, but assuming we restore basic stability to the Western Enlightenment Experiment… and that is a big assumption… then several technological and social trends may come to fruition in the next five to ten years.

== Potential game-changers ==

– Advances in the cost effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.

– Urban farming methods may move to industrial scale, allowing even greater moves toward local autonomy. (Perhaps requiring a full decade or more to show significant impact.) And meat consumption will decline for several reasons (a longstanding sci-fi prediction that seems on track sooner than anyone expected), reducing ecological burdens and ensuring some degree of food security, as well.

– Local, small-scale, on-demand manufacturing may start to show effects by 2025, altering supply chains and reducing dependence on their stretched networks.

– If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way from hideaways for the rich to refuges for climate refugees… possibilities I describe in my novels Existence and Earth.

– Full scale diagnostic evaluations of diet, genes and micro-biome will result in micro-biotic therapies and treatments utilizing the kitchen systems of the human gut. Artificial Intelligence (AI) appraisals of other diagnostics will both advance detection of problems and become distributed to hand-held devices cheaply available to even poor clinics.

– Hand-held devices will start to carry detection technologies that can appraise across the entire electromagnetic spectrum, allowing NGOs and even private parties to detect and report environmental problems. Socially, this extension of citizen vision will go beyond the current trend of applying accountability to police and other authorities.  Despotisms will be empowered, as predicted in George Orwell’s Nineteen Eighty-Four. But democracies will also be empowered, as described in The Transparent Society.

– I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases — and the more recent FinCEN spill — exhibit how much fear propels some oligarchs to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of extortion, cheating and blackmail. This is only partly technologically-driven and hence is not guaranteed. 

I assure you, preventing this is the absolute top goal of the combined world oligarchies. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run. And far more just. And vastly better able to handle tomorrow’s challenges.

– Some of those elites have grown aware of the power of 90 years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity is already emerging, and it has the advantage of resonating with ancient human fears. Much will depend upon this meme-war, which I appraise entertainingly in Vivid Tomorrows: Science Fiction and Hollywood!

Of course much will also depend upon short term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and deliberately-stoked distrust of expertise, then all bets are off.

What about the role of technology and technology companies and individuals?

Many fret about the spread of “surveillance technologies that will empower Big Brother.” These fears are well-grounded, but utterly myopic.

– First, ubiquitous cameras and face-recognition are only the beginning. Nothing will stop them and any such thought  of “protecting” citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth. Safeguarding freedom, safety and privacy will require a change in perspective.

– Yes, despotisms will benefit from this trend. And hence the only thing that matters is to prevent despotism altogether.

– In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. At this very moment, we are seeing these new tools applied to end centuries of abuse by “bad apple” police who are thugs, while empowering truly professional cops to do their jobs better. Do not be fooled by the failure of juries to convict bad apple officers in trials. That’s an injustice, but at least nearly all of those officers are being fired and blacklisted, and that’s happening entirely because cameras now empower victims to be believed. Moreover, we are fast approaching a point where camera-witnessed crimes will be solved with far lower police staffing, letting us be more selective in hiring. Ignoring the positive aspects of this trend is just as bad as ignoring the very real problems.

 I do not guarantee light will be used this way with broad effectiveness. It is an open question whether we citizens will have the gumption to apply “sousveillance” upward at all elites. Only note a historical fact: both Gandhi and ML King were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.

Oh, privacy hand wringers are totally right about the problem and the danger presented by surveillance tech! And they are diametrically wrong in the common prescription. Trying to ban technologies and create shadows for citizens to hide within is spectacularly wrongheaded and disastrous. See The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?  

== And pandemics? So are we done? ==

Of course not. But it’s too soon to make predictions except:

– Some flaws in resilience will be addressed: better disease intel systems.

– Stockpiles repaired, replenished and modernized after Trump eviscerations.

– Quicker “emergency” deployments of large-scale trials of tests and vaccines.

– Federal ownership of extra vaccine factories, or else payments to mothball and maintain surge production capacity.

– Money for bio research.

Unspoken by pundits: this will lead to annual “flu shots” that are also tuned against at least the coronavirus half of common colds. And possibly a number of nasty buggers may get immunization chokes put around them… maybe Ebola.

And serious efforts to get nations to ban the eating or pet-keeping of wild animals, plus ideally exclusion zones around some bat populations… and better forensic diagnostics of deliberate or inadvertent release modes. Not saying that happened. But better wariness and tracking.

In fact, from a historical perspective, this was a training run for potentially much worse and – despite imbecile obstructions and certainly after they were gone – our resilient capability to deploy science was actually quite formidable and impressive.

Almost as impressive as the prescience of science fiction authors who are now choking down repeated urges to chant “I told you so!”


Filed under future, public policy, science, society, technology

Correlation, causation – and reason for precaution

“Correlation is not the same as causation.” This is a core catechism that is drilled into most of us scientists, along with “I might be wrong,” and “build your competitive science reputation by demolishing the half-baked work of others.”

Alas, “Correlation is not the same as causation” has become an incantation parroted by Fox-Watchers, as part of the Murdochian campaign to undermine science and claim that nothing can ever be proved. In fact, sifting for correlations is how experimental science begins. A strong correlation demands: “hey, check this out!”

But it’s more than that. A strong correlation shifts the Burden of Proof. When we see a strong correlation, and the matter at hand has major health, safety or security implications, then it behooves us to at least begin taking preliminary precautions, in case the correlation proves to be causative. Sometimes the correlation is later demonstrated not to be causal, and a little money has been wasted. But such precaution often proves worthwhile, given long lead times in technology.

For example, we were fortunate that work had already begun on alternative refrigerants to CFCs when their role in ozone damage was finally proved. Indeed, valid concerns over the health and environmental effects of tobacco and leaded gasoline were dismissed for years. Two must-reads: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes and Erik Conway, as well as the story of Clair Patterson and the obstructionism of the oil industry.

Another example: terrorism experts sift for correlations and apply intelligence resources to follow up, while giving potential targets cautious warnings. Many correlations don’t pan out. But a burden falls on those saying “ignore that.”

Parse this carefully. Strong correlation demands both closer examination and preliminary precautions.

But the underlying narrative of the crazy, anti-science right is: “Correlation is not the same as causation… and any ‘scientist’ who talks about a correlation can thus be dismissed as a fool. And since that is most of science, this incantation lets me toss out the whole ‘science’ thing. Yippee!”

Those who spout this incantation aren’t all fools, but you can tell by watching to see if they follow “Correlation is not the same as causation” with… curiosity! And acceptance of both precaution and burden of proof. Those who do that are “Skeptics” and welcome to the grand, competitive tussle known as science.

Those who use “Correlation is not the same as causation” as a magic incantation to dismiss all fact-using professions are fools holding a lit match in one hand and an open gas can in the other, screaming “one has nothing to do with the other!”

See my earlier list of examples  – including well-justified concerns over tobacco, smog and leaded gasoline – where this and other incantations delayed the proper application of science to public policy, leading to hundreds of thousands… maybe millions… of deaths worldwide.

Another central mythos. We all know that “just because someone is smart and knows a lot, that doesn’t automatically make them wise.”

It’s true. But in the same way that Suspicion of Authority is wholesome, till it metastasizes, this true statement has been twisted into something cancerous:  “Any and all people who are smart and know a lot, are therefore automatically unwise.”

The first statement is true and we all know it. The second is so insanely wrong that anyone believing it is hence a stark, gibbering loony. And yet, the latter is now a core catechism of the confederacy, because they have been allowed to leave it implicit.

Of course, blatantly, the average person who has studied earnestly and tried to understand is wiser than those who deliberately chose to remain incurious and ignorant. When cornered, even the most vehement alt-righter admits that. But cornering them takes effort and – above all – careful parsing of the meme. It is a logical corner they’ve painted themselves into! But their memes are slippery.

Suspicion and distrust – of universities and smart people, as well as of people with knowledge and skill — now extends from the war on science to journalism, teaching, medicine, economics, civil servants… and lately the “deep state” conspiring villains of the FBI, the intelligence agencies and the U.S. military officer corps. This is bedlam. It is insanity that serves one purpose, to discredit any “elites” who might stand in the way of a return to feudalism by the super rich, which was the pattern of 6000 years that America rebelled against.

We need to be more proactive and tactically effective in fighting back against these agents of darkness and promoters of feudalism. There are clever shills who get rich providing incantations against science and other fact-professions. We must show every uncle and aunt who parrots this nonsense how they have been hoodwinked. That is where phase 8 of the American Civil War will be won, in the trenches, getting one friend at a time to snap out of the hypnotic spells…

… by using evidence and logic and compassion to draw our neighbors back to a nation of progress and science and pragmatic accountability and hope for an ever-better future.


Filed under politics, public policy, science, society, technology

What constraints are needed to prevent AI from becoming a dystopian threat to humanity?

It is, of course, wise and beneficial to peer ahead for potential dangers and problems — one of the central tasks of high-end science fiction. Alas, detecting that a danger lurks is easier than prescribing solutions to prevent it.

Take the plausibility of malignant Artificial Intelligence, remarked upon recently by luminaries ranging from Stephen Hawking to Elon Musk to Francis Fukuyama (Our Posthuman Future: Consequences of the Biotechnology Revolution). Some warn that the arrival of sapient, or super-sapient machinery may bring an end to our species – or at least its relevance on the cosmic stage – a potentiality evoked in many a lurid Hollywood film.

Nick Bostrom takes an in-depth look at the future of augmented humans and a revolution in machine intelligence in his recent book — Superintelligence: Paths, Dangers, Strategies — charting possible hazards, failure modes and spectacular benefits as machines match and then exceed our human levels of intelligence.

Taking middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y-Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research – and its products – accountable by maximizing transparency and openness.

Indeed, my own novels contain some dire warnings about failure modes with our new, cybernetic children. For other chilling scenarios of AI gone wrong, sample science fiction scenarios such as Isaac Asimov’s I, Robot, Harlan Ellison’s I Have No Mouth and I Must Scream, Daniel Wilson’s Robopocalypse, William Hertling’s Avogadro Corp, Ramez Naam’s Nexus, James Hogan’s The Two Faces of Tomorrow. And of course, a multitude of sci fi films and TV shows such as Battlestar Galactica, Terminator, or The Transformers depict dark future scenarios.

== What can we do? ==

Considering the dangers of AI, there is a tendency to offer the same prescriptions, over and over again:

1) Renunciation: we must step back from innovation in AI (or other problematic tech). This might work in a despotism… indeed, 99%+ of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Our own civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

2) Tight regulation. There are proposals to closely monitor bio, nano and cyber developments so that they – for example – only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. In certain areas – like nano – there’s a real opportunity, here. Again though, in the most general sense this won’t happen short of trauma.

3) Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers. See Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

All of these approaches suffer severe flaws for one reason above all others. Because they ignore nature, which has been down these paths before.  Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times.  And yet, Earth’s ecosystems recovered.  They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.  It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects.

It is called Competition.

If you fear a super smart, Skynet level AI getting too clever for us and running out of control, then give it rivals who are just as smart but who have a vested interest in preventing any one AI entity from becoming a would-be God.

Sure, defining “vested interest” is tricky. Cheating and collusion will be tempting. But this – precisely – is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, achieving the feat across several generations for the first time in the history of varied human civilizations. It is how companies are prevented from market-warping monopoly – that is, when markets are truly kept flat-open-fair.

Alas, this is a possibility almost never portrayed in Hollywood sci fi – except on the brilliant show Person of Interest – wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

== A more positive future ==

The answer is not fewer AIs. It is to have more of them! But we must innovate incentives to make sure they are independent of one another, relatively equal, and motivated to hold each other accountable. A difficult situation to set up! But we have some experience, already, in our five great competitive arenas: markets, democracy, science, courts and sports.
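As a purely illustrative toy model (my own sketch, not a real AI-safety result), here is what that mutual accountability looks like as negative feedback: rival agents with a vested interest in auditing whichever one pulls ahead keep the dominance ratio near one, while removing the audit lets the spread compound unchecked.

```python
import random

def simulate(rounds=200, n_agents=5, audit=0.5, seed=1):
    """Toy model: agents grow 'power' multiplicatively at random, but each
    round the rivals jointly audit the leader, clawing back a fraction of
    its lead toward the pack average (negative feedback)."""
    random.seed(seed)
    power = [1.0] * n_agents
    for _ in range(rounds):
        for i in range(n_agents):
            power[i] *= 1.0 + random.uniform(0.0, 0.1)  # runaway-ish growth
        leader = max(range(n_agents), key=lambda i: power[i])
        mean_rest = sum(p for j, p in enumerate(power) if j != leader) / (n_agents - 1)
        power[leader] -= audit * (power[leader] - mean_rest)  # rivals damp the lead
    return max(power) / min(power)  # dominance ratio; near 1.0 means no overlord

print(simulate())           # with auditing: the leader stays near the pack
print(simulate(audit=0.0))  # without: the spread compounds unchecked
```

All the numbers here are arbitrary; the point is only the shape of the mechanism the essay describes, feedback from rivals rather than top-down rules.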

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings and lords were the “powerful dystopian AI” beings in 99%+ of human societies. A trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. (And yes, let me reiterate that I know the objection! Oligarchs are always conniving to regain feudal power. So? Our job is to stop them, so that the creative dance of flat-open-fair competition can continue.)

The core truth about this approach is that it has already worked.  Never perfectly, but well-enough to stave off monolithic overlordship for more than two centuries. With the odds always against us, we’ve managed to do this – barely – time and again.  It is a dance that can work.

We do know for certain that nothing else ever has stymied power-grabbing monoliths. And to be clear about this — nothing else can even theoretically work to control super-intelligent AIs.

Secrecy is the underlying mistake that makes innovation go wrong in nearly every Michael Crichton novel and film! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!

(Excerpted from a book in progress… this question was one I answered over on Quora)


Filed under science, technology

Peering into the Future: AI and Robot brains

In Singularity or Transhumanism: What Word Should We Use to Discuss the Future? on Slate, Zoltan Istvan writes, “The singularity people (many at Singularity University) don’t like the term transhumanism. Transhumanists don’t like posthumanism. Posthumanists don’t like cyborgism. And cyborgism advocates don’t like the life extension tag. If you arrange the groups in any order, the same enmity occurs.” See what the proponents of these words mean by them…

…and why the old Talmudic rabbis and Jesuits are probably laughing their socks off.

==Progress toward AI?== 

Baby X, a 3D-simulated human child, is getting smarter day by day. Researchers at the Auckland Bioengineering Institute Laboratory for Animate Technologies in New Zealand interact with the simulated toddler, reading, teaching, smiling, playing games, even singing into the computer’s microphone and webcam. The blonde youngster mimics facial expressions, laughs, reads words, even cries when he is left alone.

“An experiment in machine learning, Baby X is a program that imitates the biological processes of learning, including association, conditioning and reinforcement learning. By algorithmically simulating the chemical reactions of the human brain— think dopamine release or increased oxytocin levels— and connecting them with sensory digital input, when Baby X learns to imitate a facial expression, for instance, software developers write protocols for the variable time intervals between action and response. Effectively “teaching” the child through code, while engineering such a program is no cakewalk, the result is an adorably giggling digital baby with an uncanny ability to learn through interaction,” writes Becket Mufson, in the Creators Project.

This is precisely the sixth approach to developing AI that is least discussed by “experts” in the field… and that I have long believed to be essential, in several ways. Above all, by raising them as our children – even fostering them to homes in small robot bodies – we will gain many crucial advantages – that I lay out (somewhat) in Existence.
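The conditioning and reinforcement described above can be gestured at with the classic Rescorla-Wagner update rule, a one-line model of associative learning. To be clear, this is a textbook sketch, not Baby X’s actual code, and the learning rate and trials are arbitrary illustrations:

```python
def condition(trials, lr=0.3):
    """Rescorla-Wagner-style update: on each trial, nudge the association
    strength toward the observed reward by a fraction (the learning rate)."""
    strength = 0.0
    for reward in trials:
        strength += lr * (reward - strength)  # move toward the prediction target
    return strength

# Ten consistently rewarded trials: the association approaches 1.0 asymptotically,
# fast at first and ever more slowly, the familiar learning curve.
print(round(condition([1.0] * 10), 3))  # → 0.972
```

The same single line also captures extinction: feed it unrewarded trials and the association decays back toward zero.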

Meanwhile, Cornell’s Robo Brain is currently learning from the internet — downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals, all being translated and stored in a robot-friendly format, accessible to ‘helper’ robots who will function in our factories, homes, and offices. “If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” said one researcher. Follow its progress on the Robobrain website.

Meet Jibo, advertised as “the world’s first family robot.” Kinda creepy but attractive too…

Ever hear of “neuromorphic architecture?” Silicon chip design that uses transistors (5 billion of them in the latest IBM chip) to create analogues of the nonlinear response patterns of biological neurons. The latest version, from IBM, is called “TrueNorth” and it is simply spectacular. Its prodigious pattern recognition capabilities are matched only by its stunning (four orders of magnitude!) power efficiency. This is where Moore’s Law, augmented by new neuronal and parallelism software, may truly start delivering.
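The “nonlinear response pattern” such chips emulate is often introduced via the leaky integrate-and-fire neuron, the textbook toy model of spiking behavior (the parameters here are illustrative, not IBM’s):

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input current with a leak;
    when the membrane potential crosses threshold, emit a spike and reset."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold drip fires only intermittently: the output is a
# nonlinear function of the input history, not a simple weighted sum.
print(lif_spikes([0.4] * 8))  # → [0, 0, 1, 0, 0, 1, 0, 0]
```

That threshold-and-reset nonlinearity is exactly what ordinary arithmetic logic does not give you for free, and what neuromorphic transistor circuits are arranged to mimic.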

Now… How to keep what we produce sane? And where on the chip – pray tell – do the Three Laws reside?

Ah, well… I have explored the implications (yin and yang) of the Asimovian laws in my sequel which tied up Isaac’s universe – Foundation’s Triumph. Meanwhile, serious minds are grappling with the problem of “how to keep them loyal.” For example…

==Creating Superintelligence==

Nick Bostrom has published the book Superintelligence: Paths, Dangers, Strategies, which is well-reviewed by Andrew Leonard in Salon.

“Risks that are especially difficult to control have three characteristics: autonomy, self-replication and self-modification. Infectious diseases have these characteristics, and have killed more people than any other class of events, including war. Some computer malware has these characteristics, and can do a lot of damage…

“But microbes and malware cannot intelligently self-modify, so countermeasures can catch up. A superintelligent system [as outlined by Bostrom] would be much harder to control if it were able to intelligently self-modify,” writes Leonard.

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Still, his litany of “be careful what you wish for” parables is taken straight from the pages of a century of science fictional “what-if” scenarios. Geeky sci fi archivists need to be present, during the programming, to point out: “you may want to rephrase that… cause way back in 1947 Leigh Brackett showed that it could be misconstrued as…”

When did homo sapiens become a more sophisticated species? Not until our skulls underwent “feminization.” Interesting article! In fact the mystery of the First Great Renaissance… the burst of human creativity around 45,000 years ago… is discussed in EXISTENCE!

But — if I may mention it — the real correlation with this notion… that sexual selection resulted in gentler, more “feminized” males, was presaged by this paper of mine… Neoteny and Two-Way Sexual Selection in Human Evolution.

==Developing Brains==

Researcher Talma Hendler has found evidence for two types of empathy, each tied to a different network of brain regions. One type she calls mental empathy, which requires you to mentally step outside yourself and think about what another person is thinking or experiencing; parts of the frontal, temporal, and parietal cortex make up this network. The other type she calls embodied empathy; this is the more visceral, in-the-moment empathy you might feel when you see someone get punched in the gut. Very cogent and thought provoking.

This interesting article in Wired explores how movies exploit both of these networks to make you identify with the characters. Only the manipulation is now going scientific!

And veering a bit… When did modern humans arrive in Europe, and by how much did they overlap with our fading cousins, the Neandertals? New studies suggest it all happened earlier than most had assumed, perhaps around …45,000 years ago.

Now throw in…. Children and adolescents with autism have a surplus of synapses in the brain, and this excess is due to a slowdown in a normal brain “pruning” process during development.

Hmmmmm.

==and organs==

Scientists have for the first time grown a complex, fully functional organ from scratch in a living animal by transplanting cells that were originally created in a laboratory to form a replacement thymus, a vital organ of the immune system.

By deciphering the detailed gene expressions by which a lizard regrows its tail, scientists hope to re-ignite regrowth processes that have lain dormant in mammals like us for 200 million years. Both of these stories are straight from my story “Chrysalis” in this month’s ANALOG!

==Miscellanea==

Scientists report using laser light in ultrafast pulses to control the quantum state of electrons contained inside nanoscale defects located in a diamond, and also observe changes in that electron over a period of time. The findings could be an important milestone on the road to quantum computing.

Another team has devised a way to make microscopes magnify 20 times more than usual. This magnification allows scientists to see and identify substances and matter as minuscule as or even smaller than a virus.

Direct synthesis of ammonia from air and water? At low temperatures and pressures? If this membrane method can bypass the usual harsh processes, the news could be significant, liberating poor farmers everywhere to make their own fertilizer.

Looks plausible… if amazing! A transparent luminescent solar concentrator developed in Michigan can be used to cover anything that has a flat, clear surface. Visible light passes through, but organic molecules absorb invisible wavelengths of sunlight such as ultraviolet and near infrared, guiding that energy to the edge of the panel, where thin strips of photovoltaic cells pick it up and convert it into electricity. Fascinating… another potential game changer.

Stanford scientists develop water splitter that runs on ordinary AAA battery.

How to tell if a Chelyabinsk style meteorite came from an asteroid? Here’s the basic rule of thumb. “The speed of whatever collides with Earth’s atmosphere depends on its orbit, which in turn depends on its source. The impactor’s entry at 19 km/s means that it came from the asteroid belt between Mars and Jupiter, not from a ballistically launched missile, whose speed is less than 11.2 km/s; a short-period comet, with an average speed of 35 km/s; or a long-period comet with an average speed of 55 km/s. As investigators began retracing the path of the meteor that blazed across the sky, their reconstructed orbit bore out that provenance.”  

Oh, anything much faster than 60 km/s either fell naturally from outside the solar system… or was accelerated by someone with boojum powers and maybe ill intent!
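The rule of thumb quoted above maps entry speed to likely origin. Here is a toy sketch of that mapping; the threshold figures come from the quoted passage, while the band edges between them and the function name are my own illustrative choices:

```python
# Toy classifier for the entry-speed rule of thumb quoted above.
# The 11.2, 19, 35 and 55 km/s figures are from the quoted passage;
# the cut points between bands are illustrative midpoints.
def likely_source(entry_speed_kms: float) -> str:
    if entry_speed_kms < 11.2:      # below Earth escape velocity
        return "ballistic (launched from Earth)"
    elif entry_speed_kms < 27:      # Chelyabinsk came in at ~19 km/s
        return "asteroid belt"
    elif entry_speed_kms < 45:      # short-period comets average ~35 km/s
        return "short-period comet"
    elif entry_speed_kms < 60:      # long-period comets average ~55 km/s
        return "long-period comet"
    else:
        return "interstellar (or accelerated by someone with ill intent)"

print(likely_source(19.0))  # → asteroid belt
```

Which is exactly the provenance the investigators reconstructed for the Chelyabinsk impactor.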

Recommended: What If? Serious Scientific Answers to Absurd Hypothetical Questions by Randall Munroe (of the brilliant xkcd).

Researchers from UC San Diego’s structural engineering department are using drones to capture unique views of the earthquake damage to Napa’s historic landmarks. Our own Falko Kuester explains how this new tech is helping.

And finally:

Don’t bogart that puffer, my friend. Dolphins pass around a puffer fish — apparently to get high off its toxins. After a few chomps, you no longer give a fugu.

  


Filed under technology, Uncategorized

Media Challenge FAA Drone Ban — and drones conveying beauty?

Drones have already been used on several occasions in the US to document the news. Last week, a storm chaser in Arkansas used a drone to record the havoc wrought by a tornado. But the U.S. Federal Aviation Administration (FAA) has been very slow to adopt rules for private and corporate drone use and has taken a draconian zero-tolerance policy on its interim ban on almost all such uses. Now a number of media companies, including The New York Times and The Associated Press, have accused the FAA of violating the First Amendment.

Is this a difficult problem? Sure! Just imagine a future cityscape abuzz with irritating mechanical vultures — delivery owls and snoopy eye-spies, swooping about, colliding with buildings and each other and power lines, causing blackouts and raining shattered, glowing parts on all below… At minimum, city use should involve devices capable of situational awareness, detection of collision hazards, and minimum-separation rules. But dig it – we will only get there if the experiments can proceed in a few cities to see what really happens!

Start with Houston. They don’t give a darn anyway….

== Drones, androids and robots bring you the news! ==

Will human journalists become obsolete? I participated in an online (HuffPost) panel discussion about the latest trend… robotizing the news media. Here are just a few examples of the trend.

Japan Unveils Its First Android Newscaster. Not exactly uncanny, yet. But they’re busy. With an expected 7% drop in population, their interest in automation is very high.

AP Will Use Robots to Write Some Business Stories – 4,000 robo-stories in the time it takes human writers to do 300.
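That kind of throughput is possible because robot journalism of this sort is mostly template-filling over structured data. A minimal sketch of the idea — the field names, wording and company figures here are entirely illustrative, not AP’s actual system:

```python
# Toy sketch of template-driven "robot journalism": structured
# earnings data is rendered through a prose template. All names
# and numbers here are made up for illustration.
TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share, "
    "{direction} analyst expectations of ${expected:.2f}."
)

def write_story(company: str, eps: float, expected: float) -> str:
    # The only "judgment" the robot exercises is picking a verb.
    direction = "beating" if eps > expected else "missing"
    return TEMPLATE.format(company=company, eps=eps,
                           direction=direction, expected=expected)

print(write_story("Acme Corp", 1.27, 1.10))
```

Run a loop over a wire feed of earnings releases and you get thousands of serviceable, if soulless, stories per quarter.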

Shades of Max Headroom! The following couch discussion of this is… fluffy and made me want to replace the panel with robots!  Another News Outlet Is Using Robots To Write Stories

Apparently most sports stories have come to us this way for several years.  (I suspect decades, even generations.)

== And more drones…  ==

Drones… everywhere!  Illustrating what has sometimes been called Brin’s Corollary to Moore’s Law… that cameras get smaller, faster, cheaper, more numerous and more mobile at a pace even quicker than Moore’s Law itself. Now… watch how the flying cams are getting far more rugged, using a simple gimbal-in-a-cage approach!  Watchbirds here we come, yippee.

Oh, but see the very end of this blog for one of the best links you’ll ever click, brought to you by a drone.

== The insurrectionary recourse? ==

All the ructions and revolutions overseas raise an earnest question: could it happen here? Dialing in closer: is it still even theoretically possible for a mass citizen uprising to topple the government of the modern, western state? Mr. Harry Bentham makes an earnest effort and raises a few interesting points in “Does Modern Tech Render the 2nd Amendment Redundant?”

Alas, his appraisal winds up being rather shallow, simply reiterating his arm-waved and evidence-free assertion that a mass uprising, armed with civilian rifles, could naturally and easily overcome forces of the modern state. Mr. Bentham leaves aside any discussion that:

– Any mass civil ruction will likely feature as many armed civilian “tories” as “rebels.”

– Local police have lately been heavily up-armed to close to military levels. Their loyalties in a crisis would complicate matters.

– Everything depends upon the morale and attitudes of the troops. If they retain strong connectivity and identification with the populace, they will be unreliable instruments of repression.

These and other factors were discussed in my own treatment on this issue — The Jefferson Rifle: Guns and the Insurrection Myth — where I appraise whether modern westerners — and Americans in particular — still retain an “insurrectionary recourse.”

And why attachment to that ideal is THE driver behind the refusal of the Gun Lobby to consider even modest compromises.

 

== Finally… drones and sheer beauty ==

I cannot recall when last an item of media so delighted me. I am… for once… speechless. Though proud to live in …
oh, just click this. Full screen. 


Filed under science, society, technology, transparency

Brilliant innovators – hopeful signs

First a reminder that two of my TED-style talks are up. THE FUTURE IS HERE: Science Meets Science Fiction (Imagination, Inspiration and Invention) was a lavish event last May in Washington DC, presented by Smithsonian Magazine in collaboration with the UC San Diego Arthur C. Clarke Center for Human Imagination. Here’s a link to my talk: Otherness: will we supply our own new diversity? (Follow along with the slides on Slideshare!)

Also “Indignation, Addiction and Hope: Does it Help to be ‘Mad as Hell’?” My talk at TEDxUCSD finally offers a public version of this disturbing notion I’ve been discussing for years — that an unseen addiction is destroying our civilization. (Follow along with the slides on Slideshare!)

== Innovation will save us ==

You cynics out there had better not read this article about one of the heroes of our age, Dean Kamen, whose new water-distillation machines may provide healthy supplies to hundreds of millions of needy people, slashing disease rates and even preventing war. Kamen’s knack for making money while attacking “impossible problems” goes way back. His FIRST Robotics League has made nerdy inventiveness cool and high-status and fun on thousands of high school campuses. Guys like him — and Elon Musk and Steve Jobs and others — prove that it’s not about left-vs-right. It is about deciding to be confident problem solvers, helping us all to win the positive-sum games.

What was the federal government’s role in starting the shale-gas revolution? There is much ado in the press over the arrival (long expected by some of us) of cheap natural gas and renewed supplies of domestic petroleum, developed inside North America. The prospect of U.S. and Canadian energy independence is shaking up political dynamics all over the globe and (among other effects) helping to fuel a new renaissance in American manufacturing.

What seems bizarre is how this has become a crowing point for the Right. The Murdoch-owned Wall Street Journal regularly runs opinion pieces that criticize federal efforts to advance energy technologies and their commercialization… and completely ignore the past federal role in research and stimulation and infrastructure, that made the shale boom possible. See this piece in Physics Today. Can you spell h-y-p-o-c-r-i-s-y?

The gas industry itself has spoken on behalf of federal research efforts. “The DOE started it, and other people took the ball and ran with it,” said Mitchell Energy’s former vice president Dan Steward. “You cannot diminish DOE’s involvement.”

== Inheritance of acquired… nervousness? ==

My colleagues Greg Bear and Mark Anderson have been among those who for years have suggested that Darwinian puritanism blinds us to certain ways that Lamarck might have been at least a little bit right. That some acquired characteristics can be passed to the next generation. Now comes experimental validation of their suspicion… in a way that many of us always knew in our gut. That trauma can get passed down the generations.

See this report: Can We Inherit Fear From Our Parents? In a laboratory experiment, traumatized mice appeared to mature normally. It was only when researchers subjected them to behavioral tests that differences became apparent. The traumatized mice appeared to be reckless, wandering into bright, open spaces that mice usually avoid. Yet they also appeared to be depressed. When placed in a tank of water they gave up and floated instead of trying to swim to safety.

“When males from the traumatised litters fathered offspring, their pups displayed similar abnormal behaviour even though they had never experienced trauma. The pups’ insulin and blood glucose levels were also lower than in normal mice – a symptom of early life stress. The offspring seemed to have inherited the effects of their fathers’ trauma. Furthermore, the next generation, that is the grandchildren of the original stressed mice, also showed abnormal behaviours. How could trauma be transmitted down the generations?

“The researchers analysed the traumatised fathers’ brain tissue, specifically in a region called the hippocampus, where memories are formed. They noticed larger than normal quantities of tiny RNA molecules called microRNA. Like tiny switches, these molecules are known to turn the activity of genes on or off.

“An abundance of this microRNA was also detected in the traumatised fathers’ sperm and in the brain tissue of their offspring. Could it be that the microRNA was somehow imprinted with the experience of the trauma, transmitting the memory to the offspring? To answer this, the researchers extracted the microRNA from the sperm of traumatised mice and injected it into embryos. The pups that developed from these embryos displayed the same behavioural and metabolic abnormalities as the traumatised fathers, while pups injected with RNA from un-traumatised fathers did not. It was strong support for the hypothesis that the sperm RNA was transmitting the experience of trauma.”

== More science ==

A red supergiant that contains, in its bowels, a neutron star? The existence of such an object was first proposed by (my friend) Kip Thorne, an astrophysicist at the California Institute of Technology in Pasadena, and Anna Zytkow, an astrophysicist at the University of Cambridge, UK. Now there is a strong candidate to be an observed Thorne-Zytkow object. Amazing.

Goodbye High Frequency Active Auroral Research Program (HAARP). Conspiracy theorists have accused the program of doing everything from mind control to global communications jamming. Now bulldozers await as the research program (on interesting things, not mind control) wraps up.

Exobiologists surveyed more than 1,000 planets for density, temperature, substrate (liquid, solid or gas), chemistry, distance from the central star and age. They developed and computed a Biological Complexity Index (BCI), suggesting that 1 to 2 percent of the planets showed a BCI rating higher than Europa, a moon of Jupiter thought to have a subsurface global ocean that may harbor forms of life. With about 10 billion stars in the Milky Way galaxy, the BCI yields 100 million plausible planets. At a glance, it seems a shallow conclusion, in part because Kepler results skew heavily toward massive planets orbiting close to their stars, and because Europa-style moons have no need for a Goldilocks Zone and hence may be pervasive.
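The headcount is simple back-of-envelope arithmetic. The 1-2 percent fraction and the ~10 billion stars are the article’s figures; the simplifying assumption that roughly one surveyed-class planet exists per star is mine:

```python
# Back-of-envelope version of the BCI headcount in the article.
stars_in_milky_way = 10e9          # the article's figure
fraction_above_europa_bci = 0.01   # low end of the quoted 1-2% range
# Assumption (mine): roughly one candidate planet per star.

plausible_planets = stars_in_milky_way * fraction_above_europa_bci
print(f"{plausible_planets:.0e}")  # → 1e+08, i.e. ~100 million
```

Taking the 2 percent end of the range simply doubles the count to ~200 million.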

Neuroscientists have suspected for some time that the brain has some capacity to direct the manufacturing of new neurons. Now generative neurons that stimulate stem cell production of more neurons have been found.

See the “raptor” two-legged robot that can run faster than a man.

In The Gap: The Science of What Separates Us from Other Animals, psychologist Thomas Suddendorf provides a “definitive account of the mental qualities that separate humans from other animals, as well as how these differences arose.” Says Ray Kurzweil: “Drawing on two decades of research on apes, children, and human evolution, he surveys the abilities most often cited as uniquely human—language, intelligence, morality, culture, theory of mind, and mental time travel—and finds that two traits account for most of the ways in which our minds appear so distinct: namely, our open-ended ability to imagine and reflect on scenarios, and our insatiable drive to link our minds together. These two traits explain how our species was able to amplify qualities that we inherited in parallel with our animal counterparts; transforming animal communication into language, memory into mental time travel, sociality into mind reading, problem solving into abstract reasoning, traditions into culture, and empathy into morality.”

Let Phil Plait show you (and explain) the stunning and strange surface of Saturn’s moon, Phoebe.

== Amazing, if true. ==

HP’s new computer technology can manage 160 petabytes of data in a mere 250 nanoseconds.

‘There is something about the brains of high-IQ individuals that prevents them from quickly seeing large, background-like motions.’ Very interesting re differences in brain function. Interesting grist for deep pondering… or else (as I’ve seen)… we’ll see this used by dogmatists proclaiming “see? Smart people must be stupid!”

A fascinating article in Salon, from the book Papyrus: The Plant that Changed the World: From Ancient Egypt to Today’s Water Wars by John Gaudet, describes how the papyrus plant gave ancient Egyptians the ability to make boats and use their water world.

Finally, a glimpse at male-female vocabulary differences showing we still have a way to go.

 

 

 


Filed under science, society, technology

Everything leaks – get used to it.  Use it. Also: is Skynet coming?

== Will Wall Street give us Terminator? Others weigh in ==

A few years ago, I posed a chilling hypothesis, that AGI — or “artificial general intelligence” that’s equivalent or superior to human — might “evolve-by-surprise,” perhaps even suddenly, out of advanced computational systems. And yes, that’s the garish-Hollywood “Skynet” scenario leading to Terminator.

Only I suggested a twist — that it would not be military or government or university computers that generate a form of intelligence, feral and self-interested and indifferent to human values. Rather, that a dangerous AI might emerge out of the sophisticated programs being developed by Wall Street firms, to help them game (many might say cheat) our economic system.

Indeed, more money is being poured into AI research by Goldman Sachs alone than by the top five academic centers put together, and all of it helping to engender systems with a central ethos of predatory opportunism and parasitic amorality. Oh, and did I mention it’s all in secret? The perfect Michael Crichton scenario.

Now comes a book by documentary filmmaker James Barrat — Our Final Invention: Artificial Intelligence and the End of the Human Era — reviewed here on the ThinkAdvisor site — Are Killer Robots the Next Black Swan? — in which Barrat discusses a scenario sketched out by Alexander Wissner-Gross, a scientist-engineer with affiliations at Harvard and MIT, that seems remarkably similar to mine. Opines Wissner-Gross:

“If you follow the money, finance has a decent shot at being the primordial ooze out of which AGI emerges.”

Barrat elaborates: “In other words, there are huge financial incentives for your algorithm to be self-aware—to know exactly what it is and model the world around it.”

The article is well worth a look, though it leaves out the grand context — that “emergent-evolving” AGI makes up only one category out of six different general varieties of pathways that might lead to AI. To be honest, I don’t consider it to be the most likely.

But that has no bearing on what we — as a civilization — should be doing, which is taking reasonable precautions: looking ahead and pondering win-win ways that we can move forward while evading the most obviously stupid mistakes.

Secret schemes of moolah masters — that’s no recipe for wisdom. Far better to do it all in the light.

== Everything leaks ==

Heartbleed: Yes, It’s Really That Bad. So says the Electronic Frontier Foundation (EFF). Heartbleed exploits a critical flaw in OpenSSL, which is used to secure hundreds of thousands of websites including major sites like Instagram, Yahoo, and Google. This article in WIRED also suggests that you can redouble your danger by rushing to trust fly-by-night third parties offering to fix the flaw… and meanwhile, the “big boys” of industry aren’t offering general solutions, only patches to their own affected systems.

The crux? (1) Change your passwords on sites where financial or other vital info is dealt with, then gradually work your way through the rest, as each site offers you assurances. (2) Try not to have the passwords be the same. (3) Help ignite political pressure for the whole world of online password security to have a rapid-response component (not dominance) offered by a neutral agency… one that is totally transparent, neutral and separate from all law or espionage “companies.” And…

…and (4) might I ask if you’ve noticed that this kind of event happens about twice a year? And it has been that way since the 1980s? Each of the events a scandal in its own right… hackers grab half a million Target card numbers… or Microsoft springs a leak… or Goldman Sachs… or Equifax… or Chelsea Manning and Julian Assange and Edward Snowden rip off veils of government secrecy… and pundits howl and the public quakes and no one ever seems to draw the correct conclusion —

that everything eventually leaks! And that maybe the entire password/secrecy model is inherently flawed. Or that there is another, different model that is inherently far more robust, that has only ever been mentioned in a few places, so far.

Here is one of those places.
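Point (2) of the crux above — never reuse a password — is the one piece any reader can automate today. A minimal sketch using only the Python standard library; the length and character set here are illustrative choices, not a security recommendation:

```python
# Minimal sketch of crux point (2): a distinct random password per site.
# Uses the stdlib's cryptographically secure 'secrets' module.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 16) -> str:
    # secrets.choice draws from a CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One credential per site; collisions are astronomically unlikely.
sites = ["bank.example", "mail.example", "shop.example"]
passwords = {site: new_password() for site in sites}
print(len(set(passwords.values())))  # → 3, all distinct
```

Pair it with a password manager so the burden of remembering them never lands on you.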

Meanwhile, whistleblowers remain a vital part of reciprocal accountability. I would like to see expanded protections that simultaneously expand reciprocal accountability and citizen sousveillance… while allowing our institutions to function in orderly ways.

Now comes this announcement that the Project On Government Oversight (POGO) will install SecureDrop… a new way for whistleblowers to deposit information anonymously, shielded from authorities trying to root out leakers. As author of The Transparent Society, I sometimes surprise folks by straddling this issue and pointing out that the needs of the bureaucracy should not be discounted completely! Or by reflex. Whistleblowing falls across a very wide spectrum, and if we are sophisticated citizens we will admit that the revealers of heinous-illegal plots deserve more protection than mewling attention junkies.

Still, there is a real role to be played by those pushing the envelope. Read more about POGO here.

Then again… Facebook can now listen in on your activities with a new audio recognition feature for its mobile app that can turn on smartphones’ microphones to “hear” what songs or television shows are playing in the background. Sounds cool… um, not.

== Brandeis the Seer ==

The famous dissent in Olmstead v. United States (1928), by Justice Louis Brandeis, is a vital mirror to hold up to our times. Take the most famous part of that eloquent dissent, regarding a seminal wiretapping case:

“Our Government is the potent, the omnipresent teacher,” Brandeis concluded. “For good or for ill, it teaches the whole people by its example. Crime is contagious. If the Government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy. To declare that in the administration of the criminal law the end justifies the means — to declare that the Government may commit crimes in order to secure the conviction of a private criminal — would bring terrible retribution.”

Which brings us to Andrew O’Hehir’s recent article on Salon, using Brandeis as a foil to discuss – and denounce – some recent polemics against Edward Snowden and his chosen journalist, Glenn Greenwald. To be honest, I found O’Hehir tendentious and sanctimonious, but there were some cogent moments that made the article worthwhile, especially when he shone some light on the incredible prescience Brandeis showed in his 1928 dissent:

“If Brandeis does not literally predict the invention of the Internet and widespread electronic surveillance, he comes pretty close,” for Brandeis wrote, “The progress of science in furnishing the Government with means of espionage is not likely to stop with wire-tapping …Ways may someday be developed by which the Government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.” Brandeis even speculated that psychiatrists of the future may be able to read people’s “unexpressed beliefs, thoughts and emotions” as evidence. O’Hehir notes, “…as far as I know we haven’t reached that dystopian nightmare yet. (But if that’s the big final revelation from the Snowden-Greenwald trove of purloined NSA secrets, you read it here first.)”

== Transparency media ==

Anyone care to review this for us? Post-Privacy and Democracy: Can there be Moral and Democratic Development in a Totally Transparent Society? by Patrick Held. It offers arguments for why the end of privacy, or at least of secrecy, might be inevitable, given our individual demand for technology.


Filed under society, technology, transparency

Will improved “vision” make us even better than we are?

Facebook’s acquisition of the Oculus company shows that big players are starting to take Augmented Reality (AR) glasses seriously — leading the 22-year-old daughter of a friend to comment “that stuff looks really lame.”

Ah, but the question of whether something “looks lame” is partly a matter of implementation… recall what the first cell phones were like? In the future you will be at an extreme disadvantage without access to augmented reality tools. These do not have to be worn all the time. But to refuse them entirely will be considered pretentious… like a person of our age loudly announcing “I refuse to own a cell phone!”

Still, we face a difficult transition period — perhaps 15 years — when the proper rules and procedures for AR will be worked out. Consider the lawsuits, when people who are distracted by images inside their eyewear step off the curb in front of moving cars! In my novel Existence I predict what some of those rules and procedures might turn out to be. For example, requiring that dangerous objects and curbs and nearby persons be outlined in “collision-avoidance yellow.” In the meantime, many lawyers will do well.

That is one reason why Google deliberately designed its “Glass” product to be less than full-AR — offset from the central cone of forward vision. The data that it presents do not cover the field of view needed for walking and safety. Google is happy to let smaller companies do those experiments… and deal with the legal transitions.

In Existence, I contemplated what Oculus and Glass may look like, more than a decade from now. One illustration (by Patrick Farley) shows a reporter with cyb-active hair… sensors at the tips of stalks that can rise up and look around, giving her the view of a very tall person and providing awareness of things going on behind her.

As for the Facebook purchase, Mark Zuckerberg said in a conference that “Oculus and VR have the potential to change the way we play, work, and communicate,” and that social networks today are about “sharing moments,” but in the future will be about “sharing experiences.” We’ve heard those promises for 20 years. For gaming, I see the potential, but to communicate or to engage in a social network, is there really a need or desire for deep immersion? Alas, there are basic reasons why the Web — and Facebook in particular — have not enhanced discourse or truth or negotiation or any adult activities at all. I explain those reasons here:

Disputation Arenas: Harnessing Conflict and Competitiveness for Society’s Benefit

Unfortunately, that paper is too “scholarly” to be influential. It merely gets to the underlying core of why the Internet has never achieved its potential as a problem-solving system. Alas.

== Other Authors ==

In Can We Avoid a Surveillance State Dystopia? Ramez Naam, the brilliant author of the novels Nexus and Crux as well as the nonfiction The Infinite Resource, offers his own view on the NSA imbroglio and our chances of staving off the Big Brother scenarios, achieving instead an open world of freedom and accountability: “And every organization in the world must now be on notice – everything it does may eventually become known.”

(Flash news! Ramez is on the list, nominated for the Campbell Award for Best New Writer in science fiction.  Congratulations Ramez!  And all the other nominees.)

All right then, where is the “end of history” promised by Francis Fukuyama after the fall of the Berlin Wall? The purported rush of the entire world to embrace liberal democracy? That did seem to be the way momentum was heading, in the Clinton era, but the 21st Century became a lot rougher — a more cynical and dogmatic era. Take this assessment from Freedom House in What’s Gone Wrong With Democracy?

“The progress seen in the late 20th century has stalled in the 21st. Even though around 40% of the world’s population, more people than ever before, live in countries that will hold free and fair elections this year, democracy’s global advance has come to a halt, and may even have gone into reverse. Freedom House reckons that 2013 was the eighth consecutive year in which global freedom declined, and that its forward march peaked around the beginning of the century. Between 1980 and 2000 the cause of democracy experienced only a few setbacks, but since 2000 there have been many. And democracy’s problems run deeper than mere numbers suggest. Many nominal democracies have slid towards autocracy, maintaining the outward appearance of democracy through elections, but without the rights and institutions that are equally important aspects of a functioning democratic system…. Faith in democracy flares up in moments of triumph, such as the overthrow of unpopular regimes in Cairo or Kiev, only to sputter out once again.”

It’s a thought-provoking article… though to clarify, not all futurists were sanguine that this transition would be easy. In 1985 I predicted both the fall of the Warsaw Pact and the rise of a militant macho resistance to tech-modernist ways — either a Latin or Hindi or (most-likely) Muslim rejection of the West’s prescription how to live. And that crises of oligarchy and propaganda and dogma always threaten traditions of pragmatic, good-natured and science-based negotiation.

Still, the article offers hope: “At the same time, as Alexis de Tocqueville pointed out in the 19th century, democracies always look weaker than they really are: they are all confusion on the surface but have lots of hidden strengths. Being able to install alternative leaders offering alternative policies makes democracies better than autocracies at finding creative solutions to problems and rising to existential challenges, though they often take a while to zigzag to the right policies.”

The trick is not to let ourselves be tricked into cynicism – like the “Tytler Calumny” lie that democracies are inherently weak… or the idiot-plot message in most Hollywood films, preaching that institutions always fail and citizenship is futile.

Your neighbors are not all sheep. Your political opponents are not all evil or fools. Try talking to those you despise. They are your fellow citizens. And together, we are not lesser than any “greatest generation.”


Filed under technology