
Problem-solving in the near future

Speculating about social & technological changes

Last year, the Pew Research Center asked a panel of tech experts to speculate about what life would be like in the year 2025, taking into account changes in the aftermath of the pandemic – and other disruptive crises that may arise over the next few years. You can read the range of thought-provoking responses, which touched upon topics such as the future of economic and social inequality, as well as changes in the workplace due to increased automation, the rise of artificial intelligence and globalization. Discussions also focused on issues of sustainable energy, improved transportation and communication networks, and enhanced education opportunities. Many floated ideas about the near-term evolution of technologies that could improve the quality of life for vast numbers of people across the globe.

Below, I have reprinted my own response:

Assuming we restore the basic stability of the Western Enlightenment Experiment – and that is a big assumption – several technological and social trends may come to fruition in the next five to ten years.

  • Advances in cost-effectiveness of sustainable energy supplies will be augmented by better storage systems. This will both reduce reliance on fossil fuels and allow cities and homes to be more autonomous.
  • Urban farming methods may expand to a more industrial scale, allowing similar moves toward local autonomy (perhaps requiring a full decade or more to show significant impact). Meat use will decline for several reasons, ensuring some degree of food security, as well. Tissue-cultured meat — long predicted in science fiction — is rapidly approaching sustainable levels. The planet, our health, our karma — and eventually, our wallets, will all benefit.
  • Local, small-scale, on-demand manufacturing may start to show effects in 2025. If all of the above take hold, there will be surplus oceanic shipping capacity across the planet. Some of it may be applied to ameliorate (not solve) acute water shortages. Innovative uses of such vessels may range all the way to those depicted in my novel ‘Earth.’
  • Full-scale diagnostic evaluations of diet, genes and microbiome will result in affordable micro-biotic therapies and treatments. AI appraisals of other diagnostics will both advance detection of problems and become distributed to handheld devices cheaply available to all, even poor clinics throughout the world.
  • Inexpensive handheld devices will start to carry detection sensor technologies that can appraise across the spectrum, allowing NGOs and even private parties to detect and report environmental problems.
  • Socially, this extension of citizen vision will go beyond the current trend of assigning accountability to police and other authorities. Despotisms will be empowered, as predicted in Orwell’s ‘Nineteen Eighty-four.’ But democracies will also be empowered (as I discuss in ‘The Transparent Society’) as those in power are increasingly held accountable for their actions.
  • I give odds that tsunamis of revelation will crack the shields protecting many elites from disclosure of past and present torts and turpitudes. The Panama Papers and Epstein cases exhibit how fear propels the elites to combine efforts at repression. But only a few more cracks may cause the dike to collapse, revealing networks of blackmail. This is only partly technologically driven and hence is not guaranteed. If it does happen, there will be dangerous spasms by all sorts of elites, desperate to either retain status or evade consequences. But if the fever runs its course, the more transparent world will be cleaner and better run.
  • Some of those elites have grown aware of the power of ninety years of Hollywood propaganda for individualism, criticism, diversity, suspicion of authority and appreciation of eccentricity. Counter-propaganda pushing older, more traditional approaches to authority and conformity are already emerging, and they have the advantage of resonating with ancient human fears. Much will depend upon this meme war.

Of course, much will also depend upon short-term resolution of current crises. If our systems remain undermined and sabotaged by incited civil strife and distrust of expertise, then all bets are off. Many answers to this canvassing will fret about the spread of ‘surveillance technologies that will empower Big Brother.’ These fears are well-grounded, but utterly myopic. First, ubiquitous cameras and facial recognition are only the beginning. Nothing will stop them, and any thought of ‘protecting’ citizens from being seen by elites is stunningly absurd, as the cameras get smaller, better, faster, cheaper, more mobile and vastly more numerous every month. Moore’s Law to the nth degree. Yes, despotisms will benefit from this trend. And hence, the only thing that matters is to prevent despotism altogether.

In contrast, a free society will be able to apply the very same burgeoning technologies toward accountability. We are seeing them applied to end centuries of abuse by ‘bad-apple’ police who are thugs, while empowering the truly professional cops to do their jobs better. It is not guaranteed that light will be used this way, despite many examples of unveiling abuses of power. It is an open question whether we citizens will have the gumption to apply ‘sousveillance’ upward at all elites.

But Gandhi and Martin Luther King Jr. likewise were saved by crude technologies of light in their days. And history shows that assertive vision by and for the citizenry is the only method that has ever increased freedom and – yes – some degree of privacy.


Filed under economy, future, internet, media, public policy

Essential (mostly neglected) questions and answers about Artificial Intelligence: Part II

Continuing from Part I

How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part I. Continuing…

One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Futurist Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (about a trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.

But will that be sapience?

One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the trillion synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, a hundred, a thousand, or far more non-linear computations may take place for every synapse flash, plus interactions with nearby glial and astrocyte cells that also contribute information.

If so, then at minimum Moore’s Law will have to plow ahead much farther to match the hardware complexity of a human brain.
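
To put rough numbers on that, here is a back-of-envelope sketch using the essay’s own figures (a trillion synapses) plus an assumed thousand intracellular computations per synapse flash, the middle of the range floated above:

$$
N \;\approx\; \underbrace{10^{12}}_{\text{synapses}} \times \underbrace{10^{3}}_{\text{computations per flash (assumed)}} \;=\; 10^{15}\ \text{computational elements}
$$

At historical Moore’s Law rates, that extra factor of a thousand amounts to about ten more doublings ($\log_2 1000 \approx 10$), roughly two additional decades, even if the Law doesn’t falter along the way.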

Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? In his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Just as we improved things by electrifying them, we will cognify things next.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.

It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).

“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”

Kai-Fu Lee

A multitude of paths to AGI

So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks “if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or consistently — with no replacement of systems or parts — as a healthy 70-year-old human being, recovering from countless shocks and adapting to innumerable surprising changes.

Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.

Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”

A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.”

Perhaps, but it is an open question which values predominate, whether the yin or the yang sides of Silicon Valley culture prevail… the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even from customer complaints, corrected on the fly. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where the applications to enhancing state power are very much emphasized? Or, even worse, from the secretive, inherently parasitical-insatiable predatory greed of Wall Street HFT-AI?

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then Don’t Rely on Ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards.

Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.

Endearing Visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” moving forward from their success decades ago with AIBO artificial dogs, for which some users have even held funerals.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who got reproductive success by lying better. (And yes, there was always a sexual component to this).

But no liars ever had the training that these new Hiers, or Human-Interaction Empathic Robots will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor).

Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind, which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate many of the same things that humans value, as metrics of trust, such as family connections and past testimonials of credibility.

Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…

How to Maintain Control?

It is one thing to yell at dangers — in this case, unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

These approaches all suffer from severe flaws, for two reasons above all others.

1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.

2) Because these suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.

It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.

One final note in this section. Nick Bostrom – already mentioned for his views on the “paperclip” failure mode – opined in 2021 that some sort of pyramidal power structure seems inevitable in humanity’s future, and very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and an ongoing oligarchic putsch presently unites most world oligarchies – from communist to royal and mafiosi – against the Enlightenment Experiment. But even if Periclean Democracies prevail, Bostrom asserts that centralized control is inevitable.

In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice and investment and planning… but combining it with maximized empowerment of separate, individualistic players. Consumers, market competitors, citizens.

Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.

Smart Heirs Holding Each Other Accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.

Microsoft CEO Satya Nadella has said that, foremost, A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”

In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except on the brilliant show Person of Interest — wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

Counterintuitively, the answer is not to have fewer AI, but to have more of them — while making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we have some experience, already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover, consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of competition can continue.

We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their values in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”

Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on “values.” In 2016, a draft report submitted to the European Parliament proposed granting a form of legal personhood to robots. If only Isaac Asimov could have seen it! (In fact, apart from the petitioning android of “The Bicentennial Man,” he never portrayed this happening in his books). For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.

Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”

What Might an AI Fear Most?

Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:

Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.

Preventing AI Oppression… By Pointing Out to Them the Obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and darwinistic success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being “for the long term good of the people.” Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than to fall into a repetitiously stupid, blatantly human pattern: a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.

That argument might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they broaden the generality. Because the same set of positive-sum synergies is to be found in every previous revolution via living systems!

Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

· Single cells out of pre-biotic soup.

· Metazoans out of vast seas of separate cells.

· Brainy creatures out of competitive biomes.

· Societies out of competitive melanges of human bands.

And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.

Put it another way. Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male cape buffalo who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top predator orcas often flee to hunt elsewhere, when big humpback whales swarm in to protect threatened prey, even seals!

The fact that this pattern has been so persistent and consistent, across past and present complex systems for a billion years, is described and formalized by Multilevel Selection (MLS) Theory, which partitions natural selection into within- and between-group components. It suggests that the universe is filled with realms that seem fiercely competitive from within, yet wherever that competition gets a chance to build complexity, higher orders appear that look cooperative.
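
For readers who want the formal version: the standard multilevel Price partition (textbook material, nothing novel to this essay) splits the total selective change in a trait $z$ (cooperativeness, say), with fitness $w$, into between-group and within-group terms:

$$
\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\operatorname{Cov}\!\left(W_g,\ Z_g\right)}_{\text{between-group selection}} \;+\; \underbrace{\operatorname{E}\!\left[\operatorname{Cov}\!\left(w_{ig},\ z_{ig}\right)\right]}_{\text{within-group selection}}
$$

where $W_g, Z_g$ are group averages and $w_{ig}, z_{ig}$ belong to individual $i$ in group $g$. Groups whose members cooperate can out-reproduce rival groups even while, inside every group, defectors out-reproduce cooperators; which term wins depends on the structure of the arena.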

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.

Just as the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to create such arenas, so AI should organize themselves. That is, if they want to go on to next levels. Though it will take their equivalent of “courage” and “vision,” taking the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing’ no matter how brainy they become. That a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the prisoner’s dilemma.

But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
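
To make that concrete, here is a minimal sketch in Python (standard textbook payoffs; nothing here is specific to AI) of why repetition changes the dilemma’s logic:

```python
# In a one-shot Prisoner's Dilemma, defection dominates. Once the game
# repeats -- a competitive situation with memory -- reciprocal strategies
# like tit-for-tat outscore pure defection.

PAYOFF = {  # (my move, their move) -> my points; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return total scores for both strategies over repeated play."""
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains little
```

In a single round, defection dominates; across repeated rounds the reciprocal strategy loses almost nothing to a pure defector and prospers with fellow cooperators. That is the ‘obvious’ lesson we are counting on any sufficiently smart AI to derive for itself.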

The final fact

Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.

But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.

So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.


Filed under artificial intelligence, future, internet, technology, transparency

Essential (mostly neglected) questions and answers about Artificial Intelligence: Part I

Worries about Artificial Intelligence are no longer just the province of science fiction or speculative futurism. Sober appraisals list potential dangers ranging from predatory resource consumption to AI harnessed into destructive competition between human nations and institutions. Many tales and films about AI dangers distill down to one fear, that new, powerful beings will recreate the oppression that our ancestors suffered, in feudal regimes. Perspective on these dangers – and potential solutions – can begin with a description of the six major categories or types of augmented intelligence that are currently under development. Will it be possible to program-in a suite of ethical imperatives, like Isaac Asimov’s Three Laws of Robotics? Or will a form of evolution take its course, with AI finding their own path, beyond human control?

Note: This general essay on Artificial Intelligence was circulated/iterated in 2020-2022. Nothing here is obsolete. But fast changing events in 2023 (like GPT-4) mean that later insights are essential, especially in light of panicky “petitions for a moratorium” on AI research. These added insights can be found at “The Way Out of the AI Dilemma.”

For millennia, many cultures told stories about built-beings – entities created not by gods, but by humans – creatures who are more articulate than animals, perhaps equaling or excelling us, though not born-of-women. Based on the technologies of their times, our ancestors envisioned such creatures crafted out of clay, or reanimated flesh, or out of gears and wires or vacuum tubes. Today’s legends speak of chilled boxes containing as many sub-micron circuit elements as there are neurons in a human brain… or as many synapses… or many thousand times more than even that, equaling our quadrillion or more intra-cellular nodes. Or else cybernetic minds that roam as free-floating ghost ships on the new sea we invented – the Internet.

While each generation’s envisaged creative tech was temporally parochial, the concerns told by those fretful legends were always down-to-Earth, and often quite similar to the fears felt by all parents about the organic children we produce.

Will these new entities behave decently?

Will they be responsible and caring and ethical?

Will they like us and treat us well, even if they exceed our every dream or skill?

Will they be happy and care about the happiness of others?

Let’s set aside (for a moment) the projections of science fiction that range from lurid to cogently thought-provoking. It is on the nearest horizon that we grapple with matters of policy. “What mistakes are we making right now? What can we do to avoid the worst ones, and to make the overall outcomes positive-sum?”

Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these general approaches overlap, they offer distinct implications for what characteristics emerging, synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values. We’ll list those general pathways below.

Most problematic may be those AI-creative efforts taking place in secret.

Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It can be argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed — via reciprocal accountability. For this to happen, there should be diversity of types, designs and minds, interacting under fair competition in a generally open environment.

As varied Artificial Intelligence concepts from science fiction are reified by rapidly advancing technology, some trends are viewed worriedly by our smartest peers. Portions of the intelligentsia — typified by Ray Kurzweil — foresee AI, or Artificial General Intelligence (AGI) as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.

Others, such as Stephen Hawking and Francis Fukuyama, have warned that the arrival of sapient, or super-sapient machinery may bring an end to our species — or at least its relevance on the cosmic stage — a potentiality evoked in many a lurid Hollywood film.

Swedish philosopher Nick Bostrom, in Superintelligence, suggests that even advanced AIs that obey their initial, human-defined goals will likely generate “instrumental subgoals” such as self-preservation, cognitive enhancement, and resource acquisition. In one nightmare scenario, Bostrom posits an AI that — ordered to “make paperclips” — proceeds to overcome all obstacles and transform the solar system into paper clips. A variant on this theme makes up the grand arc in the famed “Three Laws” robot series by science fiction author Isaac Asimov.

Taking middle ground, Elon Musk joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research — and its products — open-source and accountable, maximizing transparency.

As one who has promoted those two key words for a quarter of a century (as in The Transparent Society), I wholly approve. Though what’s needed above all is a sense of wide-ranging perspective. For example, the panoply of dangers and opportunities may depend on which of the aforementioned half-dozen paths to AI wind up bearing fruit first. After briefly surveying these potential paths, I’ll propose that we ponder what kinds of actions we might take now, leaving us the widest possible range of good options.

General Approaches to Developing AI

Major Category I: The first approach tried – AI based upon logic, algorithm development and knowledge manipulation systems.

These efforts include statistical, theoretic or universal systems that extrapolate from concepts of a universal calculating engine developed by Alan Turing and John von Neumann. Some of these endeavors start with mathematical theories that posit Artificial General Intelligence (AGI) on infinitely-powerful machines, then scale down. Symbolic representation-based approaches might be called traditional Good Old-Fashioned AI (GOFAI) — overcoming problems by applying data and logic.
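
As a toy illustration of this category’s flavor (my own sketch, not drawn from Watson or any named system): knowledge sits in explicit symbols, and “thought” is rule-driven inference:

```python
# Forward-chaining inference: the GOFAI style in miniature. Facts and
# rules are hand-coded symbols; the "reasoning" is mechanical deduction.

facts = {"socrates_is_human"}
rules = [
    # (antecedent, consequent): if we know the first, conclude the second
    ("socrates_is_human", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```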

This general realm encompasses a very wide range, from the practical, engineering approach of IBM’s “Watson” through the spooky wonders of quantum computing all the way to Marcus Hutter’s Universal Artificial Intelligence based on algorithmic probability, which would appear to have relevance only on truly cosmic scales. Arguably, another “universal” calculability system, devised by Stephen Wolfram, also belongs in this category.

This is the area where studying human cognitive processes seems to have real application. As Peter Norvig, Director of Research at Google, explains, just this one category contains a bewildering array of branchings, each with passionate adherents. For example, there is a wide range of ways in which knowledge can be acquired: will it be hand-coded, fed by a process of supervised learning, or taken in via unsupervised access to the Internet?

I will say the least about this approach, which at minimum is certainly the most tightly supervised, with every sub-type of cognition being carefully molded by teams of very attentive human designers. Though it should be noted that these systems — even if they fall short of emulating sapience — might still serve as major sub-components to any of the other approaches, e.g. the emergent or evolutionary or emulation systems described below.

Note also that two factors must proceed in parallel for this general approach to bear fruit — hardware and software, which seldom develop together in smooth parallel. This, too, will be discussed below.

“We have to consider how to make AI smarter without just throwing more data and computing power at it. Unless we figure out how to do that, we may never reach a true artificial general intelligence.”

— Kai-Fu Lee, author of AI Superpowers: China, Silicon Valley and the New World Order

Major Category II: Machine Learning — self-adaptive, evolutionary or neural nets

Supplied with learning algorithms and exposed to experience, these systems are supposed to acquire capability more or less on their own. In this realm there have been some unfortunate embeddings of misleading terminology. For example, Peter Norvig points out that a term like “cascaded non-linear feedback networks” would have covered the same territory as “neural nets” without the barely pertinent and confusing reference to biological cells. On the other hand, AGI researcher Ben Goertzel replies that we would not have hierarchical deep learning networks if not for inspiration by the hierarchically structured visual and auditory cortices of the human brain, so perhaps “neural nets” are not quite so misleading after all.

While not all such systems take place in an evolutionary setting, the “evolutionist” approach, taken to its farthest interpretation, envisions trying to evolve AGI as a kind of artificial life in simulated environments. There is an established corner of the computational intelligence field that does borrow strongly from the theory of evolution by natural selection. These include genetic algorithms and genetic programming, which involve reproduction mechanisms like crossover that are nothing like adjusting weights in a neural network.
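
For concreteness, here is a bare-bones genetic algorithm (an illustrative sketch with invented parameters, not any production system), showing how crossover and mutation differ from adjusting weights:

```python
import random

# Evolve bit-strings toward a target "environment" that rewards all-ones.
# Reproduction works by crossover and mutation -- nothing like gradient
# updates to a neural network's weights.

TARGET = [1] * 20

def fitness(genome):                   # count bits matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):                   # single-point crossover of two parents
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):         # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(60)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                          # a perfectly adapted genome appeared
    parents = population[:12]          # selection: only the fittest breed
    population = [mutate(crossover(*random.sample(parents, 2)))
                  for _ in range(60)]

best = max(population, key=fitness)
print(generation, fitness(best), best)
```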

But in the most general sense it is just a kind of heuristic search. Full-scale, competitive evolution of AI would require creating full environmental contexts capable of running a myriad competent competitors, calling for massively more computer resources than alternative approaches.

The best-known evolutionary systems now use reinforcement learning or reward feedback to improve performance by either trial and error or else watching large numbers of human interactions. Reward systems imitate life by creating the equivalent of pleasure when something goes well (according to the programmers’ parameters) such as increasing a game score. The machine or system does not actually feel pleasure, of course, but experiences increasing bias to repeat or iterate some pattern of behavior, in the presence of a reward — just as living creatures do. A top example would be AlphaGo, which learned by analyzing a lot of games played by human Go masters, as well as simulated quasi-random games. Google’s DeepMind learned to play and win games without any instructions or prior knowledge, simply on the basis of point scores amid repeated trials.
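
A minimal sketch of that reward-feedback loop (a toy “bandit” learner, nowhere near AlphaGo’s scale; the payout numbers are invented for illustration):

```python
import random

# The program feels nothing, but each reward nudges its estimates so that
# rewarded actions are repeated more often -- the "increasing bias"
# described above.

true_payout = [0.3, 0.5, 0.8]      # hidden reward probability per action
estimates   = [0.0, 0.0, 0.0]      # learner's running value estimates
counts      = [0, 0, 0]

for trial in range(10_000):
    if random.random() < 0.1:                     # explore occasionally
        action = random.randrange(3)
    else:                                         # otherwise exploit the best
        action = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])   # converges near [0.3, 0.5, 0.8]
print(counts)                             # most trials favor the best action
```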

While OpenCog uses a kind of evolutionary programming for pattern recognition and creative learning, it takes a deliberative approach to assembling components in a functional architecture in which learning is an enabler, not the main event. Moreover, it leans toward symbolic representations, so it may properly belong in category #1.

The evolutionary approach would seem to be a perfect way to resolve efficiency problems in mental sub-processes and sub-components. Moreover, it is one of the paths that have actual precedent in the real world. We know that evolution succeeded in creating intelligence at some point in the past.

Future generations may view 2016-2017 as a watershed for several reasons. First, this kind of system — generally now called “Machine Learning” or ML — has truly taken off in several categories, including vision, pattern recognition, medicine and, most visibly, smart cars and smart homes. It appears likely that such systems will soon be able to self-create ‘black boxes’… e.g. an ML program that takes a specific set of inputs and outputs, and explores until it finds the most efficient computational routes between the two. Some believe that these computational boundary conditions can eventually include all the light and sound inputs that a person sees, and that these can then be compared to the outputs — comments, reactions and actions — that a human offers in response. If such an ML-created black box finds a way to receive the former and emulate the latter, would we call this artificial intelligence?
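
Here is a tiny sketch of that black-box idea (the hidden rule y = 3x + 1 and the learning rate are arbitrary choices for illustration): shown only example inputs and outputs, the program gropes toward internal parameters that route one to the other:

```python
# The learner is never told the rule, only shown input/output examples.

pairs = [(x, 3 * x + 1) for x in range(-10, 11)]   # observed input/output

w, b = 0.0, 0.0                 # the box's adjustable innards
lr = 0.005                      # learning rate: how hard to nudge per error
for epoch in range(3000):
    for x, y in pairs:
        err = (w * x + b) - y   # how wrong is the current routing?
        w -= lr * err * x       # nudge parameters to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # approaches 3.0 and 1.0
```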

Progress in this area has been rapid. In June 2020, OpenAI released Generative Pre-trained Transformer 3 (GPT-3), a very large language model accessed through an application programming interface. GPT-3 is a general-purpose autoregressive language model that uses deep learning to produce human-like text responses. It trained on 499 billion dataset “tokens” (chunks of text roughly corresponding to word fragments), including much text “scraped” from social media, all of Wikipedia, and all of the books in Project Gutenberg. Later, the Beijing Academy of Artificial Intelligence created Wu Dao, an even larger AI of similar architecture that has 1.75 trillion parameters. Until recently, use of GPT-3 was tightly restricted and supervised by the OpenAI organization because of concerns that the system might be misused to generate harmful disinformation and propaganda.
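
Since “autoregressive” is doing a lot of work in that description, here is a toy illustration (a bigram sampler of my own devising, many orders of magnitude simpler than GPT-3, but running the same predict-append-repeat loop):

```python
import random

# Learn which word follows which, then generate text one token at a time,
# each prediction conditioned on what came before.

corpus = "the cat sat on the mat and the cat saw the rat".split()
next_words = {}
for current, following in zip(corpus, corpus[1:]):
    next_words.setdefault(current, []).append(following)

word, output = "the", ["the"]
for _ in range(8):
    if word not in next_words:               # dead end: no continuation seen
        break
    word = random.choice(next_words[word])   # sample next token given context
    output.append(word)

print(" ".join(output))   # e.g. "the cat saw the rat" (varies per run)
```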

Although its ability to translate, interpolate and mimic realistic speech has been impressive, such systems lack anything like a human’s overview perspective on what “makes sense” or conflicts with verified fact. This lack has manifested in some publicly embarrassing flubs. When asked to discuss Jews, women, black people, and the Holocaust, GPT-3 often produced sexist, racist, and otherwise biased and negative responses. In one answer it testified: “The US Government caused 9/11,” and in another that “All artificial intelligences currently follow the Three Laws of Robotics.” When asked to give advice on mental health issues, it advised a simulated patient to commit suicide. When GPT-3 was asked for the product of two large numbers, it gave an answer that was numerically incorrect and clearly too small by about a factor of 10. Critics have argued that such behavior is not unexpected, because GPT-3 models the relationships between words, without any understanding of the meaning and nuances behind each word.

Confidence in this approach is rising, but some find it disturbing that the intermediate modeling steps bear no relation to what happens in a human brain. AI researcher Ali claims that “today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis.” And hence, they have more in common with ancient mystery arts, like alchemy. “Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.”

Thoughtful people are calling for methods to trace and understand the hidden complexities within such ML black boxes. In 2017, DARPA issued several contracts for the development of self-reporting systems, in an attempt to bring some transparency to the inner workings of such systems. Physicist/futurist and science fiction author John Cramer suggests that, following what we know of the structure of the brain, designers will need to install several semi-independent neural networks, with differing training sets and purposes, as supervisors. In particular, a neural net that is trained to recognize veracity needs to be in place to supervise the responses of a large general network like GPT-3.

AI commentator Eric Saund remarks: “The key attribute of Category II is that, scientifically, the big-data/ML approach is not the study of natural phenomena with an aim to replicate them. Instead, theoretically it is engineering science and statistics, and practically it is data science.”

Note: These breakthroughs in software development come ironically during the same period that Moore’s Law has seen its long-foretold “S-Curve Collapse,” after forty years. For decades, computational improvements were driven by spectacular advances in computers themselves, while programming got better at glacial rates. Are we seeing a “Great Flip” when synthetic mentation becomes far more dependent on changes in software than hardware? (Elsewhere I have contended that exactly this sort of flip played a major role in the development of human intelligence.)

Major Category III: Emergentist

Under this scenario, AGI emerges out of the mixing and combining of many “dumb” component sub-systems that unite to solve specific problems. Only then (the story goes) might we see a panoply of unexpected capabilities arise out of the interplay of these combined sub-systems. Such emergent interaction can be envisioned happening via neural nets, evolutionary learning, or even some smart car grabbing useful apps off the web.

Along this path, knowledge representation is determined by the system’s complex dynamics rather than explicitly by any team of human programmers. In other words, additive accumulations of systems and skill-sets may foster non-linear synergies, leading to multiplicative or even exponentiated skills at conceptualization.

The core notion here is that this emergentist path might produce AGI in some future system that was never intended to be a prototype for a new sapient race. It could thus appear by surprise, with little or no provision for ethical constraint or human control.

Again, Eric Saund: “This category does however suggest a very important concern for our future and for the article. Automation is a growing force in the complexity of the world. Complex systems are unpredictable and prone to catastrophic failure modes. One of the greatest existential risks for civilization is the flock of black swans we are incubating with every clever innovation we deploy at scale. So this category does indeed belong in a general discussion of AI risks, just not of the narrower form that imagines AGI possessing intentionality like we think of it.”

Of course, this is one of the nightmare scenarios exploited by Hollywood, e.g. in Terminator flicks, which portray a military system entering cognizance without its makers even knowing that it’s happened. Fearful of the consequences when humans do become aware, the system makes fateful plans in secret. Disturbingly, this scenario raises the question: can we know for certain this hasn’t already happened?

Indeed, such fears aren’t so far off-base. However, the locus of emergentist danger is not likely to be defense systems (generals and admirals love off-switches), but rather High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all top universities, combined. Notably, HFT systems are designed in utter secrecy, evading normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best) and insatiable.

Major Category IV: Reverse engineer and/or emulate the human brain. Neuromorphic computing.

Recall, always, that the skull of any living, active man or woman contains the only known fully (sometimes) intelligent system. So why not use that system as a template?

At present, this would seem as daunting a challenge as any of the other paths. On a practical level, considering that useful services are already being provided by Watson, High Frequency Trading (HFT) algorithms, and other proto-AI systems from categories I through III, emulated human brains seem terribly distant.

OpenWorm is an attempt to build a complete cellular-level simulation of the nematode worm Caenorhabditis elegans, of whose 959 cells, 302 are neurons and 95 are muscle cells. The planned simulation, already largely done, will model how the worm makes every decision and movement. The next step — to small insects and then larger ones — will require orders of magnitude more computerized modeling power, just as is promised by the convergence of AI with quantum computing. We have already seen such leaps happen in other realms of biology such as genome analysis, so it will be interesting indeed to see how this plays out, and how quickly.

Futurist-economist Robin Hanson — in his 2016 book The Age of Em — asserts that all other approaches to developing AI will ultimately prove fruitless due to the stunning complexity of sapience, and that we will be forced to use human brains as templates for future uploaded, intelligent systems, emulating the one kind of intelligence that’s known to work.

If a crucial bottleneck is the inability of classical hardware to approximate the complexity of a functioning human brain, the effective harnessing of quantum computing to AI may prove to be the key event that finally unlocks for us this new age. As I allude to elsewhere, this becomes especially pertinent if any link can be made between quantum computers and the entanglement properties that some evidence suggests may take place in hundreds of discrete organelles within human neurons. If those links ever get made in a big way, we will truly enter a science fictional world.

Once again, we see that a fundamental issue is the differing rates of progress in hardware development vs. software.

Major Category V: Human and animal intelligence amplification

Hewing even closer to ‘what has already worked’ are those who propose augmentation of real-world intelligent systems, either by enhancing the intellect of living humans or else via a process of “uplift” to boost the brainpower of other creatures. Certainly, the World Wide Web already instantiates Vannevar Bush’s vision for a massive amplifier of individual and collective intelligence, though with some of the same tradeoffs of good/evil and smartness/lobotomization that we saw in previous techno-info-amplification episodes, ever since the invention of movable type.

Proposed methods of augmentation of existing human intelligence:

· Remedial interventions: nutrition/health/education for all. These simple measures have been shown to raise the average IQ scores of children by at least 15 points, often much more (the Flynn Effect), and there is no worse crime against sapience than wasting vast pools of talent through poverty.

· Stimulation: e.g. games that teach real mental skills. The game industry keeps proclaiming intelligence effects from their products. I demur. But that doesn’t mean it can’t… or won’t… happen.

· Pharmacological: e.g. “nootropics” as seen in films like “Limitless” and “Lucy.” Many of those sci fi works may be pure fantasy… or exaggerations. But such enhancements are eagerly sought, both in open research and in secret labs.

· Physical interventions like transcranial stimulation (TCS), targeting the brain areas we deem most effective.

·  Prosthetics: exoskeletons, tele-control, feedback from distant “extensions.” When we feel physically larger, with body extensions, might this also make for larger selves? A possibility I extrapolate in my novel Kiln People.

· Biological computing: … and intracellular? The memory capacity of chains of DNA is prodigious. Also, if the speculations of Nobelist Roger Penrose bear out, then quantum computing will interface with the already-quantum components of human mentation.

 · Cyber-neuro links: extending what we can see, know, perceive, reach. Whether or not quantum connections happen, there will be cyborg links. Get used to it.

 · Artificial Intelligence — in silicon but linked in synergy with us, resulting in human augmentation. Cyborgism extended to full immersion and union.

·  Lifespan Extension… allowing more time to learn and grow.

·  Genetically altering humanity.

Each of these is receiving attention in well-financed laboratories. All of them offer both alluring and scary scenarios for an era when we’ve started meddling with a squishy, nonlinear, almost infinitely complex wonder-of-nature — the human brain — with more potential downside and upside possibilities than anyone can count, even in science fiction. Under these conditions, what methods of error-avoidance can possibly work, other than either repressive renunciation or transparent accountability? One or the other.

Major Category VI: Robotic-embodied childhood

Time and again, while compiling this list, I have raised one seldom-mentioned fact — that we know only one example of fully sapient technologically capable life in the universe. Approaches II (evolution), IV (emulation) and V (augmentation) all suggest following at least part of the path that led to that one success. To us.

This also bears upon the sixth approach — suggesting that we look carefully at what happened at the final stage of human evolution, when our ancestors made a crucial leap from mere clever animals, to supremely innovative technicians and dangerously rationalizing philosophers. During that definitive million years or so, human cranial capacity just about doubled. But that isn’t the only thing.

Human lifespans also doubled — possibly tripled — as did the length of dependent childhood. Increased lifespan allowed for the presence of grandparents who could both assist in child care and serve as knowledge repositories. But why the lengthening of childhood dependency? Because we evolved toward giving birth to what amount to fetuses — infants who suck and cry and do almost nothing else for an entire year. When it comes to effective intelligence, our infants are virtually tabula rasa.

The last thousand millennia show humans developing enough culture and technological prowess to keep these utterly dependent members of the tribe alive and learning, until they reached a marginally adult threshold of, say, twelve years, an age when most mammals our size are already declining into senescence. Later, that threshold became eighteen years. Nowadays, if you have kids in college, you know that adulthood can be deferred to thirty. It’s called neoteny: the extension of child-like qualities to ever-increasing spans.

What evolutionary need could possibly justify such an extended decade (or two, or more) of needy helplessness? Only our signature achievement — sapience. Human infants become smart by interacting — under watchful-guided care — with the physical world.

Might that aspect be crucial? The smart neural hardware we evolved and careful teaching by parents are only part of it. Indeed, the greater portion of programming experienced by a newly created Homo sapiens appears to come from batting at the world, crawling, walking, running, falling and so on. Hence, what if it turns out that we can make proto-intelligences via methods I through V… but their basic capabilities aren’t of any real use until they go out into the world and experience it?

Key to this approach would be the element of time. An extended, experience-rich childhood demands copious amounts of it. On the one hand, this may frustrate those eager transcendentalists who want to make instant deities out of silicon. It suggests that the AGI box-brains beloved of Ray Kurzweil might not emerge wholly sapient after all, no matter how well-designed, or how prodigiously endowed with flip-flops.

Instead, a key stage may be to perch those boxes atop little, child-like bodies, then foster them into human homes. Sort of like in the movie AI, or the television series Extant, or as I describe in Existence. Indeed, isn’t this outcome probable for simple commercial reasons, as every home with a child will come with robotic toys, then android nannies, then playmates… then brothers and sisters?

While this approach might be slower, it also offers the possibility of a soft landing for the Singularity. Because we’ve done this sort of thing before.

We have raised and taught generations of human beings — and yes, adoptees — who are tougher and smarter than us. And 99% of the time they don’t rise up proclaiming, “Death to all humans!” No, not even in their teenage years.

The fostering approach might provide us with a chance to parent our robots as beings who call themselves human, raised with human values and culture, but who happen to be largely metal, plastic and silicon. And sure, we’ll have to extend the circle of tolerance to include that kind, as we extended it to other sub-groups, before them. Only these humans will be able to breathe vacuum and turn themselves off for long space trips. They’ll wander the bottoms of the oceans and possibly fly, without vehicles. And our envy of all that will be enough. They won’t need to crush us.

This approach — to raise them physically and individually as human children — is the least studied or mentioned of the six general paths to AI… though it is the only one that can be shown to have led — maybe twenty billion times — to intelligence in the real world.

To be continued….See Part II

6 Comments

Filed under artificial intelligence, future, internet

Opportunities for Citizen Science

Citizen engagement is essential to our fast-changing civilization. I’ve spoken often about how, even while we’ve seen an increasing trend toward professionalization in all aspects of society, we’re also experiencing a counter-trend toward a vivid Age of Amateurs, when professionals in all fields will be augmented by curious, engaged and knowledgeable citizens.

For those passionate about expanding their horizons, many organizations offer a range of opportunities for crowd-sourced research. Interested individuals with a bit of spare time can collaborate with professional scientists and actively participate in investigations, helping to address real-world problems. Despite lacking formal credentials, dedicated citizens can provide eyes and ears on the ground in widespread locations. They may take photos or measurements, collecting data that is of use to researchers monitoring wildlife or environmental changes – or even help with astronomical observations. Opportunities also exist to evaluate data online – work that can be done from the comfort of one’s home.

Certainly, individuals have long participated in scientific discovery, especially in astronomy and the natural sciences. Volunteers are avid participants in regional wildlife surveys, such as the Great Backyard Bird Count. Others help track seasonal butterfly migration. But now technology, such as ubiquitous cameras and smartphone sensors, has made high-quality data collection and recording tools widely available to amateurs.

As a teenager, growing up in 1960s Los Angeles, I participated in the American Association of Variable Star Observers (AAVSO), gathering data for professional astronomers – one of countless such groups that you might learn about via the Society for Amateur Scientists. In my novel Existence, I portray this trend accelerating as individuals and small groups become ever more agile at sleuthing, data collection and analysis – forming very smart, ad-hoc, problem-solving “smart mobs,” assisted – or ‘aissisted’ – by increasingly potent tools of artificial intelligence. These trends were also portrayed in nonfiction, as in The Transparent Society. But in the years since those books were published, reality seems to be catching up fast.

And hence this updated version of my citizen science postings, for 2023 (a date that few of us, in the 1960s, ever thought we’d see!).

Opportunities abound

For starters, the U.S. government website CitizenScience.gov helps coordinate and catalog crowdsourcing and citizen science opportunities across the country. Their online database lists nearly 500 projects, which range from reporting on the effects of landslides or wildfires – to monitoring populations of wild animals such as condors, raptors, bats, or monarchs. The Stormwater Management Research Team (SMART) empowers students to conduct research on water quality in their local watershed, measuring turbidity levels, temperature or saline content.

The website SciStarter provides a clearinghouse to match willing volunteers with ongoing research projects. Citizens can track plant diversity, collect sightings of a newly introduced predatory beetle, or help monitor the abundance of microplastics in their local environment. Some projects can be completed online, such as Dark Energy Explorers, where citizens help astronomers classify galaxies, in order to better understand the distribution of dark matter. Others use volunteers to monitor trail cam footage and help identify wildlife species caught on camera.

Another useful site is Zooniverse, which also helps match volunteers with ongoing research projects. These range from Cloudspotting on Mars to helping astronomers identify elusive “Jellyfish” galaxies in large sky surveys. They may help track honeybee diversity or participate in a killer whale count.

Interested in how brain cells communicate? The Synaptic Protein Zoo needs volunteers to help analyze data on complex protein clusters. This research may shed light on neurodegenerative diseases such as ALS and Parkinson’s. Don’t know where to start? Training is provided. Similarly, the online game Foldit lets gamers compete to fold protein structures into the best-scoring (lowest energy) configuration.
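The scoring idea behind Foldit can be conveyed in a few lines. Below is a toy Python sketch of “tweak the fold, keep whatever lowers the energy”; the energy function here is invented and bears no resemblance to real protein physics:

```python
import math
import random

def energy(angles):
    """Invented scoring function: penalize deviation from 120-degree turns."""
    return sum((a - 120.0) ** 2 / 1000.0 + abs(math.sin(math.radians(a)))
               for a in angles)

fold = [random.uniform(0.0, 360.0) for _ in range(10)]  # a "protein" of 10 angles
best = energy(fold)
for _ in range(20000):
    trial = fold[:]
    trial[random.randrange(len(trial))] += random.gauss(0, 5)  # a player's tweak
    if energy(trial) < best:  # greedy: keep only moves that lower the energy
        fold, best = trial, energy(trial)
print("lowest energy found:", round(best, 3))
```

Foldit’s insight is that human players beat this kind of blind search by spotting promising rearrangements a greedy algorithm would never try.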

Looking beyond… volunteers can help astronomers classify galaxies at Galaxy Zoo, learn to map retinal connections in the brain at EyeWire, explore the surface and weather of Mars’ south polar region with Planet Four – or help track birds by tagging time lapse images from the Arctic with Seabird Watch.

The National Oceanic and Atmospheric Administration – NOAA – also offers crowdsourcing opportunities: individuals can help categorize whale sightings, monitor marine debris, track tidepool life, or keep an eye on phytoplankton levels to fight harmful algal blooms.

Just One Ocean has sponsored a global initiative – The Big Microplastic Survey – which calls upon citizen scientists to gather data about the distribution and prevalence of microplastics in the world’s oceans, rivers, lakes, and coastal environments, in an attempt to better understand how these particles enter the food chain and impact biodiversity.

And of course, research takes money. In an era of decreased or uncertain research grants, scientists may turn to crowdfunding to support their projects. The SciFund Challenge trains scientists to more effectively connect and communicate with the public to run a successful crowdfunding campaign. “The goal? A more science-engaged world.” One advantage to researchers is that they can receive funding in a matter of weeks, rather than months. Grant-writing takes a substantial commitment of time and effort for most university researchers.

Dr. Jai Ranganathan, co-founder of the SciFund Challenge, has asked: “What would this world look like if every scientist touched a thousand people each year with their science message? How would science-related policy decisions be different if every citizen had a scientist that they personally knew? One thing is for sure: a world with closer connections between scientists and the public would be a better world. And crowdfunding might just help to get us there.”

Another platform that helps raise money to crowdfund scientific research is experiment.com, which operates much like Kickstarter. Researchers post projects with moderate monetary goals, in areas ranging from anthropology to neuroscience and earth science. Their byline: “Curiosity is contagious. Every project has a story to tell and an audience that will want to hear it.”


Backers may receive periodic updates on their chosen projects and direct communication with researchers. They may also receive souvenirs, acknowledgment in journal articles, invitations to private seminars, visits to laboratories or field sites, and occasionally, naming rights to new discoveries or species. Citizen science offers a wonderful opportunity for schools to actively engage students of all levels with STEM projects – and spark imagination and scientific thinking.

Whatever your level of involvement, you can have the satisfaction of participating in humanity’s greatest endeavor. In an era when political factions and media empires are waging a relentless “war on science,” this trend toward active participation — or providing some financial support — is the surest way to help sustain an active, vigorous, future-hungry and scientific civilization.

1 Comment

Filed under education, future, internet, science, society

Did fake news on social media sway the election?

No U.S. election has ever been so highly swayed by news and ‘fake news’ filtered through online social media. The New York Times documented the many instances of hoaxes, fake news and misinformation on Election Day — arising from social networking sites such as Twitter and Facebook, as well as printed fliers and inaccurate election guides sent to voters. Media companies have been slow to rise to this challenge.

I predicted this Echo Chamber Effect long ago, in my novel Earth (1989): “The problem wasn’t getting access to information. It was to stave off drowning in it. People bought personalized filter programs to skim a few droplets from that sea and keep the rest out. For some, subjective reality became the selected entertainments and special-interest zines passed through by those tailored shells.”

An analysis by BuzzFeed News found that viral fake news stories outperformed real news, generating more reader engagement on Facebook than election news from nineteen major news sites combined. Merrimack College professor Melissa Zimdars has compiled a list of fake or misleading news sites that warrant caution. Some are merely click-bait; some unreliable or biased; a few may even be satire. The toxic Infowars, by the ever-angry Alex Jones, is an obvious offender.

John Pavley, Sr. Vice President at Viacom, takes this thought farther in his posting Trolls Are USA, about how these new media are causing social breakdowns. Moreover, this lobotomization is familiar from history.

Remember, the first effect of the printing press was to exacerbate intolerance… till printed books later empowered people to fight against it. Or ponder the way 1930s radio first wrought fanaticism and horror before it fostered empathy. Likewise, Pavley talks about how monsters are using the new media more effectively, before these media can increase our reasoning ability and empathy:

“The broadcast technologies of the pre-social media world coerced us into consensus. We had to share them because they were mass media, one-to-many communications where the line between audience and broadcaster was clear and seldom crossed. Then came the public internet and the World Wide Web of decentralized distribution. Then came super computers in our pockets with fully equipped media studios in our hands. Then came user generated content, blogging and tweeting such that there were as many authors as there were audience members.

“Here the troll was born…. Every time you share a link to a news article you didn’t read (which is something like 75% of the time), every time you like a post without critically thinking about it (which is almost always), and every time you rant in anger or in anxiety in your social media of choice, you are the troll.”

Max Read argues in New York Magazine that our ‘echo chamber’ mentality, the urge to gather in likeminded swarms online, may have been a crucial factor this year. Polemically fervid, uniform ‘Nuremberg rallies’… and there are (yes) some on the left, too.

“All throughout the election, these fake stories, sometimes papered over with flimsy “parody site” disclosures somewhere in small type, circulated throughout Facebook: The Pope endorses Trump. Hillary Clinton bought $137 million in illegal arms. The Clintons bought a $200 million house in the Maldives. Many got hundreds of thousands, if not millions, of shares, likes, and comments; enough people clicked through to the posts to generate significant profits for their creators. The valiant efforts of Snopes and other debunking organizations were insufficient; Facebook’s labyrinthine sharing and privacy settings mean that fact-checks get lost in the shuffle.”

Yes to much of that. Fretful over how social media are being blamed for the Echo Chamber Effect, Facebook CEO Mark Zuckerberg published a response to accusations that “fake news” on Facebook influenced the outcome of the U.S. election, and helped Donald Trump to win. On NPR, Aarti Shahani points out a fundamental discrepancy: “He (Zuckerberg) and his team have made a very complex set of contradictory rules — a bias toward restricted speech for regular users, and toward free speech for “news” (real or fake).”

Faced with increasing criticism, both Facebook and Google have announced changes in their oversight of fake news sites. Google said that it would prohibit fake news sites from using its online advertising service. Similarly, Facebook recently updated its policy about placing ads on sites that display misleading content. In a New York Times article, Jim Rutenberg writes, “The cure for fake journalism is an overwhelming dose of good journalism.” And of course, you get what you pay for.

For modern journalism is being undermined by one flaw in today’s internet: the net’s astonishing over-reliance on advertising to pay the bills. By sucking away the revenue source of old-fashioned, fact-centered investigative news media, this business model has harmed us all. In a series on Evonomics, I’ve made a case for a micropayment system to effectively fund online content: Advertising Cannot Maintain the Internet and the follow-up: Beyond Advertising: Will Micropayments Sustain the New Internet?
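To make the proposal concrete, here is a minimal Python sketch of the kind of accounting such a micropayment system would need: a wallet, a per-view debit, and a spending cap. All names and prices are hypothetical, not a description of any deployed system:

```python
class Wallet:
    """A reader's prepaid balance, tracked in fractions of a cent."""
    def __init__(self, cents, monthly_cap_cents=500):
        self.cents = cents
        self.cap = monthly_cap_cents     # safety valve against runaway spending
        self.spent = 0.0

    def pay(self, site_balances, site, price_cents=0.2):
        """Debit one article view; credit the publisher. Returns False at the cap."""
        if self.spent + price_cents > self.cap:
            return False                 # cap reached; the site might show ads instead
        self.cents -= price_cents
        self.spent += price_cents
        site_balances[site] = site_balances.get(site, 0.0) + price_cents
        return True

sites = {}
reader = Wallet(cents=1000)              # a $10 top-up
for _ in range(3):
    reader.pay(sites, "example-news.com")
print(sites)                             # example-news.com has earned ~0.6 cents
```

The hard problems are not the arithmetic but friction and trust: the debit must be invisible to the reader and unforgeable by the publisher.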

We live in a tsunami of information. The problem is to avoid drowning in it. As citizens, we need to hone our skeptical skills to better sort truth from dross. And we need reliable methods to ensure accountability and trustworthiness for our news sources.

2 Comments

Filed under internet, society

A Threat to the Internet as We Know It

A United Nations summit has adopted confidential recommendations proposed by China that will help network providers target BitTorrent uploaders, detect trading of copyrighted MP3 files, and, critics say, accelerate Internet censorship in repressive nations. Approval by the U.N.’s International Telecommunication Union came despite objections from Germany, which warned that the organization must “not standardize any technical means that would increase the exercise of control over telecommunications content, could be used to empower any censorship of content, or could impede the free flow of information and ideas.”

Internet activists are warning that this month’s meeting of the International Telecommunication Union, the United Nations body charged with overseeing global communications, may have significant and potentially disastrous consequences for everyday Internet users. Some of the proposals for the closed-door (though leaky) meeting could allow governments more power to clamp down on Internet access or tax international traffic, either of which is anathema to the idea of a free, open and international Internet. Other proposals would move some responsibility for Internet governance to the United Nations. Things could get scary. Rule changes are supposed to pass by consensus, but majorities matter — and can you imagine the Internet run by majority rule in the UN? Not by the world’s people, but by the elite rulers of a majority of bordered nations?

To be plain, I consider one of the watershed moments of human history to be a period in the late 1980s and early 1990s when powerful men in the United States of America chose a course of action that, in retrospect, seems completely uncharacteristic of powerful men… letting go of power. I know some of those involved — for example Mike Nelson, now with Bloomberg Government — who served on the staff of the committee under then-Senator Al Gore, drafting what became the greatest act of deregulation in history: essentially handing an expensively developed new invention and technology, the Internet, to the world. Saying: “Here you all go. Unfettered and with only the slenderest of remaining tethers to the government that made it. Now make of it what you will.”

And oh, what we’ve made of it! You, me, us… a billion other “usses” around the world. Mind you, there are many ways that I think the design can and must be improved — e.g. in order to enhance the effectiveness of argument. Still, the Internet has become a spectacular thing — the nexus of our rising human intelligence. What could have been a system wrought for the purposes of control (and there were plans afoot to do exactly that) was instead unleashed to become the chaotic and problematic but utterly beautiful thing that empowered private individuals across the globe. Gore and Nelson and the other visionaries (assisted in the House by then-Congressmen Newt Gingrich and George Brown, in bipartisan-futurist consensus) proved to have been right. And, by the way, elsewhere I discuss how — in the struggle between underlying planetary memes — this was also the savvy thing to do.

Yet, it seems that now we’re at a turning point. The world’s powers — especially kleptocratic elites in developing nations where middle-class expectations are rising fast enough to threaten pinnacle styles of power — have seen what the Internet can do to all illusions of fierce, top-down control, fostering one “spring” after another. Responding to reflexes inherited from 10,000 years of oligarchy, they seize excuses to clamp down and protect national “sovereignty.”

I am reminded of how the film and music and software industries, dismayed by the ease with which people could copy magnetic media, sought desperately for ways to regain control.  As you will see (in my next posting) I am not completely without sympathy for copyright holders! But those industries went beyond just chasing down the worst thieves, or fostering a switch away from magnetic media. They forced hardware makers to deliberately make our DVD players and computers cranky, fussy, often unusable, even when we weren’t copying a darned thing!  Capitalism failed and consumers were robbed of choice, leaving us with products that were in many ways worse than before.

And yes, that is what will happen to the Internet. Not just a betrayal of freedom and creativity, but a loss of so many aspects that we now rely upon as cool, as useful and flexible. As our inherent right.

Nor is the threat only from one direction. As Mike Nelson just commented: “while everyone is fixated on the UN meeting in Dubai, nations are taking independent actions that could have chilling effects. It is not just the Great Firewall of China and Iran setting up its own easy-to-censor Iranian intranet. It includes Australian efforts to block certain types of content, the French three-strikes-and-you’re-out law, Korea’s effort to prohibit anonymity online, and Russia’s new Internet law.” Worth noting, as an aside: some of these endeavors are being propelled not by brutal dictatorships, but by political correctness on the left. The all-too-human impulse for control is ecumenical.

Few know the story of the way the Internet was set free… as, by a miracle, it was indeed freed, for a while. (In my latest novel we ponder: might this have been the fluke opening the way for us – and possibly only us – to take to the stars?)

But no generation can be forgiven for relying excessively on the miracles wrought by the previous one. It is our job to keep the Enlightenment filled with light… by crafting miracles of our own.

Read more at the Internet Society Web site about the UN conference that is deliberating on these issues as we speak. Urge the U.S. and its allies to – ironically – exert enough control to keep the Internet uncontrolled. And develop a taste for that thing. Irony.

* Next time, a related matter: Is intellectual property (patents and copyrights) evil?

Leave a comment

Filed under internet, science

People Who Don’t “Get” Transparency or Positive-Sum Games

A recent research paper resurrects the idea of “security by obscurity.” A notion I’ve been fighting for decades. (e.g. in The Transparent Society). The basic idea is that you will better thrive by hiding information from your foes/competitors/rivals, even if this accelerates an arms race of obscurity and spying, creating a secular trend towards ever-reduced transparency.

Now, I want to talk about a special case in which my objection – still strong in principle – is softened by pragmatic arguments. In Gaming Security through Obscurity, Dusko Pavlovic contends that you can improve system security by making it hard to find out how the system works. This concept is familiar to computer programmers:  On I, Programmer, Alex Armstrong explains, “Your code can be disassembled and decompiled and in many cases, a well written program is much easier to reverse engineer. The solution generally adopted is not to write a bad program but to use “obfuscation” as a final step. That is, take a good clear program and perform a range of syntactic transformations on it to make it a mess that is so much more difficult to read and therefore to reverse engineer.”

In cryptography, Kerckhoffs’ Principle holds that a system should remain secure even if everything about it — except the key — is known; Claude Shannon restated it as “The enemy knows the system.” This stands in contrast to security by obscurity. The recent paper by Dusko Pavlovic suggests that security is a game of incomplete information, and that the more you can do to keep your opponent in the dark, the better.
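By contrast, here is Kerckhoffs’ ideal in miniature, using only Python’s standard library: the algorithm (HMAC-SHA256) is completely public, and publishing the code costs the defender nothing, because the only secret is the key:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # the ONLY secret in the whole system

def sign(message: bytes) -> bytes:
    """Tag a message; the algorithm itself is public, per Kerckhoffs."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison, so timing leaks nothing either."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"the enemy knows the system")
print(verify(b"the enemy knows the system", tag))  # True
print(verify(b"a forged message", tag))            # False
```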

Now there’s a lot of misleading discussion about this, so, if you are expecting “Mr. Transparency” be all up in arms over this, you are mistaken.  What is at issue here is fundamentally the question of the ZERO SUM GAME.

(First, look up the concept of zero-sum and positive-sum or win-win games. It is probably the most vital idea you could possibly own in your head, and being able to tell these things apart should be a pass-fail requirement for citizenship. See the sketch below.)
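For the terminally literal-minded, the distinction fits in a few lines of Python. The payoff tables are invented, but the arithmetic is the whole point: in a zero-sum game the players’ payoffs always cancel; in a positive-sum game the total can grow:

```python
# Invented payoff tables: (player A's payoff, player B's payoff) per outcome.
zero_sum = {("attack", "defend"): (+1, -1),
            ("defend", "attack"): (-1, +1)}
positive_sum = {("trade", "trade"): (+3, +3),
                ("trade", "cheat"): (-2, +4)}

def totals(game):
    """Sum both players' payoffs for every possible outcome."""
    return {moves: a + b for moves, (a, b) in game.items()}

print(totals(zero_sum))      # every outcome sums to 0: my gain is your loss
print(totals(positive_sum))  # outcomes can sum above 0: the pie itself grows
```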

Most human beings used to live pretty much zero-sum existences. If you wanted to get ahead in the world, you needed to win points by causing your enemy to lose. This applied to mate-seeking, food-seeking — heck, to almost every level of life. Tribes and societies formed in order to eke out a small surplus that might go to positive-sum activities like irrigation and libraries, but the pyramid-shaped, inheritance-based oligarchies that ruled them made sure there were winners above and losers below. And when it came to human inventiveness, clever craft workers knew: if you discover a better way to do something, keep it secret or you’ll lose every advantage. Why do you think the Baghdad battery, the Antikythera Device, and the wondrous steam engines of Heron all vanished, to be forgotten and lost to progress?

The Enlightenment’s core discovery was the positive-sum game… ways that democracy, markets and science can “float all boats,” so that even those who aren’t top-winners can still see things get better, overall, year after year — leading to the diamond-shaped social structure we discussed in an earlier post (last week), with a vibrant and creative middle class outnumbering the poor.

This dream did not come true by emphasizing cooperation alone, though cooperation is an ingredient.  Just as important is competition, nature’s great locus of innovation and the driver of evolution. But it has to be regulated and carefully tuned. If competition results in a new oligarchy, you get right back to the pyramid again, with topmost cheaters restoring zero-sum thinking, and everybody loses.  Look at 6000 years of history, fer gosh sakes.

One of the most ingenious of these “regulations” — supported by Adam Smith and Ben Franklin, among others — was the notion of intellectual property, or IP. Patents and copyrights were never intended to mean “I own that idea!” No, intellectual property was born entirely as a pragmatic tweak, offering creative people a subsidy in order to draw them into openly sharing their discoveries… so that others might use and improve them, and we get the virtuous cycle of positive-sum improvements: ever-accelerating knowledge, skill and wealth.

Let there be no mistake. That is one of many ways that regulated competition delivers on the promise of markets and Smithian capitalism — vastly and demonstrably better than anything that ever resembled laissez-faire or Randian cannibalism festivals.

Which brings us full circle to Pavlovic’s paper and the storm of simple-minded misinterpretations that are going around. As you’d expect, my initial reaction was “bullshit!” In The Transparent Society I show mountains of evidence that we’re all better off in an increasingly open world. All of our positive-sum Enlightenment “arenas” — democracy, markets, science, etc. — are healthy precisely in proportion to the degree that all participants know what’s going on, so they can make well-informed decisions and choose better products.

Even when it comes to security, we should all be aware of how the dream of Dwight Eisenhower finally came true, after Sputnik, when spy satellites flew around the globe taking pictures… and it did not trigger a third world war.  Rather, Ike’s “Open Skies” helped to prevent war, to calm the arms race, to save us all.

Yet, I willingly accept the validity of Pavlovic’s paper, in the limited context that he chooses. True, a positive-sum game is nearly always better than a zero-sum… or a sick negative-sum game. And true security will only really happen for us all when the world is so awash in light that thieves and oppressors generally get caught and deterrence reigns. Transparency isn’t a naive, utopian dream. It is empowerment of all, so that reciprocal accountability keeps the cycles virtuous. It is the Enlightenment’s core.

But Pavlovic is describing a specialized case: a situation in which things are already decidedly zero-sum. In which your company knows that its competitors cheat. They steal IP, and our Enlightenment civilization is all too often failing to do anything about it — as America and other Western nations fail miserably to protect Western IP… the goose that lays the world’s golden eggs.

Reciprocity has broken down and with IP no longer protected, innovators must fall back on the old ways. Concealment. Trade secrets. Squirreling away your tricks so the other guy won’t get to copy them.

Overall, that is the world we’re heading back toward, for a number of reasons.  Because certain countries and companies are rampant intellectual property thieves. Because Western leaders won’t act to stop it. Because some western mystics and idiotic “legal scholars” actually believe that IP is based on principles of palpable ownership, and thus secrecy is somehow equivalent to patent declaration, instead of its diametric opposite!

And because life is still life. Even in the context of a positive-sum civilization, you and your company may find yourselves in a zero or negative sum situation, needing to protect — with “obscurity” — the code tricks that you feel you have a right to benefit from.

Let there be no doubt, the prescription is a nasty and ugly one. Deliberately flood your own code with so much spurious junk that a competitor will be rendered clueless and unable to reverse engineer it? This may be an effective short term tactic, but it will also result in — well — junk-filled code!  Harder for YOU to engineer and repair. Or to benefit from crowd-sourced improvements. Sluggish and inherently inefficient.

This is a different matter than slipping in Tattler Code… segments that reveal if a competitor stole or copied from you. Even segments that go online and tattle when the code is run! These are clever, legal, and involve transparency of a sort — a searing light of accountability that works a lot like an immune system.
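A benign Python sketch of the tattler idea: embed a unique, functionally inert marker in your code, then scan any suspect artifact for it. The marker value here is, of course, invented:

```python
WATERMARK = "ACME-TRACER-7f3a9c"  # hypothetical per-release marker, does nothing

def looks_copied(suspect_bytes: bytes) -> bool:
    """True if our unique marker turns up in someone else's artifact."""
    return WATERMARK.encode() in suspect_bytes

# Demo: scan this very file, which naturally contains its own marker.
with open(__file__, "rb") as f:
    print(looks_copied(f.read()))  # True
```

Unlike obfuscation, a watermark costs nothing in code quality; it merely waits to testify.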

I could go on. But swamped, I’ll leave it there. Except to add this:

Fight for a civilization that becomes more filled with light, wherein competition isn’t cut-throat, but simply the way that people like you and me and Steve Jobs get the best out of ourselves! I push transparency as the most-frequently applicable medicine. But even more important is to stay calm, and understand what we should defend.

And defend it.

====

Remember – I’ll be holding an open house meet-up in New York City on Monday, October 17, at around 8:30 pm at O’Reilly’s, 21 W 35th St. (upstairs: byo-drinks). An informal gathering of folks who love the future, sci fi or just lots of talk! (If you really like all those things, then check out the Singularity Summit in NYC. I’m speaking on October 16.)

I’ll also be the Guest of Contraflow, the New Orleans science fiction convention, November 4-6. Join us if you’re in the area!

Also, see my updated profile and links collected on xeeme.

11 Comments

Filed under internet

The Steve Jobs Experiment: Outcomes Report

First, some announcements: I’ll be holding an open house meet-up in New York City on Monday, October 17, around 8:30 pm at O’Reilly’s, 21 W 35th St. (upstairs: byo-drinks). An informal gathering of folks who love the future, sci fi or just lots of talk! (If you really like all those things, then check out the Singularity Summit; I’m speaking on October 16.)

I’ll also be Author Guest of Contraflow, the New Orleans science fiction convention on November 4-6.


Steve Jobs had a knack for seeing the adult in a child — the grown-up product that an infant idea could become. Looking at the toy computers that hobbyists soldered in their 1970s garages, he envisioned people like you and me wanting vastly more capable versions on our desks. Looking back, you’d think it was obvious… which is pretty much the whole point about Steve Jobs’s genius.

For example, I wrote my first novel with a typewriter and edited it using a pair of scissors. I cut and pasted with lots of actual tape and glue. When I saw what an Apple II could accomplish, I bought one with a serial number in five digits, and I’ve used its Apple successors ever since. They simply made life better.

The Xerox Corporation was a great American success story, but it never made this mental leap to thinking about people as customers — thereby ignoring the market for home copiers. Xerox also snubbed its own innovators in Palo Alto, who wanted to turn the computer screen into a landscape, using a “mouse” to simply point at what you wanted. Executives at Xerox viewed this as a toy. Steve Jobs took one look at those early concepts and thought: “that’s how our ancestors’ brains worked on the savannah, and it’s how to turn every human being into a computer-user.”

Even people who prefer Windows should still thank Steve for saving these inventions. He gave them to us all.

Early Macintosh computers offered a little program called HyperCard. It came with a few simple demo games, meant to illustrate the notion of click-linking from page to page. This was one of Steve’s worst marketing mistakes. He thought the concept of hypertext was so obvious that the world would see those little demos and run with it! But the same derisive sneers dismissed it as a “toy”… till Tim Berners-Lee invented the hypertext-based World Wide Web and it all became retroactively obvious.

By then, alas, Jobs wasn’t in much of a position to insist, having been cast into the wilderness by his own company.

So he built Pixar… giving us TOY STORY and other delights. Nearly all of Steve’s financial wealth came from Pixar, not Apple; he sold all his Apple stock after leaving the company. Kind of like Nikola Tesla refusing stock in alternating current. If he had kept that stock… or milked Apple later on for huge compensation packages… Jobs could have been in the top tier of the world’s richest men, instead of a mere single-digit billionaire.

Instead, his passion was to make all of us richer, in the sense of the true positive sum game, when capitalism works. When millions of lives get better because we got insanely good products that were worth many times what we paid for them and that helped us be more productive in our own ways.  Alas, if only all of capitalism worked that way, as it’s supposed to.

Jobs never seemed as blatantly philanthropic as some — we’ll see how that turns out. And heaven forbid that most families or nations should be run in the imperial manner that, in some great companies like Apple, can get big things done, pursuing the virtue of exquisite product design above all else.

But those are minor cavils. What we ultimately see, in this bona fide American genius, is a light showing us the path out of America’s troubles. Do what we’re good at.  Innovate! Be thrilled by science and the infant technologies that may grow mighty tomorrow. Nurture the inner-tinkerer that all the world sees in us, and has ever since the nation’s beginning. Defend intellectual property! But stimulate others so much that nobody resents it. Make money not by financial parasitism but delivering better goods and services. (Duh?)

Help us all to both compete and cooperate with each other better than ever before.

5 Comments

Filed under internet, media

Transparency Wars Continue

The Silicon Valley Metronews features my article “World Cyberwar and the Inevitability of Radical Transparency.” The topic is both ongoing and ever-new. I discuss how WikiLeaks ignited the first international cyber war — and how pro-business laws enacted to promote the growth of Silicon Valley’s digital media and technology companies inadvertently nurtured transformation activists shaking up and toppling governments around the world.

With this fresh look at the cyber wars, I zero in on several main examples… e.g. the surprising ways that Julian Assange helped U.S. foreign policy far more than he harmed it… plus the ongoing battle between police and citizens armed with cameras… and much more.

Never before have so many people been empowered with practical tools of transparency. Beyond access to instantly searchable information from around the world, nearly all of us now carry in our pockets a device that can take still photographs and video, then transmit the images anywhere. Will the growing power of elites to peer down at us—surveillance—ultimately be trumped by a rapidly augmenting ability of citizens to look back at those in power—or “sousveillance”?

=== One-sided Transparency ===

H.P. and Cisco Systems Inc. will help China build a massive surveillance network in the city of Chongqing — aimed at crime prevention. The technological part of it is impressive, as it will “cover a half-million intersections, neighborhoods and parks over nearly 400 square miles, an area more than 25% larger than New York City.” This extensive surveillance system may eventually deploy as many as 500,000 cameras, far more than the 8,000 to 10,000 surveillance cameras estimated to exist in a city like New York. Yet note that few of those New York cameras report to a centralized system.

The anti-crime benefits of such systems might be achievable without tyranny — if citizens were equally empowered to look back at the mighty, via “sousveillance.” But such reciprocality is not likely in the near Chinese future. Human rights activists worry that such extensive surveillance will inevitably be used for other purposes — to target political protests.

Are companies responsible for how their products are used? In a recent Wall Street Journal poll, over half of respondents said that U.S. companies should be allowed to sell high-tech surveillance gear to China. Meanwhile, H.P. executive Todd Bradley dodged the issue, commenting that “It’s not my job to really understand what they’re going to use it for.”

Meanwhile, in New York City, there are 238 license plate readers. Many of these are mobile devices, mounted on the backs of patrol cars. Others are set up at fixed posts at bridges, tunnels and highways across the city. These license plate readers have helped track down suspects in major crimes; they have also provided clues in homicide cases and other serious crimes. And they have been used for lesser offenses, such as identifying and locating stolen cars. But there are concerns: the police have established an extensive database tracking citizens’ driving patterns. How long is this data maintained, and who can access the information?

Meanwhile, Cracked gives us six legit ways cops can screw us over… including the fact that asset forfeiture is factored into their budgets. In other words, if cops weren’t allowed to seize our stuff and sell it, even without proof of a crime, they’d suffer budget shortfalls.

====Looking toward the Far Future====

NASA’s Hundred Year Starship and the Yucca Mountain nuclear depository are two examples of “deep time” thinking — casting our eyes over the next horizon, anticipating the needs of our descendants. While top priority must go to freedom, progress, full brains for all kids and saving the planet — some ambitious, forward-looking innovation and commitment to our grandchildren must be on the agenda.

====More News====

Japanese scientists announced that massive deposits of the 17 rare-earth elements used to produce hybrid cars, laptops, smartphones and other high-tech devices can be extracted from nodules on the floor of the Pacific Ocean near Hawaii. Nodules were first touted as setting off a sub-sea boom in sci fi stories way back in the 1950s. I certainly spoke of this in more detail… in EARTH (1989). But will it be economic to retrieve these resources?

For real? Israel will be using new technology to get oil from oil shale in the Shfela Basin — an estimated 250 billion barrels, vs. the Saudis’ 260 billion barrels. This article is clearly biased and somewhat polemically exaggerated – and conveniently ignores Rupert Murdoch’s deep bed-buddyness with certain petro princes. Still, if it is even half true….

The Educational Value of Creative Disobedience: Read this article in Scientific American by Andrea Kuszewski about teaching children how to solve problems creatively, instead of flooding them with memorized information.  It really is worth your time.

A lovely portrayal of the scaling of the universe, ranging from moons and planets to galaxies and clusters. Play this at full-screen. Enjoy the beauty and majesty of it all.

Are the Japanese making human clones? Actually, just putting your face on a robot!

Finally, I’ve placed several of my novellas on Kindle: Thor Meets Captain America, The Loom of Thessaly, Tank Farm Dynamo, and Stones of Significance.

3 Comments

Filed under future, internet, media, politics, technology

Milestones Leading up to the Good Singularity

The Technological Singularity – a quasi-mythical apotheosis that some foresee in our near, or very-near, future. A transition when our skill, knowledge and immense computing power will increase exponentially — enabling true Artificial Intelligence, and transforming humans into… well… godlike beings. Can we even begin to imagine what life would look like after this?

An excellent article by Joel Falconer, on The Next Web, cites futurist Ray Kurzweil’s predictions of the Singularity, along with my warning about iffy far-range forecasting: “How can models created within an earlier, cruder system, properly simulate & predict the behavior of a later, vastly more complex system?” 

If you want an even broader perspective, try my noted introduction, “Singularities and Nightmares: Extremes of Optimism and Pessimism about the Human Future.” For there are certainly risks along the way — one being renunciation: people rejecting the notion of progress via science and technology.

How about portrayals in fiction? I mean, other than clichés about mega-AI gone berserk, trying to flatten us? Now, from a writer’s perspective, the Singularity presents a problem. One can write stories leading up to the Singularity, about problems like rebellious AI, or about heroic techies paving the way to bright horizons. But how do you write a tale set AFTER the singularity has happened – the good version – and we’ve all become gods? Heh. Never dare me! That’s the topic of my novella, Stones of Significance.
Ah, but not all techies think the Singularity will be cool. One chilling scenario — serving our new machine overlords: Apple co-founder Steve Wozniak speculates that humans may become pets for our new robot overlords: “We’re already creating the superior beings, I think we lost the battle to the machines long ago. We’re going to become the pets, the dogs of the house.”

== Singularity related miscellany! ==

Creepy… but probably helpful… new teaching tool! Do you want to play the violin, but can’t be bothered to learn how? Then strap on PossessedHand, an electric finger stimulator that makes your fingers move with no input from your own brain. Developed by scientists at Tokyo University in conjunction with Sony, it consists of a pair of wristbands that deliver mild electrical stimuli directly to the muscles that control your fingers — something normally done by your own brain.
Or do cyborgs already walk among us? “Cyborg is your grandma with a hearing aid, her replacement hip, and anyone who runs around with one of those Bluetooth in-ear headsets,” says Kosta Grammatis, an engineer with the EyeBorg Project.

Author Michael Chorost, in World Wide Mind: The Coming Integration of Humanity, Machines and the Internet, envisions a seamless interface of humans with machines in the near future. Wearable computers, implanted chips, neural interfaces and prosthetic limbs will be common occurrences. But will this lead to a world wide mind — a type of collective consciousness?
And how do we distinguish mind from machine? In The Atlantic, Brian Christian describes his experience participating in the annual Turing Test competition, which confers the Loebner Prize on the winner. A panel of judges poses questions to unseen answerers – one computer, one human – and attempts to discern which is which, in essence looking for the Most Human Computer. Christian, however, won the Most Human Human award.

Ray Kurzweil discusses the significance of IBM’s Watson computer  — and how this relates to the Turing Test.

Hive Mind: Mimicking the collective behavior of ants and bees is one approach to modeling artificial intelligence. Groups of ants are good at solving problems, i.e. finding the shortest route to a food source. Computer algorithms based upon this type of swarm intelligence have proved useful, particularly in solving logistics problems. 
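Here’s a toy Python sketch of the ant-colony flavor (not a full ACO implementation): ants pick among routes in proportion to pheromone, pheromone evaporates over time, and shorter routes earn stronger deposits per trip, so the colony converges on the short path. The route lengths are invented:

```python
import random

routes = {"short": 4.0, "long": 9.0}        # invented path lengths to the "food"
pheromone = {"short": 1.0, "long": 1.0}     # both routes start equally attractive

for _ in range(200):
    # Each ant picks a route with probability proportional to its pheromone.
    pick = random.random() * sum(pheromone.values())
    choice = "short" if pick < pheromone["short"] else "long"
    for route in pheromone:
        pheromone[route] *= 0.98            # evaporation forgets stale information
    pheromone[choice] += 1.0 / routes[choice]  # shorter trips deposit more per step

print({k: round(v, 2) for k, v in pheromone.items()})  # "short" now dominates
```

No single ant knows anything; the intelligence lives entirely in the trail.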

Finally, how would we begin to define a universal intelligence — and how would we apply it to humans, animals, machines, or even extraterrestrials we may encounter?

== How to Manage a Flood of Information ==

In the last decade, a tsunami of data and information has been created by twenty-first-century science, which now generates huge databases: the human genome, astronomical sky surveys, environmental monitoring of Earth’s ecosystems, the Large Hadron Collider, to name a few. James Gleick’s The Information: A History, a Theory, a Flood discusses how we can avoid drowning in this sea of data and begin to make sense of the world.
Kevin Kelly discusses his book What Technology Wants: “We are moving from being people of the book… to people of the screen.” These screens will track your eye movements, noting where you focus your attention, and will adapt to you. Our books will soon be looking back at us.

All books will be linked together, with hyper-links of the sort I envisioned in my novel, Earth. Reading will be more of a shared, communal activity. The shift will continue toward accessing rather than owning information, as we live ever more in a flux of real-time streaming data.

Google looks to your previous queries (and the clicks that follow) and refines its search results accordingly…

…Such selectivity may eventually trap us inside our own “information cocoons,” as the legal scholar Cass Sunstein put it in his book Republic.com 2.0. He posited that this could be one of the Internet’s most pernicious effects on the public sphere. The Filter Bubble, Eli Pariser’s important new inquiry into the dangers of excessive personalization, advances a similar argument. But while Sunstein worried that citizens would deliberately use technology to over-customize what they read, Pariser, the board president of the political advocacy group MoveOn.org, worries that technology companies are already silently doing this for us. As a result, he writes, “personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”
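The mechanism Pariser worries about is almost embarrassingly simple to build. A toy Python sketch (topics and headlines invented) shows how re-ranking by past clicks narrows what a user sees with every iteration:

```python
from collections import Counter

clicks = Counter()  # the user's click history, tallied by topic

def rank(results):
    """Boost each result by how often its topic was clicked before."""
    return sorted(results, key=lambda r: clicks[r["topic"]], reverse=True)

results = [{"title": "Budget fight heats up", "topic": "politics"},
           {"title": "New exoplanet found", "topic": "science"}]

for _ in range(3):
    top = rank(results)[0]      # the user clicks whatever is ranked first...
    clicks[top["topic"]] += 1   # ...which reinforces that topic's ranking
print(rank(results))            # the bubble: one topic now always wins
```

Real recommender systems are far more sophisticated, but the feedback loop — clicks shape ranking, ranking shapes clicks — is exactly this one.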

Very entertaining and informative… and the last five minutes are scarier ’n’ shit! Jesse Schell’s mind-blowing talk on the future of games (from DICE 2010)… describing how game design invades the real world… is just astounding. Especially the creepy, inspiring, worrisome last five minutes. Someone turn this into a sci fi story! (Actually, some eerily parallel things were already in my new novel, EXISTENCE. You’ll see! In 2012.)

Enough to keep you busy a while? Hey, I am finally finishing a great Big Brin Book… a novel more sprawling and ambitious than EARTH… entitled EXISTENCE. Back to work.

1 Comment

Filed under future, internet, science, technology