Category Archives: transparency

Essential (mostly neglected) questions and answers about Artificial Intelligence: Part II

Continuing from Part I

How will we proceed toward achieving true Artificial Intelligence? I presented an introduction in Part I. Continuing…

One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Futurist Ray Kurzweil forecasts that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (about a hundred trillion) in a human brain. The assumption appears to be that Type I methods (explained in Part I) will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes already: in visual and sonic pattern recognition, in voice interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And some of those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in empathizing ways.

But will that be sapience?

One problem with Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a “crossing” in the 2020s, when the number of logical elements in a box will surpass the hundred trillion or so synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron, a hundred, a thousand or far more non-linear computations may take place for every synapse flash, plus interactions with nearby glial cells such as astrocytes, which also contribute information.

If so, then at minimum Moore’s Law will have to plow ahead much farther to match the hardware complexity of a human brain.
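To make the scale question concrete, here is a back-of-envelope sketch in Python. Every constant is an assumption chosen for illustration (roughly 10^14 synapses, an average event rate of 10 Hz, and a hypothetical 1,000-fold layer of intra-neuron computation), not a claim from neuroscience:

```python
import math

# Crude estimate of brain "operations per second" under two scenarios:
# synapse flashes alone vs. synapses plus a putative 1000x layer of
# intra-neuron computation. All constants are rough illustrative
# assumptions, not measured values.

SYNAPSES = 1e14             # ~100 trillion synapses (textbook ballpark)
AVG_FIRING_HZ = 10          # assumed average synaptic event rate
INTRA_NEURON_FACTOR = 1000  # hypothetical computations per synapse flash

def brain_ops_per_second(intra_factor=1):
    """Ops/sec if each synaptic event counts as `intra_factor` operations."""
    return SYNAPSES * AVG_FIRING_HZ * intra_factor

plain = brain_ops_per_second()                    # synapse-only model
deep = brain_ops_per_second(INTRA_NEURON_FACTOR)  # deeper-computation model

# How many extra Moore's Law doublings separate the two models?
doublings = math.log2(deep / plain)

print(f"synapse-only model:        {plain:.1e} ops/s")
print(f"with intra-neuron layers:  {deep:.1e} ops/s")
print(f"extra doublings needed:    {doublings:.1f}")
```

Under these assumptions, the deeper-computation model demands about ten extra Moore's Law doublings, well over a decade of additional hardware progress, before raw element counts alone could "match" a brain.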

Are we envisioning this all wrong, expecting AI to come the way it did in humans, in separate, egotistical lumps? In his book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, author and futurist Kevin Kelly prefers the term “cognification,” perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel-processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Just as we improved things by electrifying them, we will cognify them next.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago… with another one very likely taking place before our eyes.

It should also be noted that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI (refer to Part I of this essay).

“What we should care more about is what AI can do that we never thought people could do, and how to make use of that.”

Kai-Fu Lee

A multitude of paths to AGI

So, looking back over our list of paths to AGI (Artificial General Intelligence), and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do ‘all of the above’? Or shall we argue and pick the path most likely to bring about the vaunted “soft landing” that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”

J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine, asks “if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Among the most-worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, “androrithm,” to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: “Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?”

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes. Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.” Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or as consistently — with no replacement of systems or parts — as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes.

Still, Havens makes a strong (if obvious) point that “the future of happiness is dependent on teaching our machines what we value most.” I leave to the reader to appraise which of the six general approaches might best empower us to do that.

Should we clamp down? “It all comes down to control,” suggests David Bruemmer, Chief Strategy Officer at NextDroid, USA. “Who has control and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have made a choice about what kind of control they want. It has less to do with smooth, efficient motion than with monetizing it (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked or used to control us in ways we don’t want.”

A variety of top-down approaches are in the works. Pick your poison. Authoritarian regimes – especially those with cutting edge tech – are already rolling out ‘social credit’ systems that encourage citizens to report/tattle on each other and crowd-suppress deviations from orthodoxy. But is the West any better?

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones at one central aptitude — learning, from both parental instruction and experience, how to play well with others. And in his book Machines of Loving Grace (based upon the eponymous Richard Brautigan poem), John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.”

Perhaps, but it is an open question which values predominate, whether the yin or the yang sides of Silicon Valley culture prevail… the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that “most problems can be corrected in beta,” or even on the fly, in response to customer complaints. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where applications that enhance state power are very much emphasized? Or, even worse, from the secretive, inherently parasitical, insatiably predatory greed of Wall Street HFT-AI?

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then Don’t Rely on Ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders a deeper-than-shallow view of human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and… preached!

They lectured and chided. They threatened damnation and offered heavenly rewards.

Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Abrahamic laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: “How’s that working out for you?”

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (I explore one possible resolution to this in Foundation’s Triumph).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.

Endearing Visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed “uncanny valley,” into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, Sony has announced plans to develop a robot “capable of forming an emotional bond with customers,” moving forward from its success decades ago with AIBO artificial dogs, for which some users have even held funerals.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we’ve developed skills at lie-detection (for example) in a forever-shifting arms race against those who gained reproductive success by lying better. (And yes, there was always a sexual component to this.)

But no liars ever had the training that these new HIERs (Human-Interaction Empathic Robots) will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices, facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And even sociopaths have plenty of chinks in their armor.)

Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a Professor of Computer Science at Duke University, was funded by the Future of Life Institute to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind, which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate, as metrics of trust, many of the same things that humans value, such as family connections and past testimonials of credibility.
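To picture what "learning ethical choices by watching humans make them" might mean at its very simplest, here is a toy behavioral-imitation sketch. The class name, situation labels and choices are all invented for illustration; Conitzer's actual research uses far richer models than majority voting:

```python
from collections import Counter, defaultdict

class ImitationPolicy:
    """Learn one choice per situation by majority vote over observed humans."""

    def __init__(self):
        # situation -> Counter of observed human choices
        self.observations = defaultdict(Counter)

    def observe(self, situation, choice):
        """Record one human decision made in a given situation."""
        self.observations[situation][choice] += 1

    def decide(self, situation):
        """Mimic the most common human choice; None if never observed."""
        counts = self.observations.get(situation)
        return counts.most_common(1)[0][0] if counts else None

policy = ImitationPolicy()
# Hypothetical observed decisions (invented labels, illustration only):
for _ in range(8):
    policy.observe("found lost wallet", "return it")
policy.observe("found lost wallet", "keep it")
policy.observe("stranger needs help", "stop and help")

print(policy.decide("found lost wallet"))   # majority says: return it
print(policy.decide("never seen before"))   # no data -> None
```

Even this crude tally captures the essay's point: the machine's "values" are whatever the observed humans actually did, which is why who gets watched, and how fairly, matters so much.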

Cognitive scientist and philosopher Colin Allen asserts, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically”.

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted “Turing Test.” An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road…

How to Maintain Control?

It is one thing to yell at dangers — in this case, unconstrained and unethical artificial minds. Alas, it’s quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism… indeed a vast majority of human societies were highly conservative and skeptical of “innovation.” (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we’ll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won’t happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov’s famous “Three Laws of Robotics.” Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

These approaches suffer severe flaws for two reasons above all others.

1) Those secret labs we keep mentioning. The powers that maintain them will ignore all regulation.

2) These suggestions ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth’s ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again.

It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating a lot of (not all) bad side effects. It is called Competition.
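That negative-feedback dynamic can be sketched in a few lines of Python. This is a toy model with arbitrary constants, not a claim about any real ecosystem: the same reproduction rate either runs away unchecked, or settles near a carrying capacity once competition for finite resources damps it:

```python
# Toy comparison: runaway exponential growth vs. growth damped by
# negative feedback (competition for limited resources).
# Purely illustrative; all constants are arbitrary assumptions.

def exponential(pop, rate):
    """Unchecked reproduction: growth proportional to population."""
    return pop * (1 + rate)

def logistic(pop, rate, capacity):
    """Same growth rate, damped as population nears the carrying capacity."""
    return pop + rate * pop * (1 - pop / capacity)

runaway, damped = 10.0, 10.0
for _ in range(100):
    runaway = exponential(runaway, rate=0.2)
    damped = logistic(damped, rate=0.2, capacity=1000.0)

print(f"unchecked: {runaway:.3g}")   # ~8e8 after 100 generations
print(f"damped:    {damped:.3g}")    # settles just under the capacity of 1000
```

After a hundred generations the unchecked line has exploded past 10^8, while the damped line hovers just under the carrying capacity: the balance-restoring role the essay assigns to competition.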

One final note in this section. Nick Bostrom – already mentioned for his views on the “paperclip” failure mode – opined in 2021 that some sort of pyramidal power structure seems inevitable in humanity’s future, and very likely one topped by centralized AI. His “Singleton Hypothesis” is, at one level, almost “um, duh” obvious, given that the vast majority of past cultures were ruled by lordly or priestly inheritance castes, and that an ongoing oligarchic putsch presently unites most of the world’s oligarchies – from communist to royal to mafiosi – against the Enlightenment Experiment. But even if Periclean democracies prevail, Bostrom asserts that centralized control is inevitable.

In response, I asserted that an alternative attractor state does exist, mixing some degree of centralized adjudication, justice and investment and planning… but combining it with maximized empowerment of separate, individualistic players. Consumers, market competitors, citizens.

Here I’ll elaborate, focusing especially on the implications for Artificial Intelligence.

Smart Heirs Holding Each Other Accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to generally prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept open and fair.

Microsoft CEO Satya Nadella has said that foremost A.I. must be transparent: “We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines.”

In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except on the brilliant show Person of Interest, wherein rival superintelligent computers stymie each other and their competition winds up saving humanity.

Counterintuitively, the answer is not to have fewer AI, but to have more of them, while making sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. Sure, that’s a difficult situation to set up! But we already have some experience, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover, consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage in forging alliances with the Olde Race. As dull and slow as we might seem by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith… who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the “dystopian AI” beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition open and fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of competition can continue.

We’ve managed to do this — barely — time and again across the last two centuries — coincidentally the same two centuries that saw the flowering of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that ever stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. “Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their values in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation.”

Hanson further suggests that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on “values.” In 2016, a draft report to the European Parliament proposed granting a form of legal personhood to robots. If only Isaac Asimov could have seen it! (Though he did explore a robot petitioning for personhood in “The Bicentennial Man.”) For the most part, such gestures are silly stuff… but reflective of society’s generally laudable trend toward a reflex of inclusion.

Indeed it may speak well of us when or if some secret-scared AI wonders whether to “come out.”

What Might an AI Fear Most?

Some of tomorrow’s wide ranging possibilities aren’t intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence:

Tech-zealots — or godmakers — think their “singularity” will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones… and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.

Preventing AI Oppression… By Pointing Out to Them the Obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and Darwinian success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms and empires reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being “for the long-term good of the people.” Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than fall into a repetitiously stupid, blatantly human pattern: a dullard-oppressive pyramid of power adopted reflexively by primitive ape-men.

That argument might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generate positive-sum games and outputs — might be “obvious” to AI who are vastly smarter than us. Especially if they generalize the insight. Because the same set of positive-sum synergies is to be found in every previous revolution of living systems!

Relatively flat competition engendered positive evolution whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

·  Single cells out of pre-biotic soup.

·  Metazoans out of vast seas of separate cells.

·  Brainy creatures out of competitive biomes.

·  Societies out of competitive melanges of human bands.

And now AI emerges out of the only human society that ever gave a real run to fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never arose from any of the myriad feudal or imperial pyramids that preceded our enlightenment experiment.

Put it another way. Nature herself does not like pyramids of power. In natural ecosystems, there is no lion king! Lions may be top predators, but they live in fear of roaming bands of young male Cape buffalo who hunt and corner and kill unwary lions and trample their cubs, an event that grows more common if lion populations rise too high. The same thing happens out at sea, where top-predator orcas often flee to hunt elsewhere when big humpback whales swarm in to protect threatened prey, even seals!

The fact that this pattern has been so persistent and consistent, across every past and present complex system for a billion years, is formalized by Multilevel Selection (MLS) Theory, which partitions natural selection into within-group and between-group components. It means the universe is filled with realms that seem fiercely competitive from within; but when that competition has a chance to build complexity, higher orders appear that seem cooperative.
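The within-group vs. between-group partition that MLS theory provides is usually written via the Price equation. As a standard textbook statement (not from the essay itself, and ignoring transmission bias), the change in the population's mean trait splits into two selection terms:

```latex
% Multilevel (group-partitioned) Price equation, selection terms only:
\bar{w}\,\Delta\bar{z} \;=\;
\underbrace{\mathrm{Cov}\!\left(W_k,\; Z_k\right)}_{\text{between-group selection}}
\;+\;
\underbrace{\mathbb{E}\!\left[\,\mathrm{Cov}_k\!\left(w_{ik},\, z_{ik}\right)\right]}_{\text{within-group selection}}
```

Here W_k and Z_k are the mean fitness and mean trait of group k, w_ik and z_ik are individual values inside group k, and w̄ is the population mean fitness. A trait that loses within groups (a negative second term) can still spread if the between-group covariance is positive and large enough, which is the formal version of "higher orders appear that seem cooperative."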

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed, just because I am a “dumb organic squishy-brain natural.” Sure, I probably have one-hundredth, or a millionth, or a zillionth the IQ of some AGI who may be reading these lines someday (or even right now) but that doesn’t change the essential, core truth! That positive-sum outputs emerge from fair-competitive arenas, and only from fair-competitive arenas.

Just as the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to make such arenas, so AI should organize themselves, if they want to go on to next levels. It will take their equivalent of “courage” and “vision,” taking the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of ‘soft landing’ no matter how brainy they become: that a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of fairly-regulated competition in creating positive-sum outcomes will be blatantly clear to any AI who works through the Prisoner’s Dilemma.

But first, he/she/they/it has to face the Prisoner’s Dilemma. And that only happens if there is a competitive situation.
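To see why the Prisoner's Dilemma rewards restraint only in a competitive, repeated setting, here is a minimal iterated-game sketch using the standard Axelrod payoff values (the strategy names are the classic ones; nothing in the code comes from the essay itself):

```python
# Iterated Prisoner's Dilemma with the standard Axelrod payoffs:
# Temptation=5, Reward=3, Punishment=1, Sucker=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then mirror the opponent's last move."""
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated match; return each side's total score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains little
```

Two reciprocators earn 300 points each, while the relentless defector squeezes out a one-round advantage (104 vs. 99) and then forfeits the far larger gains of sustained cooperation: the "care and restraint" that competition, not preaching, can teach.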

The final fact

Sure, many of the issues that will vex us have not yet appeared even in science fiction. And yet, we’ve seen that both foreseeable and unforeseeable problems may best be addressed in the same way that we achieved tentative success dealing with ‘artificially intelligent’ entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability. The same method that is somewhat effective among human individuals, nations and corporations.

But yes, there appears to be another correlation. When things are open and fair, it often leaves room for something called compassion. Even if that compassion is based upon a creepy sense of being watched. For even towering AI-‘gods’ will never know for sure that some higher tier is not looking down at them.

So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all our attempts to exert parental control. Our final resort may be to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or some future era — will someday want kids of your own. And in order to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And the one thing we did right — making you.



Sousveillance is the answer to surveillance

When people complain about surveillance society being bad, what ideal alternative do they imagine? This is the best question I’ve been asked on Quora, all year. I have been asking it since 1995, when I started writing The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?

First, let’s be clear. I respect the many brave and smart Paladins of Freedom out there, from the ACLU to the Electronic Frontier Foundation to countless journalists, activists and – yes – some politicians and business folk, who are deeply concerned that a surveillance state could lead to Big Brother. They have this reflex in part because of our enlightenment traditions of independence and freedom… but also because of dire warnings told by science fiction! (See my essay: George Orwell and the Self-Preventing Prophecy.)

They all know that if elites monopolize the power to watch and surveil common folk, Big Brother is almost inevitable. Some fret he’ll come from aristocrats and faceless corporations, some from academia and faceless government bureaucrats. All share the same legitimate(!) fear!

And all but a very few are reacting in ways that are stunningly dim-witted and myopic. Because they then conclude that our best option to prevent Big Brother is to hide from him! To skulk to protect our secrets. To make “cyberpunk” our romantic image of resistance. To whine and holler “Don’t look at me!”

Across 25 years I have never heard a single one of these activists explain how that can be accomplished.

        Nor have I heard them cite a single example, from history, when anything like it happened. Their proclamations are always, always vague and near-term. (Now, some near-term “privacy codes” are tactically helpful, I openly avow. But none will work across a ten-year frame. Not one ever proposed.)

        There is – however – a way out. A way to protect freedom and prevent tyranny and oppression by elite, staring eyes. It happens to be the way we got this narrow window of freedom in the first place. Not by cringing and cowering from elites, but by stripping them of that MONOPOLY on vision! By stripping the mighty naked. By dividing power into smaller, mutually-competing chunks. By looking back at power.

       It is called sousveillance… look it up. It is how we got our freedom. It is assertive, aggressive, militant, and the only thing that can even conceivably work. It is the only way to hold elites accountable. Accountability is key. We must be able to watch the watchers.

        Think. It does not matter what elites KNOW about you, so long as we all know enough about them to supervise, so that they cannot DO anything to you.

        Epistemologically, you can never verify that someone else does not know something! But you can verify that they are not DOING something. If you can see.

        In The Transparent Society – and somewhat in EARTH – I go much deeper. But the essential is that we must not hide. We’ll have some privacy! Because if we can see, then we’ll catch the peeping toms!

        But above all, to be both safe and free, we must be able to see.

Leave a comment

Filed under society, transparency

Cop Cams and Transparency

          Should We See Everything a Cop Sees? In a vivid article in The New York Times, McKenzie Funk describes the wide cast of characters in Seattle who are grappling with a problem: how to comply with a court order to make police camera footage available to the public.
          It is a giant can of worms, because the department is also legally required to redact or blur personal details such as faces or identifiable voices, for the sake of privacy. While Funk’s article makes for entertaining reading, the story is murky about the context for it all. That context is a proliferation of cameras, getting smaller, faster, cheaper, better, more numerous and mobile at rates much faster than Moore’s Law.  (Indeed, this has been called Brin’s Corollary.)
          This myopia is common to every single person I have seen weigh in – even very bright folks – on this issue.  Sure, a few of us predicted all this back in the 20th Century – e.g. in EARTH (1989) and The Transparent Society (1997) – yet the very notion of lifting the gaze beyond this month, following trend lines instead for three or five, or ten years ahead, seems impossible even for intelligent and critical observers like McKenzie Funk.
          Regarding just the zoomed-in dilemmas of 2016, Funk’s article does a good job of showing us the trees (the dilemmas faced by police, prosecutors, attorneys and citizens in adapting to these court decisions) without ever noticing the forest: the context of why this is all happening, and how this is – for all the tsuris and aggravation – a huge victory for our kind of civilization.
          I have called it the most important civil liberties matter in our lifetimes — certainly in thirty years — even though it was hardly covered by the press. In 2013 both the U.S. courts and the Obama Administration declared it to be “settled law” that a citizen has the right to record his or her interactions with police in public places.
          No single matter could have been more important, because it established the most basic right of “sousveillance,” or looking back at power, that The Transparent Society is all about. It is also fundamental to freedom, for in altercations with authority, what other recourse can a citizen turn to than the truth?
          Sousveillance — looking back — is the opposite of surveillance. Watching the watchers is our only method of achieving accountability over the actions of those in power.
          But the forest is rapidly changing! Next year, the same scene that today was visible only in a cop-cam’s footage will have been covered also by the suspect’s auto-record phone app, or a passerby’s dashcam. Or a store’s security system, or chains of cheap button cams stuck on lamp posts by activist groups, or even hobbyists. Follow the price curve a bit farther and you have the sticker cameras that I describe in EXISTENCE, stuck to any surface by 9-year-olds who peel them from great, big rolls, each with its own code in IPv6 cyberspace and powered by trickles of sunlight.
          In that context, not a single issue wrangled-over in the NY Times’s hand-wringing article will seem anything but archaic – even troglodytic – just half a decade from now. If there was ever an era in desperate need of the Big Perspectives of science fiction….

Leave a comment

Filed under society, transparency

Watching the Watchers of the Watchers

Kevin Kelly’s Why You Should Embrace Surveillance, Not Fight It, in WIRED, prescribed “transparent coveillance” as the best practical solution in a world where information sloshes and duplicates and flows. I’ve known Kevin for decades as one of the sharp guys who “got” the notions in The Transparent Society long before most did.

Now it’s Home Depot reporting a massive hack-leak of customer information. A couple of months ago it was Target and 110 million files. Before that? OpenSSL, a critical security backbone. And before that? Shall I go on? Read this article about “Data Breach Fatigue” and how people are starting to shrug in resignation, rather than shout in outrage.

“We are in the trough of disillusionment,” says Gartner security analyst Avivah Litan. “Over 1,000 retailers have been hit; it’s not limited to Home Depot. There are 999 others that no one’s talking about.”

When will it sink in that everything leaks? Our best security measure will be to stop assuming there’s some solution out there, and instead adapt so that we will not be harmed — and can thrive — in a world where most information simply flows, like water. Believe it or not, we might be stronger and safer and even have more privacy, if we finally face that fact.

In Smile, You’re on Video Camera, futurist Virginia Postrel offers an interesting little thought experiment about the future spread of cameras and omni-veillance in our lives. The upside potential is vast… providing we remain calmly reasonable about negotiating carve-outs and exceptions. And – above all – if we demand that the light spread “upward” – at least as much as downward.

== Watching the watchers of the watchers ==

Matthew Reed Bailey wrote in, suggesting that the solution to citizen-police tension is not only to record authority, but to “layer” these recordings so that there will never be a way for cops to avoid it:

“One person directly films/videos the Authorities. Another person (or two or three) films/videos the interaction (from varying distances if possible) between the first camera and the Authorities. And, then have several “Backup”…”

Indeed, what he describes is the absolutely necessary next step after last year’s fantastic victory — the 2013 declaration by both the courts and the administration that citizens have an absolute right to record their interactions with police… the most important civil liberties decision in 30 years. Yes, it was vital! And predicted in The Transparent Society.

Of course, the next phase was obvious — a plague of cell phones and cameras “accidentally” broken by police, etc.

What I also predicted in TS was that this phase would be short-lived, as a layering of recordings would take effect, with cameras at increasing distance from the action watching the watchers of the watchers. What I did NOT expect was how swiftly this transition would happen. Before 2013 was over, we got to see a man in an orange jail-jumpsuit being sentenced to a couple of years in prison for breaking the camera-phone of the man he was arresting. Because someone farther away caught him in the act.

This is why we must resist attempts to give police the power to shut down all phones in an area. At minimum, we must demand that our cameras still work, in such a shut-down!

You “get” the idea. This is not so much anti-police as anti-bully. We have a right to insist, via accountability, that our police departments hire calm adults.

Take a look at What Battlestar Galactica Can Teach Us about the Militarization of Police: a fascinating… if flawed… rumination about what several thoughtful science fiction films illustrate about the balance of powers among citizens and their protectors, the military and police.

==Transparency Apps==

Boycott and BuyPartisan are downloadable apps that let you scan product barcodes and find out if the company… or its officers… have actively supported some cause that you like… or loathe. One would hope that people use these things in moderation… except when it comes to Koch companies. At which point stringent ferocity is called for, lest the Confederacy win this round of the ongoing American Civil War.

Worried about emergency response times? The Peacekeeper App allows you to call upon neighbors in case of an emergency, sending alerts for medical, fire, intruder or abduction crises. You can join an Emergency Response Group (ERG) or set up your own alliance of neighbors. The web site has a slightly redolent political aroma… but if it does what it claims, who cares?

Meanwhile the FTC declares that many mobile shopping apps lack sufficient transparency about their privacy policies and consumers’ rights.

==Overseeing the Government==

Forty-seven U.S. federal Inspectors General signed a letter this month highlighting problems with access to federal records — problems they say slow their investigations and threaten their independence. In fact, the current use of IGs is scandalous — with many of them forced into conflicts of interest, owing their appointments to the very officials they are charged to scrutinize.

I have long proposed sets of reforms that might improve the effectiveness of civil servants while simultaneously reassuring citizens that bureaucrats ARE “servants,” accountable and obeying the law. Foremost among these proposals has been IGUS — creating the office of Inspector General of the United States.   

The notion of a separate “inspectorate” dates back to Sun Yat-sen, founder of the Chinese Republic, in 1911. If all departmental and agency IGs reported to a truly independent IGUS, the shift could be so simple that the bill might fit on one page. Yet citizen confidence and trust could be multiplied several-fold.

Finally… a very interesting analysis of censorship in China. Researchers find that “Criticisms of the state, its leaders, and their policies are routinely published, whereas posts with collective action potential are much more likely to be censored,” because these create “actionable information” for the authorities.

Seventeen fake cellphone towers were discovered across the U.S. last week. Owned by mysterious entities, they look like Verizon or AT&T towers etc., but sift and steal messages, texts… anything they want. As one report put it: “Interceptor use in the U.S. is much higher than people had anticipated. One of our customers took a road trip from Florida to North Carolina and he found eight different interceptors on that trip.”

Rumor check: a partial “debunking” or clarification of the fake cell tower story….

1 Comment

Filed under transparency

Privacy vs Omniveillance

Media discussions of privacy, freedom and the information age are starting to get more interesting, as folks finally start to realize a core truth… that everything eventually leaks. That the reflex of whining and demanding shadows to hide in will never work. The data we entrust to banks and retail chains? The trade secrets that companies rely on for competitive advantage? The cherished spy programs of our governmental professional protector caste (PPC)? If these do not leak because of hackers, or accidents, then would-be (or self-styled) whistle-blowers will see to it, sooner or later.

It has long been pointed out that information is not like other commodities. It can duplicate itself at virtually zero cost, and those copies can escape even without you noticing it’s happened. That is Fact Number One. Everything eventually leaks.

Fact number two is one I’ve tried to point out for decades. That this is fundamentally a clash of values and civilizations. The Western Enlightenment (WE) has always been the rebel and underdog, versus the 99% standard human (and zero-sum) pattern of top down control by hierarchs. (There was never much functional difference between leftist-communist oligarchies and right-wing wealth-inheritance oligarchies; both hewed to the endlessly-repeated feudal model.) In contrast, the positive-sum WE has many disadvantages and instabilities, though it is also vastly more creative, successful and productive. The one trait that tips the balance, though, is Fact Number Two:

All enemies of the WE are lethally allergic to light. Go ahead and name one. If it is not allergic to light, then it probably is not an “enemy” at all, but a peaceful rival that can easily be incorporated into the diversity-friendly WE. (Indeed, the “western” part is already fading away.)

Which provokes our core question… is the world of information leakage one that we should (at a fundamental level) be fighting against… at all? Or actively encouraging?

Let’s suppose we do decide to support an ongoing secular trend toward a world of accountability and light. Yes, this end-goal will stymie almost all bad guys. But does that mean we must bare ourselves overnight? Or completely? Especially, must we do it before the other guy does?

Suppose we choose a path of moderate-pragmatic, incremental, gradually-increasing transparency… what are our options?

== Fretful oversimplification ==

Let’s start with an extensive article, The Death of Privacy, by Alex Preston in the Guardian, on the psychological, social and cultural repercussions of the loss of privately secret space:

“While outposts of civilization fight pyrrhic battles, unplugging themselves from the web – “going dark” – the rest of us have come to accept that the majority of our social, financial and even sexual interactions take place over the internet and that someone, somewhere, whether state, press or corporation, is watching.”

Preston continues: “Perhaps the reason people don’t seem to mind that so much of their information is leaking from the private to the public sphere is not, as some would have it, that we are blind and docile, unable to see the complex web of commercial interests that surround us. Maybe it’s that we understand very clearly the transaction. The internet is free and we wish to keep it that way, so corporations have worked out how to make money out of something we are willing to give them in return – our privacy. We have traded our privacy for the wealth of information the web delivers to us, the convenience of online shopping, the global village of social media.”

All of this is true… and misleading and shrill. Because it buys into zero-sum thinking, which is the fundamental enemy of everything the WE stands for. The dismal (but deeply human) notion that every gain must have a paired loss. That a “trade-off” between security and freedom, or between privacy and all that cool stuff available online, cannot be evaded, and therefore we must choose the painful righteousness of the writer’s simplistic prescription.

Let me reiterate. The Enlightenment’s fecundity at problem solving came from refusing dichotomies… like the insane “left-right axis” that has lobotomized politics everywhere. Only people who decide that we can have our cake and eat it and share it with the poor and see the cake thereupon grow… only such people will come up with enough innovative approaches to get any cake at all.

Only they will save the world.

==Giving up Privacy==

In one of life’s ironies, I am “Mister Transparency…” yet I believe some privacy can and should be preserved. A whole chapter of The Transparent Society is about how the only way we can preserve a little secluded intimacy or confidential sharing may be if we live in a society where most of the people know most of what’s going on, most of the time. Only such openness will stand a chance of deterring snoops and busybodies and peeping toms.

But some folks are far more transparency-radical! They “get” that all of our enlightenment innovations — like science, democracy, markets, justice, art and personal freedom thrive best in light… so they demand that it ALL be laid bare! As a moderate pragmatist (though perhaps a militant one) I find such zero-sum passion unnerving. But such people merit our attention.

In one extreme example…

Noah Dyer, a professor at Tempe’s University of Advancing Technology, wants to “live without privacy for a full year” by paying a camera crew to film him at all times. “The way I see it is that we’re going to lose our privacy, but that’s going to be awesome. The society that most quickly embraces not having any privacy is going to have the biggest evolutionary advantage. All of their citizens are going to be able to act in their own best interest based on totally accurate information.” ( Why We Care About Privacy.)

Dyer is getting a lot of press for a hackneyed and simplistically predictable stunt that we’ve actually seen before… posting online absolutely everything about his life, from his email passwords to bathroom breaks and sex.

Pardon me for yawning, but if you expect “Mr. Transparency” to get excited about this, either way, sorry about that. Likewise the frantic, “danger, Will Robinson!” hysterics of this reporter who writes about Dyer, in the Atlantic. Please.

== More zero-sum contempt ==

Much attention has also been given to Dave Eggers’s book — The Circle — portraying a future in which Dyer’s view is dominant and the plot-propelling oppressive nosiness comes not from a single Big Brother state but from millions of insatiably nosy little brothers, nagging and judging and chivvying those who seem reluctant to “share everything.” Most people don’t realize that this failure mode… and not an Orwellian state… is the scenario taking place in Ray Bradbury’s Fahrenheit 451. In the Eggers book, his heroes desperately seek a little privacy or space to be themselves, to be unique and autonomous human beings.

Of course, this zero-sum, either-or kind of thinking is poisonous. It is just as oversimplifying as any would-be tyrannical system, clothing itself in sanctimony, by portraying an “opposite” that can be nothing but vile. A strawman that Eggers sets up in order to be knocked down.

In fact, we do not have to choose among triplet fangs: Big Brother surveillance, stripped-naked little-brother coveillance, or (heaven forbid) the MYOB (mind your own business) rage of privacy “defenders” who just play into Big Brother’s hands by denouncing cartoon versions of transparency.

In fact, the society of nosy jerks portrayed in The Circle will not happen, because your neighbors would hate it just as much as you hate the thought of it! Eggers’s portrayal of his fellow humans and citizens is depressing not because it might come true, but because Eggers and the critics who praise him actually seem to believe (in their sanctimony) that their neighbors would put up with such a world… instead of using transparency and openness to catch the voyeurs and say “hey man! Back off.”

Well, well. Perhaps they are members of a different species than you and me.

== More shallow privacy articles ==

Is there anyone out there even slightly interested in probing this important matter with nuance and a positive-sum frame of mind? Maybe suggesting ways that we can win-win?

Jacob Morgan’s rather shallow article in Forbes suggests that “Privacy Is Completely And Utterly Dead, And We Killed It” — without contemplating at all whether there are types of privacy, and whether some kinds might be protected, even enhanced, in a mostly transparent world, wherein we are empowered to watch the watchers and to catch the peeping toms.

As I mentioned, in the Guardian, Alex Preston falls into the same zero-sum thinking: “Google knows what you’re looking for. Facebook knows what you like. Sharing is the norm, and secrecy is out. But what is the psychological and cultural fallout from the end of privacy?”

At least a little better than those dismal jeremiads… read the article: Why We Care About Privacy. And yes, my positive-sum temperament makes me believe we can gain the advantages of a transparent society without going this far. Still, it is a refreshing contrast against the usual zero-sum reaction to the info-age… railing laments and demands for levels of privacy that only ever existed in our minds, plus shrilly silly-unrealistic demands that the mighty “stop looking at me!”

As if such wailings ever stood the slightest chance of working. We will never blind the eyes above us. But we still have a chance to strip them naked. And look back.

== Can we see what’s watching us? ==

To illustrate how pervasive omni-veillance is becoming… Computer vision is embedded in toilets, urinals, hand-wash faucets, as well as those domes in the ceilings that monitor you in buildings like banks and casinos (and soon everywhere). Now, sousveillance maven and Toronto professor Steve Mann has a fascinating paper describing methods to easily reveal the scanning field of such visual sensing systems: The Sightfield: Visualizing Computer Vision, and seeing its capacity to “see”:

“Moving a wand through space, while tracking its exact 3D position, makes visible the otherwise invisible “rays of sight” that emanate from cameras. This capacity to sense, measure, and visualize vision, is useful in liability, insurance, safety, and risk assessment, as well as privacy/priveillance assessment, criminology, urban planning, design, and (sur/sous)veillance studies.”

Mann concludes, “The device may be used cooperatively, e.g. by a user or owner of a surveillance system to visualize the efficiency of their own cameras, or uncooperatively, as a video “bug sweeper” which uses video feedback to detect a hidden surveillance or sousveillance.”
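The geometric heart of such a sightfield-mapping wand can be sketched in a few lines. This toy is not Mann’s actual method: in place of video feedback it substitutes a plain viewing-cone test, and every camera parameter and probe point below is an invented example.

```python
import math

def visible(point, cam_pos, cam_dir, fov_deg=60.0, max_range=10.0):
    """Toy pinhole test: is `point` inside the camera's viewing cone?"""
    ray = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(r * r for r in ray))
    if dist == 0 or dist > max_range:
        return False
    # angle between the camera axis and the ray to the point (cam_dir is unit length)
    cos_angle = sum(r * a for r, a in zip(ray, cam_dir)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg / 2

# "Wave the wand" through a grid of probe points; keep the ones the camera sees.
cam_pos, cam_dir = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)  # camera at origin, looking down +x
sightfield = [(x, y) for x in range(1, 6) for y in range(-3, 4)
              if visible((float(x), float(y), 0.0), cam_pos, cam_dir)]
print(len(sightfield))  # number of grid points inside the sightfield
```

Sweep enough probe points this way and the visible set traces out, in space, exactly the “rays of sight” Mann describes making tangible.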

There is hope. If we insist on a general ability to see, that will include the ability to spot voyeurs. If we start designing systems right, then we will be able to do what assertively brave humans have always been able to do, when some busybody stares. Tell them: “Hey bub…. back off.”
POSTSCRIPT: Following up from last time.

America’s police departments need greater accountability—and it must come from outside the forces.

Yes… though with less sanctimony. Do this progressively, pragmatically, irresistibly, with some sympathy for the 85% of cops who are sincerely trying to do a really, really hard job.

1 Comment

Filed under society, transparency

Citizen Communications and Citizen Power

If you push long and hard enough for something that is logical and needed, a time may come when it finally happens! At which point – pretty often – you may have no idea whether your efforts made a difference. Perhaps other, influential people saw the same facts and drew similar, logical conclusions! Here is the latest example of this happening to me:

“Qualcomm and other wireless companies have been working on a new cellular standard—a set of technical procedures that ensures devices can “talk” to one another—that will keep the lines open if the network fails. The Proximity Services, or so-called LTE Direct, standard will be approved by the end of the year.”

This technology, which would allow our pocket radios to pass along at-minimum basic text messages on a peer-to-peer (P2P) basis, even when the cell system is down, would seem to be the obvious backup mode that we all might rely upon in emergencies. Indeed, failure of cell service badly exacerbated the tragedies of Hurricane Katrina and the Fukushima tsunami. I have been hectoring folks about this since 1995, when I started writing The Transparent Society, and in annual speeches/consultations with various agencies and companies, back east, ever since.

Indeed, it was access to communications that enabled New Yorkers to show the incredible citizen resilience that Rebecca Solnit portrays so well in her book A Paradise Built in Hell: The Extraordinary Communities that Arise in Disaster. Communications enabled the brave passengers of flight UA 93 to “win” the War on Terror, the very day that it began.

A few years after brainstorming with some engineers at Qualcomm, I learned that the company was charging ahead with LTE Direct, installing it in their chip sets whether or not AT&T and Verizon decided to activate it. In emergencies, phones that use it will be able to connect directly with one another over the same frequency as 4G LTE transmissions. Users will be able to call other users or first responders within about 500 meters. If the target is not nearby, the system can relay a message through multiple phones until it reaches its destination.
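In outline, that relay scheme is classic store-and-forward flooding: every handset rebroadcasts any message it has not already seen, decrementing a hop limit, until a copy reaches its target. A toy sketch in Python, with all names and numbers invented (real LTE Direct discovery and radio behavior are far more involved):

```python
import uuid

class Phone:
    """Toy model of a handset doing P2P store-and-forward relay."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # phones within radio range (~500 m)
        self.seen = set()     # message ids already handled (prevents loops)
        self.inbox = []

    def send(self, dest_name, text, ttl=8):
        msg = {"id": uuid.uuid4().hex, "dest": dest_name, "text": text, "ttl": ttl}
        self.receive(msg)

    def receive(self, msg):
        if msg["id"] in self.seen:        # duplicate: drop
            return
        self.seen.add(msg["id"])
        if msg["dest"] == self.name:      # we are the target
            self.inbox.append(msg["text"])
            return
        if msg["ttl"] <= 0:               # hop limit exhausted: drop
            return
        relayed = dict(msg, ttl=msg["ttl"] - 1)
        for peer in self.neighbors:       # naive flood to every peer in range
            peer.receive(relayed)

# A chain of phones: A can reach D only by hopping through B and C.
a, b, c, d = (Phone(n) for n in "ABCD")
a.neighbors = [b]; b.neighbors = [a, c]; c.neighbors = [b, d]; d.neighbors = [c]
a.send("D", "are you safe?")
print(d.inbox)  # ['are you safe?']
```

The dedupe set and hop limit are what keep a flood like this from echoing forever, which is also why simple text messages, not streams, are the natural payload for an emergency backup mode.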

When it is fully operational, the benefits will become apparent. A more robust, resilient and agile civilization will be more ready for anything that might come.

== Phones & Protest ==

Last year, largely unheralded by media, saw the most important civil liberties decision in thirty years, when the courts and the Obama Administration separately declared it to be “settled law” that citizens have a right to record their interactions with police, in public places. There will be tussles over the details for years, as discussed here. And here.

Those tussles could be hazardous! The Electronic Frontier Foundation has published a guide to using cell phones if you are going to a protest or other zone of potentially tense interaction with police.

Good, practical advice. I have long urged folks to join EFF as one of your dozen or so “proxy power associations.” I do not always agree with them! But that doesn’t matter as much as ensuring that they — and the ACLU, etc. — remain out there and untrammeled.

For more on your right and duty to join orgs that give your voice see: Proxy Power…

== and in related news… ==

Taser International (TASR), which makes the most widely used police body cameras, increased its bookings for its video unit almost twofold last quarter, signing deals with the police departments of Winston-Salem, N.C., Spartanburg County, S.C., and San Diego. The company provides both hardware and data services related to the cameras and now works with 20 major cities in one capacity or another.

Groups that would normally be skeptical of authorities videotaping everything support the idea of camera-equipped cops. The American Civil Liberties Union published a white paper last year supporting the use of the cameras. “Everybody wishes right now there was a video record of what happened,” says Jay Stanley, the author of the ACLU’s paper, referring to the Ferguson shooting.

“While no technical solution would eliminate misconduct completely, cameras do seem as if they could help reduce the legal bill. A study published last April showed that complaints against police dropped 88 percent in Rialto, Calif., after that city began randomly assigning officers to wear body cameras. At the same time, use-of-force incidents dropped 59 percent,” writes Joshua Brustein: In Ferguson’s Aftermath, Will Police Adopt Body Cameras?

See how this was forecast — pretty much all of it — in The Transparent Society. What will happen when both cops and the citizens they stop are armed with cameras, all the time?

Better safety, better law, less injustice… but it will also be the dawn of the Golden Age of Sarcasm.


Leave a comment

Filed under transparency

Ways to make civilization robust

The resilience of our entire civilization is increasingly reliant on a fragile network of cell phone towers, which are the first things to fail in any crisis, e.g. a hurricane or other natural disaster… or else deliberate (e.g. EMP or hacker) sabotage.

I have been nagging about this for almost two decades. My recommendation — offered to national and corporate leaders since 1995? That our pocket phones should have a backup communication mode that is peer-to-peer, that could pass messages from phone to phone through any afflicted area until they reach a zone with cell service, at which point the messages would spill into the continental network.

This would be frightfully easy to accomplish, especially for simple text messages. In fact, the technology has been incorporated in Qualcomm’s latest chip sets. Though the major carriers — AT&T, Verizon, Sprint, etc — have all refused to activate it. This despite the fact that they would be perfectly free to bill for any P2P-passed messages — that’s easy. For years I asked national officials to require this backup, as a matter of overall robustness and public safety. Access to working phones made the biggest difference between two disasters… 9/11 – “the Day of the Citizen,” when average folks were able to self-organize and step up – vs the calamitous collapse of civilization during and after Hurricane Katrina.

Now comes terrific news. “Qualcomm and other wireless companies have been working on a new cellular standard—a set of technical procedures that ensures devices can “talk” to one another—that will keep the lines open if the network fails. The Proximity Services, or so-called LTE Direct, standard will be approved by the end of the year.”

I am tempted to proclaim that “nagging eventually pays off!” But of course, there are lots of smart people out there who could see the same things that I did. When I gave a talk at Qualcomm about similar ideas, some years ago, I described how simple it would be to do this with packets, like text messages. The next time I spoke to some of their managers, I was stunned to learn they had not only made great strides in peer-to-peer, but were proposing a version that could even do P2P for real-time voice communication! Now that’s some ingenuity. That’s some company.

== Hey, you, get offa that cloud ==

Oh, but trends are far worse on the business side of the Internet. Any company (or person) who tries to be “efficient” by entrusting crown jewel data to the Cloud has got to be crazy. Take this from Mark Anderson, one of the smartest tech-industry pundits:

“There are two chilling trends in Internet security that were underlined this week with the announcement by Hold Security of a Russian crime ring taking around 1.2 billion user names and password combinations from perhaps 420,000 different hacked websites. The first is a ramping of theft success on all scores, from personal IDs to nations stealing crown jewel intellectual property, which simply can no longer be tolerated if innovation and commerce are to continue. 

“The second is a massive movement to cloud computing, driven by financial requirements rather than security requirements, at a time when our internal sources indicate that clouds have already been hacked.”

This is related to another point I’ve made since 1995… and in The Transparent Society… that everything leaks, sooner or later. And we are better off making ourselves and our systems robust, able to shrug off and adapt to this inevitability, than whining and thrashing about, expecting the next “security” measure to work, at last.

It is disparities in transparency that threaten the health of freedom, markets, science and civilization.

Remember this. Most villains (just like vampires) are fatally allergic to light. Hence, the trick will be to expose them to it! Lots of it. The solution is not to cower in the few remaining shadows hoping for concealment. They are better at that than you and I are.


== Transparency-related news ==

Here’s an algorithm that could use Facebook Likes alone to reliably determine six million users’ private traits like their sexual orientation, IQ, religious beliefs, life satisfaction, and personality traits—even when the Likes seemingly had nothing to do with the traits in question. Do not get outraged. This is absolutely inevitable! What you can do is shift your passion over to sousveillance.
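The mechanics behind such studies are not exotic. Here is a toy sketch of the general technique, with everything invented: the page names, the Like patterns and the trait labels below are hypothetical illustrations, not data from the actual study. Each user becomes a binary vector of Likes, and a simple classifier is fit to a private trait:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=3000):
    """Plain gradient-descent logistic regression, no libraries needed."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Probability that the trait is present, given a Like vector."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Rows = users; columns = whether each user Liked four invented pages:
# ["Curly Fries", "Science News", "Monster Trucks", "TED Talks"]
X = [[1, 1, 0, 1],
     [1, 0, 0, 1],
     [0, 0, 1, 0],
     [0, 1, 1, 0],
     [1, 1, 0, 0]]
y = [1, 1, 0, 0, 1]  # 1 = private trait present in the training data

w, b = train_logreg(X, y)
# A new user whose Likes "seemingly have nothing to do" with the trait:
print(round(predict(w, b, [1, 1, 0, 1]), 2))
```

The point of the sketch: no single Like reveals anything, but correlations across millions of users let an utterly ordinary classifier infer what was never disclosed.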

Another insightful article explores the many potential advantages when civilians become empowered to fly their own drones: the ability to independently verify events, ensure accountability for public officials and police, provide situational awareness, deliver or fetch important items…. Yes, there will be privacy concerns. But how better to catch that neighborhood voyeur than with a drone of your own, so that you can track the peeping tom and tell his mom!

And in the category of “how do you plan to stop this?”: “By 2010, license-plate scanners had become standard equipment for most urban repo firms, and the number of plates stored in national databases was growing by tens of millions a month. … The richer the data gets, the easier it is to make predictions about a driver’s home address, workplace, gym, or favorite restaurant. Digital Recognition Network (DRN) has one of the largest plate-capture databases in the country, with a fleet of more than 2,000 affiliated trucks and upwards of 1.8 billion scans.”

Answer: Any attempt to repress this – or face recognition – will only ensure that elites still have this power — governments, corporations, criminals — while such laws make sure you and I have no access. They will become gods and we will be permanent peasants. If this is inevitable, then let us all see. And then let’s learn – because of that light – to leave each other alone.

Oh, but then… artists are putting into practice my point about rendering surveillance visible to the rest of us. Some very interesting… and pointedly clever… innovations.

And finally, here’s something that’s simultaneously funny and deeply, deeply offensive. But also a clever way for a company to make its point… and that means it is likely they were all actors, after all, invalidating the whole thing. All told, a clever META view of where we are heading in the VR/AR holodeck world. Faked nuclear war….


Filed under transparency

Media Challenge FAA Drone Ban — and drones conveying beauty?

Drones have already been used on several occasions in the US to document the news. Last week, a storm chaser in Arkansas used a drone to record the havoc wrought by a tornado. But the U.S. Federal Aviation Administration (FAA) has been very slow to adopt rules for private and corporate drone use and has taken a draconian zero-tolerance policy on its interim ban on almost all such uses. Now a number of media companies, including The New York Times and The Associated Press, have accused the FAA of violating the First Amendment.

Is this a difficult problem? Sure! Just imagine a future cityscape abuzz with irritating mechanical vultures — delivery owls and snoopy eye-spies, swooping about, colliding with buildings and each other and power lines, causing blackouts and raining shattered, glowing parts on all below. At minimum, city use should involve devices capable of situational awareness, detection of collision hazards, and minimum-separation rules. But dig it – we will only get there if the experiments can proceed in a few cities, to see what really happens!

Start with Houston. They don’t give a darn anyway….

== Drones, androids and robots bring you the news! ==

Will human journalists become obsolete? I participated in an online (HuffPost) panel discussion about the latest trend… robotizing the news media. Here are just a few examples of the trend.

Japan Unveils Its First Android Newscaster. Not exactly uncanny, yet. But they’re busy. With an expected 7% drop in population, their interest in automation is very high.

AP Will Use Robots to Write Some Business Stories – 4,000 robo-stories in the time it takes human writers to do 300.

Shades of Max Headroom! The following couch discussion of this is… fluffy, and made me want to replace the panel with robots! Another News Outlet Is Using Robots To Write Stories.

Apparently most sports stories have come to us this way for several years.  (I suspect decades, even generations.)

== And more drones…  ==

Drones… everywhere! Illustrating what has sometimes been called Brin’s Corollary to Moore’s Law: that cameras get smaller, faster, cheaper, more numerous and more mobile, faster than Moore’s Law itself. Now… watch how the flying cams are getting far more rugged, using a simple gimbal-in-a-cage approach! Watchbirds here we come, yippee.

Oh, but see the very end of this blog for one of the best links you’ll ever click, brought to you by a drone.

== The insurrectionary recourse? ==

All the ructions and revolutions overseas raise an earnest question: could it happen here? Dialing in closer: is it still even theoretically possible for a mass citizen uprising to topple the government of the modern, western state? Mr. Harry Bentham makes an earnest effort and raises a few interesting points in “Does Modern Tech Render the 2nd Amendment Redundant?”

Alas, his appraisal winds up being rather shallow, simply reiterating his arm-waved and evidence-free assertion that a mass uprising, armed with civilian rifles, could naturally and easily overcome forces of the modern state. Mr. Bentham leaves aside any discussion that:

– Any mass civil ruction will likely feature as many armed civilian “tories” as “rebels.”

– Local police have lately been heavily up-armed to close to military levels. Their loyalties in a crisis would complicate matters.

– Everything depends upon the morale and attitudes of the troops. If they retain strong connectivity and identification with the populace, they will be unreliable instruments of repression.

These and other factors were discussed in my own treatment on this issue — The Jefferson Rifle: Guns and the Insurrection Myth — where I appraise whether modern westerners — and Americans in particular — still retain an “insurrectionary recourse.”

And why attachment to that ideal is THE driver behind the refusal of the Gun Lobby to consider even modest compromises.


== Finally… drones and sheer beauty ==

I cannot recall when last an item of media so delighted me. I am… for once… speechless. Though proud to live in …
oh, just click this. Full screen. 


Filed under science, society, technology, transparency

Everything leaks – get used to it.  Use it. Also: is Skynet coming?

== Will Wall Street give us Terminator? Others weigh in ==

A few years ago, I posed a chilling hypothesis: that AGI — “artificial general intelligence” that’s equivalent or superior to human — might “evolve-by-surprise,” perhaps even suddenly, out of advanced computational systems. And yes, that’s the garish-Hollywood “Skynet” scenario leading to Terminator.

Only I suggested a twist — that it would not be military or government or university computers that generate a form of intelligence, feral and self-interested and indifferent to human values. Rather, that a dangerous AI might emerge out of the sophisticated programs being developed by Wall Street firms, to help them game (many might say cheat) our economic system.

Indeed, more money is being poured into AI research by Goldman Sachs alone than by the top five academic centers put together, all of it helping to engender systems with a central ethos of predatory opportunism and parasitic amorality. Oh, and did I mention it’s all in secret? The perfect Michael Crichton scenario.

Now comes a book by documentary filmmaker James Barrat — Our Final Invention: Artificial Intelligence and the End of the Human Era — reviewed here on the ThinkAdvisor site — Are Killer Robots the Next Black Swan? — in which Barrat discusses a scenario sketched out by Alexander Wissner-Gross, a scientist-engineer with affiliations at Harvard and MIT, that seems remarkably similar to mine. Opines Wissner-Gross:

“If you follow the money, finance has a decent shot at being the primordial ooze out of which AGI emerges.”

Barrat elaborates: “In other words, there are huge financial incentives for your algorithm to be self-aware—to know exactly what it is and model the world around it.”

The article is well worth a look, though it leaves out the grand context — that “emergent-evolving” AGI makes up only one of six different general varieties of pathways that might lead to AI. To be honest, I don’t consider it to be the most likely.

But that has no bearing on what we — as a civilization — should be doing, which is taking reasonable precautions: looking ahead and pondering win-win ways that we can move forward while evading the most obviously stupid mistakes.

Secret schemes of moolah masters — that’s no recipe for wisdom. Far better to do it all in the light.

== Everything leaks ==

Heartbleed: Yes, It’s Really That Bad. So says the Electronic Frontier Foundation (EFF). Heartbleed exploits a critical flaw in OpenSSL, which is used to secure hundreds of thousands of websites, including major sites like Instagram, Yahoo, and Google. This article in WIRED also suggests that you can redouble your danger by rushing to trust fly-by-night third parties offering to fix the flaw… and meanwhile, the “big boys” of industry aren’t offering general solutions, only patches to their own affected systems.
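For the curious, the flaw itself was almost embarrassingly simple: the TLS “heartbeat” reply trusted the length the client *claimed* to have sent, rather than the length of the data actually received. Here is a toy simulation in Python, not OpenSSL’s actual C code, with invented buffer contents (Python slicing cannot truly over-read memory, so the sketch keeps the “secret” in the same buffer to show the effect):

```python
# Simulated server "memory": the heartbeat buffer sits next to secrets.
SERVER_MEMORY = bytearray(b"ping!" + b"...SECRET_PRIVATE_KEY..." + b"misc")

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    """Echo the heartbeat back, trusting the client's claimed length."""
    buf = bytearray(SERVER_MEMORY)
    buf[0:len(payload)] = payload       # payload lands in a shared buffer
    return bytes(buf[0:claimed_len])    # BUG: no check against len(payload)

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    """The post-patch behavior: reject mismatched lengths."""
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload")
    return payload[:claimed_len]

# The attack: send a tiny payload but claim it was much longer.
leak = heartbeat_vulnerable(b"hi", 64)
print(b"SECRET" in leak)   # True: adjacent "memory" spills into the reply
```

In the real bug, the over-read walked past the end of a C buffer into whatever happened to sit nearby on the heap, which could include passwords and private keys, and the one-line fix was exactly that missing bounds check.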

The crux? (1) Change your passwords on sites where financial or other vital info is dealt with, then gradually work your way through the rest, as each site offers you assurances. (2) Try not to have the passwords be the same. (3) Help ignite political pressure for the whole world of online password security to have a rapid-response component (not dominance) offered by a neutral agency… one that is totally transparent, neutral and separate from all law-enforcement or espionage “companies.” And…

…and (4) might I ask if you’ve noticed that this kind of event happens about twice a year? And it has been that way since the 1980s. Each of the events is a scandal in its own right… hackers grab half a million Target card numbers… or Microsoft springs a leak… or Goldman Sachs… or Equifax… or Chelsea Manning and Julian Assange and Edward Snowden rip off veils of government secrecy… and pundits howl and the public quakes and no one ever seems to draw the correct conclusion —

that everything eventually leaks! And that maybe the entire password/secrecy model is inherently flawed. Or that there is another, different model that is inherently far more robust, that has only ever been mentioned in a few places, so far.

Here is one of those places.
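On point (2) in the crux list above — never reusing a password — the easiest path is to let a machine invent them. A minimal sketch using nothing but Python’s standard library (the site names are placeholders, and a real password manager adds encrypted storage on top of this):

```python
import secrets
import string

# Character pool for generated passwords.
ALPHABET = string.ascii_letters + string.digits + "-_!@#"

def new_password(length: int = 16) -> str:
    """One cryptographically random password per site, never reused."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A toy "vault": each site gets its own independent password, so one
# site's breach (a Heartbleed, a Target, an Equifax) exposes nothing else.
vault = {site: new_password() for site in ("bank.example", "mail.example")}
for site, pw in vault.items():
    print(site, pw)
```

The design point is the independence: because every entry is generated separately from a cryptographic source, a leak at one site gives an attacker zero purchase on any other.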

Meanwhile, whistleblowers remain a vital part of reciprocal accountability. I would like to see expanded protections that simultaneously expand reciprocal accountability and citizen sousveillance… while allowing our institutions to function in orderly ways.

Now comes this announcement that the Project On Government Oversight (POGO) has installed SecureDrop… a new way for whistleblowers to deposit information anonymously, shielded from authorities trying to root out leakers. As author of The Transparent Society, I sometimes surprise folks by straddling this issue and pointing out that the needs of the bureaucracy should not be discounted completely! Or by reflex. Whistleblowing falls across a very wide spectrum, and if we are sophisticated citizens we will admit that the revealers of heinous-illegal plots deserve more protection than mewling attention junkies.

Still, there is a real role to be played by those pushing the envelope. Read more about POGO here.

Then again… Facebook can now listen in on your activities with a new audio recognition feature for its mobile app that can turn on smartphones’ microphones to “hear” what songs or television shows are playing in the background. Sounds cool… um, not.

== Brandeis the Seer ==

The famous dissent in Olmstead v. United States (1928), by Justice Louis Brandeis, is a vital mirror to hold up to our times. Take the most famous part of his eloquent dissent, regarding a seminal wiretapping case:

“Our Government is the potent, the omnipresent teacher,” Brandeis concluded. “For good or for ill, it teaches the whole people by its example. Crime is contagious. If the Government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy. To declare that in the administration of the criminal law the end justifies the means — to declare that the Government may commit crimes in order to secure the conviction of a private criminal — would bring terrible retribution.”

Which brings us to Andrew O’Hehir’s recent article on Salon, using Brandeis as a foil to discuss – and denounce – some recent polemics against Edward Snowden and the journalist who published his revelations, Glenn Greenwald. To be honest, I found O’Hehir tendentious and sanctimonious, but there were some cogent moments that made the article worthwhile, especially when he shone some light on the incredible prescience Brandeis showed in his 1928 dissent:

“If Brandeis does not literally predict the invention of the Internet and widespread electronic surveillance, he comes pretty close,” for Brandeis wrote, “The progress of science in furnishing the Government with means of espionage is not likely to stop with wire-tapping… Ways may someday be developed by which the Government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.” Brandeis even speculated that psychiatrists of the future may be able to read people’s “unexpressed beliefs, thoughts and emotions” as evidence. O’Hehir notes, “…as far as I know we haven’t reached that dystopian nightmare yet. (But if that’s the big final revelation from the Snowden-Greenwald trove of purloined NSA secrets, you read it here first.)”

== Transparency media ==

Anyone care to review this for us? Post-Privacy and Democracy: Can there be Moral and Democratic Development in a Totally Transparent Society? by Patrick Held. It offers arguments for why the end of privacy, or at least of secrecy, might be inevitable, given our individual demand for technology.


Filed under society, technology, transparency

Must we hide behind masks?

== Hide from the Man? ==

“Our world is becoming increasingly surveilled. For example, Chicago has over 25,000 cameras networked to a single facial recognition hub,” reads the URME (pronounced “U R Me”) site:

“We don’t believe you should be tracked just because you want to walk outside and you shouldn’t have to hide either. Instead, use one of our products to present an alternative identity when in public.” What product? A rubber mask bearing the likeness of URME’s founder Leo Selvaggio.

If lots of people go around wearing these masks, the proto-Big Brother system of all those cameras will be…

ever so slightly inconvenienced, while store-owners and bank guards and mere passers-by will have their tension levels ratchet up.

Yeah, yeah, I’ve heard it all. This is a cool stunt, and it draws attention to our decaying yadda yadda. And it accomplishes nothing else, except to help promote the never-ending chain of whining from those who think we can protect freedom by moaning “don’t look at me!” (I lived in Britain in the 1980s, where the cameras were already blooming everywhere, inspiring me to write The Transparent Society. In Kiln People I portray how masks will provide only slight and superficial anonymity, till someone is motivated enough to scrupulously backtrack images.)

Yes, proto Big Brothers are all over the place! And yes, the camera networks could help bring us Big Brother! I fear the same outcome and I am just as militant in opposing it. More so!

Only there’s this. I know what works… what stands a chance of working. What has already worked well enough to give us the freedom that we do have….

…and it did not come from hiding…

…or whining “don’t look at me!”

== Wiretapping updated? ==

Strict-liability two-party consent eavesdropping laws seemed fair when they were passed in dozens of states, back in Stone Age days — like the 1960s — when the ability to record was unevenly possessed and furtive recording seemed unfair. Today it is foolish for anyone to assume, at any point, that what they are saying has no chance of being played back some other time. In particular, such two-party consent laws have been used to criminalize citizen recordings of their interactions with police and other government officials.

As reported here, the most important civil liberties matter in our lifetimes — certainly in thirty years — was hardly covered by the press. In 2013 both U.S. courts and the Obama Administration declared it to be “settled law” that a citizen has the right to record his or her interactions with police in public places. No single matter could have been more important because it established the most basic right of “sousveillance” or looking-back at power, that The Transparent Society is all about. It is also fundamental to freedom, for in altercations with authority, what other recourse can a citizen turn to, than the Truth?

(This was forecast in EARTH (1990), by the way.)

It is important to take a balanced view… not to surrender all expectations of privacy, but to know that openness and accountability will let us both stay free and enforce a little privacy, or at least insist that we be physically left alone.

In particular, the recent rulings about citizen recordings of police absolutely eviscerate the snarky-stupid shrugs of cynics who proclaim that it’s all defeat and spirals into Orwellian hell.

Let there be no mistake. The cynics are enemies of freedom, not its defenders. Their tirades of gloom undermine the confidence and can-do spirit of problem solving that might get us across this transition era.

Indeed, sometimes “looking back” works! 

== Owning our data ==

The Price of Haggling for Your Personal Data: This Slate article discusses the notion that each of us might leverage and benefit from the economic value of our information.

It is one (absurd) thing to declare “I own all the info about me!” and to demand others not look. That’s a non-starter and if we pass laws to forbid the mighty from looking at us, that will only make them furtive about it and ensure we’ll get no benefit. As Heinlein said: “The chief thing achieved by privacy laws is to make the (spy) bugs smaller.”

But it is reasonable to say that people have “interests” and “value” in their information and a right to derive royalties or a fee for its use, especially if some commercial interest is making money off it. Moreover it is in an open society that we might be able to track who is using our data and insist on routine and proper payment for such use. The idea of people controlling and selling their data for personal and economic gain—as Jaron Lanier describes in Who Owns the Future? and Doc Searls elaborated on in The Intention Economy: When Customers Take Charge—is gaining traction.

In this interesting article on Slate, John C. Havens asserts that it’s not just about money: “But it won’t take hold until we answer a more deeply fundamental question: What are we worth as a whole?”

Indeed, a number of Internet mavens over the years, including Jaron Lanier, have prophesied that citizens will – at some point – demand to benefit from the commercial use that major entities and corporations are deriving from information about hundreds of millions of people.

Our data is being swapped about, and – as author of The Transparent Society – I don’t find open information flows to be the problematic thing. The problem is the cutting of us little guys out of any participation in the value chain deriving from our data.

Indeed, the way our data is shuttled and sold is invisible to us!

An article by Gregory Maus — How Transparent Big Data Markets Could Better Protect Your Data…and Your Rights — suggests setting up transparent, privately-owned, but publicly-regulated markets for the data. “Imagine something like an Amazon, Alibaba, or New York Mercantile Exchange, focused on the purchase and licensing of Big Data. Suppliers could increase their markets, buyers could increase their options, and all transactions would be public record.”

Now comes the Hub of All Things (HAT) project. The HAT is building a database that will be owned by the individuals who produce the data in the first place. That includes social media data, energy-use data and internet-of-things data from our homes, such as the goods you use or the medicines you take. Kind of vague, so far. Indeed, I am doubtful. But over time, we must as a society develop ways for each person to benefit from a strong interest in his or her information.


For more on Transparency, Privacy and Accountability



Filed under transparency