Essential (mostly neglected) questions and answers about Artificial Intelligence: Part I

Worries about Artificial Intelligence are no longer just the province of science fiction or speculative futurism. Sober appraisals list potential dangers ranging from predatory resource consumption to AI harnessed into destructive competition between human nations and institutions. Many tales and films about AI dangers distill down to one fear: that new, powerful beings will recreate the oppression our ancestors suffered under feudal regimes. Perspective on these dangers – and potential solutions – can begin with a description of the six major categories or types of augmented intelligence that are currently under development. Will it be possible to program-in a suite of ethical imperatives, like Isaac Asimov’s Three Laws of Robotics? Or will a form of evolution take its course, with AI finding their own path, beyond human control?

Note: This general essay on Artificial Intelligence was circulated/iterated in 2020-2022. Nothing here is obsolete. But fast changing events in 2023 (like GPT-4) mean that later insights are essential, especially in light of panicky “petitions for a moratorium” on AI research. These added insights can be found at “The Way Out of the AI Dilemma.”

For millennia, many cultures told stories about built-beings – entities created not by gods, but by humans – creatures who are more articulate than animals, perhaps equaling or excelling us, though not born-of-women. Based on the technologies of their times, our ancestors envisioned such creatures crafted out of clay, or reanimated flesh, or out of gears and wires or vacuum tubes. Today’s legends speak of chilled boxes containing as many sub-micron circuit elements as there are neurons in a human brain… or as many synapses… or many thousand times more than even that, equaling our quadrillion or more intra-cellular nodes. Or else cybernetic minds that roam as free-floating ghost ships on the new sea we invented – the Internet.

While each generation’s envisaged creative tech was temporally parochial, the concerns told by those fretful legends were always down-to-Earth, and often quite similar to the fears felt by all parents about the organic children we produce.

Will these new entities behave decently?

Will they be responsible and caring and ethical?

Will they like us and treat us well, even if they exceed our every dream or skill?

Will they be happy and care about the happiness of others?

Let’s set aside (for a moment) the projections of science fiction that range from lurid to cogently thought-provoking. It is on the nearest horizon that we grapple with matters of policy. “What mistakes are we making right now? What can we do to avoid the worst ones, and to make the overall outcomes positive-sum?”

Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these general approaches overlap, they offer distinct implications for what characteristics emerging, synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values. We’ll list those general pathways below.

Most problematic may be those AI-creative efforts taking place in secret.

Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It can be argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed — via reciprocal accountability. For this to happen, there should be diversity of types, designs and minds, interacting under fair competition in a generally open environment.

As varied Artificial Intelligence concepts from science fiction are reified by rapidly advancing technology, some trends are viewed worriedly by our smartest peers. Portions of the intelligentsia — typified by Ray Kurzweil — foresee AI, or Artificial General Intelligence (AGI), as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.

Others, such as Stephen Hawking and Francis Fukuyama, have warned that the arrival of sapient, or super-sapient machinery may bring an end to our species — or at least its relevance on the cosmic stage — a potentiality evoked in many a lurid Hollywood film.

Swedish philosopher Nick Bostrom, in Superintelligence, suggests that even advanced AIs who obey their initial, human-defined goals will likely generate “instrumental subgoals” such as self-preservation, cognitive enhancement, and resource acquisition. In one nightmare scenario, Bostrom posits an AI that — ordered to “make paperclips” — proceeds to overcome all obstacles and transform the solar system into paper clips. A variant on this theme makes up the grand arc in the famed “three laws” robotic series by science fiction author Isaac Asimov.

Taking middle ground, Elon Musk joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research — and its products — open-source and accountable by maximizing transparency.

As one who has promoted those two key words for a quarter of a century (as in The Transparent Society), I wholly approve. Though what’s needed above all is a sense of wide-ranging perspective. For example, the panoply of dangers and opportunities may depend on which of the aforementioned half-dozen paths to AI wind up bearing fruit first. After briefly surveying these potential paths, I’ll propose that we ponder what kinds of actions we might take now, leaving us the widest possible range of good options.

General Approaches to Developing AI

Major Category I: The first approach tried – AI based upon logic, algorithm development and knowledge manipulation systems.

These efforts include statistical, theoretic or universal systems that extrapolate from concepts of a universal calculating engine developed by Alan Turing and John von Neumann. Some of these endeavors start with mathematical theories that posit Artificial General Intelligence (AGI) on infinitely-powerful machines, then scale down. Symbolic, representation-based approaches might be called traditional Good Old-Fashioned AI (GOFAI): overcoming problems by applying data and logic.

This general realm encompasses a very wide range, from the practical, engineering approach of IBM’s “Watson” through the spooky wonders of quantum computing all the way to Marcus Hutter’s Universal Artificial Intelligence based on algorithmic probability, which would appear to have relevance only on truly cosmic scales. Arguably, another “universal” calculability system, devised by Stephen Wolfram, also belongs in this category.

This is the area where studying human cognitive processes seems to have real application. As Peter Norvig, Director of Research at Google, explains, just this one category contains a bewildering array of branchings, each with passionate adherents. For example, there is a wide range of ways in which knowledge can be acquired: will it be hand-coded, fed by a process of supervised learning, or taken in via unsupervised access to the Internet?

I will say the least about this approach, which at minimum is certainly the most tightly supervised, with every sub-type of cognition being carefully molded by teams of very attentive human designers. Though it should be noted that these systems — even if they fall short of emulating sapience — might still serve as major sub-components of any of the other approaches, e.g. the emergent, evolutionary or emulation systems described below.

Note also that two factors must advance together for this general approach to bear fruit — hardware and software, which seldom develop in smooth parallel. This, too, will be discussed below.

“We have to consider how to make AI smarter without just throwing more data and computing power at it. Unless we figure out how to do that, we may never reach a true artificial general intelligence.”

— Kai-Fu Lee, author of AI Superpowers: China, Silicon Valley and the New World Order

Major Category II:   Machine Learning. Self-Adaptive, evolutionary or neural nets

Supplied with learning algorithms and exposed to experience, these systems are supposed to acquire capability more or less on their own. In this realm there have been some unfortunate embeddings of misleading terminology. For example, Peter Norvig points out that a term like “cascaded non-linear feedback networks” would have covered the same territory as “neural nets” without the barely pertinent and confusing reference to biological cells. On the other hand, AGI researcher Ben Goertzel replies that we would not have hierarchical deep learning networks if not for inspiration by the hierarchically structured visual and auditory cortex of the human brain, so perhaps “neural nets” are not quite so misleading after all.
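Norvig’s point can be illustrated with a toy sketch (purely illustrative Python; all weights and numbers here are invented): a “neural net,” stripped of biological connotation, is simply a cascade of weighted sums passed through non-linear functions.

```python
import math

def layer(inputs, weights, biases):
    """One 'cascade' stage: weighted sums squashed by a non-linearity (tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A two-stage cascade: nothing biological here, just composed non-linear maps.
hidden = layer([0.5, -0.2], weights=[[1.0, 0.5], [-0.3, 0.8]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[0.7, -1.2]], biases=[0.0])
```

Stack enough such stages and adjust the weights from data, and you have the essence of “deep learning” — whatever name one prefers for it.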

While not all such systems operate in an evolutionary setting, the “evolutionist” approach, taken to its farthest interpretation, envisions trying to evolve AGI as a kind of artificial life in simulated environments. There is an established corner of the computational intelligence field that does borrow strongly from the theory of evolution by natural selection. These include genetic algorithms and genetic programming, which involve reproduction mechanisms like crossover that are nothing like adjusting weights in a neural network.

But in the most general sense it is just a kind of heuristic search. Full-scale, competitive evolution of AI would require creating full environmental contexts capable of running a myriad competent competitors, calling for massively more computer resources than alternative approaches.
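To make the contrast concrete, here is a minimal genetic-algorithm sketch in Python (the target bitstring, rates, and population size are all invented for illustration): selection, crossover and mutation act on whole genomes, a very different mechanism from tuning network weights.

```python
import random
random.seed(1)

TARGET = [1] * 16  # toy fitness goal: evolve an all-ones bitstring

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Single-point crossover: a reproduction mechanism, not weight adjustment."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection: keep the fittest
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
```

As the paragraph above notes, this is in the end just a heuristic search over genome space, driven by a fitness function rather than a gradient.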

The best-known evolutionary systems now use reinforcement learning or reward feedback to improve performance by either trial and error or else watching large numbers of human interactions. Reward systems imitate life by creating the equivalent of pleasure when something goes well (according to the programmers’ parameters), such as increasing a game score. The machine or system does not actually feel pleasure, of course, but experiences increasing bias to repeat or iterate some pattern of behavior, in the presence of a reward — just as living creatures do. A top example would be AlphaGo, which learned by analyzing a large number of games played by human Go masters, as well as simulated quasi-random games. DeepMind’s later systems learned to play and win games without any instructions or prior knowledge, simply on the basis of point scores amid repeated trials.
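A minimal sketch of that reward-feedback idea (a hypothetical two-action setup in Python; the payoff probabilities are invented): the system never “feels” anything, it merely accumulates a statistical bias toward whichever behavior the reward signal favors.

```python
import random
random.seed(0)

# Two possible actions; action 1 pays off more often (the programmers' parameters).
def reward(action):
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

values = [0.0, 0.0]   # running estimate of each action's payoff
counts = [0, 0]

for trial in range(500):
    # Mostly exploit the action currently believed best; occasionally explore.
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # incremental mean
```

After a few hundred trials the agent overwhelmingly repeats the rewarded action — a “bias to repeat a pattern of behavior,” nothing more.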

While OpenCog uses a kind of evolutionary programming for pattern recognition and creative learning, it takes a deliberative approach to assembling components in a functional architecture in which learning is an enabler, not the main event. Moreover, it leans toward symbolic representations, so it may properly belong in category #1.

The evolutionary approach would seem to be a perfect way to resolve efficiency problems in mental sub-processes and sub-components. Moreover, it is one of the paths that have actual precedent in the real world. We know that evolution succeeded in creating intelligence at some point in the past.

Future generations may view 2016-2017 as a watershed for several reasons. First, this kind of system — generally now called “Machine Learning” or ML — has truly taken off in several categories including vision, pattern recognition, medicine and, most visibly, smart cars and smart homes. It appears likely that such systems will soon be able to self-create ‘black boxes’… e.g. an ML program that takes a specific set of inputs and outputs, and explores until it finds the most efficient computational routes between the two. Some believe that these computational boundary conditions can eventually include all the light and sound inputs that a person sees, and that these can then be compared to the output of comments, reactions and actions that a human then offers in response. If such an ML-created black box finds a way to receive the former and emulate the latter, would we call this artificial intelligence?
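That “black box” notion can be sketched in miniature (an invented toy example in Python): given only paired inputs and outputs, a search procedure explores parameter space until it finds some computational route between the two, with no human specifying the route itself.

```python
import random
random.seed(2)

# Paired inputs and outputs; the 'black box' must find a route between them.
# (Here the hidden rule is y = 3x + 1, but the search is never told that.)
examples = [(x, 3 * x + 1) for x in range(-5, 6)]

def error(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in examples)

# Random hill-climbing: keep any perturbation that reduces the mismatch.
best = [0.0, 0.0]
for step in range(5000):
    candidate = [p + random.gauss(0, 0.1) for p in best]
    if error(candidate) < error(best):
        best = candidate
```

The search recovers the hidden mapping without ever being handed it — the question raised above is whether doing this at the scale of human sensory inputs and behavioral outputs would deserve the name intelligence.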

Progress in this area has been rapid. In June 2020, OpenAI released Generative Pre-trained Transformer 3 (GPT-3), a very large language model accessed through an application programming interface. GPT-3 is a general-purpose autoregressive model that uses deep learning to produce human-like text responses. It trained on 499 billion dataset “tokens” (sub-word fragments of text), including much text “scraped” from social media, all of Wikipedia, and all of the books in Project Gutenberg. Later, the Beijing Academy of Artificial Intelligence created Wu Dao, an even larger AI of similar architecture that has 1.75 trillion parameters. Until recently, use of GPT-3 was tightly restricted and supervised by the OpenAI organization because of concerns that the system might be misused to generate harmful disinformation and propaganda.

Although its ability to translate, interpolate and mimic realistic speech has been impressive, the system lacks anything like a human’s overview perspective on what “makes sense” or conflicts with verified fact. This lack manifested in some publicly embarrassing flubs. When it was asked to discuss Jews, women, black people, and the Holocaust, GPT-3 often produced sexist, racist, and otherwise biased and negative responses. In one answer it asserted that “The US Government caused 9/11,” and in another that “All artificial intelligences currently follow the Three Laws of Robotics.” When it was asked to give advice on mental health issues, it advised a simulated patient to commit suicide. When GPT-3 was asked for the product of two large numbers, it gave an answer that was numerically incorrect and clearly too small, by about a factor of 10. Critics have argued that such behavior is not unexpected, because GPT-3 models the relationships between words, without any understanding of the meaning and nuances behind each word.

Confidence in this approach is rising, but some find it disturbing that the intermediate modeling steps bear no relation to what happens in a human brain. AI researcher Ali Rahimi claims that “today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis.” And hence, they have more in common with ancient mystery arts, like alchemy. “Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.”

Thoughtful people are calling for methods to trace and understand the hidden complexities within such ML black boxes. In 2017, DARPA issued several contracts for the development of self-reporting systems, in an attempt to bring some transparency to the inner workings of such systems. Physicist/futurist and science fiction author John Cramer suggests that, following what we know of the structure of the brain, they will need to install several semi-independent neural networks, with differing training sets and purposes, to act as supervisors. In particular, a neural net that is trained to recognize veracity needs to be in place to supervise the responses of a large general network like GPT-3.

AI commentator Eric Saund remarks: “The key attribute of Category II is that, scientifically, the big-data/ML approach is not the study of natural phenomena with an aim to replicate them. Instead, theoretically it is engineering science and statistics, and practically it is data science.”

Note: These breakthroughs in software development come ironically during the same period that Moore’s Law has seen its long-foretold “S-Curve Collapse,” after forty years. For decades, computational improvements were driven by spectacular advances in computers themselves, while programming got better at glacial rates. Are we seeing a “Great Flip” when synthetic mentation becomes far more dependent on changes in software than hardware? (Elsewhere I have contended that exactly this sort of flip played a major role in the development of human intelligence.)

Major Category III: Emergentist

Under this scenario, AGI emerges out of the mixing and combining of many “dumb” component sub-systems that unite to solve specific problems. Only then (the story goes) might we see a panoply of unexpected capabilities arise out of the interplay of these combined sub-systems. Such emergent interaction can be envisioned happening via neural nets, evolutionary learning, or even some smart car grabbing useful apps off the web.

Along this path, knowledge representation is determined by the system’s complex dynamics rather than explicitly by any team of human programmers. In other words, additive accumulations of systems and skill-sets may foster non-linear synergies, leading to multiplicative or even exponentiated skills at conceptualization.

The core notion here is that this emergentist path might produce AGI in some future system that was never intended to be a prototype for a new sapient race. It could thus appear by surprise, with little or no provision for ethical constraint or human control.

Again, Eric Saund: “This category does however suggest a very important concern for our future and for the article. Automation is a growing force in the complexity of the world. Complex systems are unpredictable and prone to catastrophic failure modes. One of the greatest existential risks for civilization is the flock of black swans we are incubating with every clever innovation we deploy at scale. So this category does indeed belong in a general discussion of AI risks, just not of the narrower form that imagines AGI possessing intentionality like we think of it.”

Of course, this is one of the nightmare scenarios exploited by Hollywood, e.g. in Terminator flicks, which portray a military system entering cognizance without its makers even knowing that it’s happened. Fearful of the consequences when humans do become aware, the system makes fateful plans in secret. Disturbingly, this scenario raises the question: can we know for certain this hasn’t already happened?

Indeed, such fears aren’t so far off-base. However, the locus of emergentist danger is not likely to be defense systems (generals and admirals love off-switches), but rather High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all top universities, combined. Notably, HFT systems are designed in utter secrecy, evading normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best) and insatiable.

Major Category IV: Reverse engineer and/or emulate the human brain. Neuromorphic computing.

Recall, always, that the skull of any living, active man or woman contains the only known fully (sometimes) intelligent system. So why not use that system as a template?

At present, this would seem as daunting a challenge as any of the other paths. On a practical level, considering that useful services are already being provided by Watson, High Frequency Trading (HFT) algorithms, and other proto-AI systems from categories I through III, emulated human brains seem terribly distant.

OpenWorm is an attempt to build a complete cellular-level simulation of the nematode worm Caenorhabditis elegans, of whose 959 cells, 302 are neurons and 95 are muscle cells. The planned simulation, already largely done, will model how the worm makes every decision and movement. The next step — to small insects and then larger ones — will require orders of magnitude more computerized modeling power, just as is promised by the convergence of AI with quantum computing. We have already seen such leaps happen in other realms of biology such as genome analysis, so it will be interesting indeed to see how this plays out, and how quickly.

Futurist-economist Robin Hanson — in his 2016 book The Age of Em — asserts that all other approaches to developing AI will ultimately prove fruitless due to the stunning complexity of sapience, and that we will be forced to use human brains as templates for future uploaded, intelligent systems, emulating the one kind of intelligence that’s known to work.

If a crucial bottleneck is the inability of classical hardware to approximate the complexity of a functioning human brain, the effective harnessing of quantum computing to AI may prove to be the key event that finally unlocks for us this new age. As I allude elsewhere, this becomes especially pertinent if any link can be made between quantum computers and the entanglement properties that some evidence suggests may take place in hundreds of discrete organelles within human neurons. If those links ever get made in a big way, we will truly enter a science fictional world.

Once again, we see that a fundamental issue is the differing rates of progress in hardware development vs. software.

Major Category V: Human and animal intelligence amplification

Hewing even closer to ‘what has already worked’ are those who propose augmentation of real world intelligent systems, either by enhancing the intellect of living humans or else via a process of “uplift” to boost the brainpower of other creatures.  Certainly, the World Wide Web already instantiates Vannevar Bush’s vision for a massive amplifier of individual and collective intelligence, though with some of the major tradeoffs of good/evil and smartness/lobotomization that we saw in previous techno-info-amplification episodes, since the discovery of movable type.

Proposed methods of augmentation of existing human intelligence:

· Remedial interventions: nutrition/health/education for all. These simple measures have been shown to raise the average IQ scores of children by at least 15 points, often much more (the Flynn Effect), and there is no worse crime against sapience than wasting vast pools of talent through poverty.

· Stimulation: e.g. games that teach real mental skills. The game industry keeps proclaiming intelligence effects from their products. I demur. But that doesn’t mean it can’t… or won’t… happen.

· Pharmacological: e.g. “nootropics” as seen in films like “Limitless” and “Lucy.” Many of those sci fi works may be pure fantasy… or exaggerations. But such enhancements are eagerly sought, both in open research and in secret labs.

· Physical interventions like trans-cranial stimulation (TCS), targeting brain areas we deem most effective.

· Prosthetics: exoskeletons, tele-control, feedback from distant “extensions.” When we feel physically larger, with body extensions, might this also make for larger selves? A possibility I extrapolate in my novel Kiln People.

· Biological computing: … and intracellular? The memory capacity of chains of DNA is prodigious. Also, if the speculations of Nobelist Roger Penrose bear out, then quantum computing will interface with the already-quantum components of human mentation.

· Cyber-neuro links: extending what we can see, know, perceive, reach. Whether or not quantum connections happen, there will be cyborg links. Get used to it.

· Artificial Intelligence — in silicon but linked in synergy with us, resulting in human augmentation. Cyborgism extended to full immersion and union.

· Lifespan Extension… allowing more time to learn and grow.

· Genetically altering humanity.

Each of these is receiving attention in well-financed laboratories. All of them offer both alluring and scary scenarios for an era when we’ve started meddling with a squishy, nonlinear, almost infinitely complex wonder-of-nature — the human brain — with so many potential downside or upside possibilities that they are beyond counting, even by science fiction. Under these conditions, what methods of error-avoidance can possibly work, other than either repressive renunciation or transparent accountability? One or the other.

Major Category VI: Robotic-embodied childhood

Time and again, while compiling this list, I have raised one seldom-mentioned fact — that we know only one example of fully sapient technologically capable life in the universe. Approaches II (evolution), IV (emulation) and V (augmentation) all suggest following at least part of the path that led to that one success. To us.

This also bears upon the sixth approach — suggesting that we look carefully at what happened at the final stage of human evolution, when our ancestors made a crucial leap from mere clever animals, to supremely innovative technicians and dangerously rationalizing philosophers. During that definitive million years or so, human cranial capacity just about doubled. But that isn’t the only thing.

Human lifespans also doubled — possibly tripled — as did the length of dependent childhood. Increased lifespan allowed for the presence of grandparents who could both assist in child care and serve as knowledge repositories. But why the lengthening of childhood dependency? We evolved toward giving birth to what are, in effect, still-helpless fetuses, who suck and cry and do almost nothing else for an entire year. When it comes to effective intelligence, our infants are virtually tabula rasa.

The last thousand millennia show humans developing enough culture and technological prowess to keep these utterly dependent members of the tribe alive and learning, until they reach a marginally adult threshold of, say, twelve years, an age when most mammals our size are already declining into senescence. Later, that threshold became eighteen years. Nowadays, if you have kids in college, you know that adulthood can be deferred to thirty. It’s called neoteny: the extension of child-like qualities to ever-increasing spans.

What evolutionary need could possibly justify such an extended decade (or two, or more) of needy helplessness? Only our signature achievement — sapience. Human infants become smart by interacting — under watchful-guided care — with the physical world.

Might that aspect be crucial? The smart neural hardware we evolved and careful teaching by parents are only part of it. Indeed, the greater portion of programming experienced by a newly created Homo sapiens appears to come from batting at the world, crawling, walking, running, falling and so on. Hence, what if it turns out that we can make proto-intelligences via methods I through V… but their basic capabilities aren’t of any real use until they go out into the world and experience it?

Key to this approach would be the element of time. An extended, experience-rich childhood demands copious amounts of it. On the one hand, this may frustrate those eager transcendentalists who want to make instant deities out of silicon. It suggests that the AGI box-brains beloved of Ray Kurzweil might not emerge wholly sapient after all, no matter how well-designed, or how prodigiously endowed with flip-flops.

Instead, a key stage may be to perch those boxes atop little, child-like bodies, then foster them into human homes. Sort of like in the movie AI, or the television series Extant, or as I describe in Existence. Indeed, isn’t this outcome probable for simple commercial reasons, as every home with a child will come with robotic toys, then android nannies, then playmates… then brothers and sisters?

While this approach might be slower, it also offers the possibility of a soft landing for the Singularity. Because we’ve done this sort of thing before.

We have raised and taught generations of human beings — and yes, adoptees — who are tougher and smarter than us. And 99% of the time they don’t rise up proclaiming, “Death to all humans!” No, not even in their teenage years.

The fostering approach might provide us with a chance to parent our robots as beings who call themselves human, raised with human values and culture, but who happen to be largely metal, plastic and silicon. And sure, we’ll have to extend the circle of tolerance to include that kind, as we extended it to other sub-groups, before them. Only these humans will be able to breathe vacuum and turn themselves off for long space trips. They’ll wander the bottoms of the oceans and possibly fly, without vehicles. And our envy of all that will be enough. They won’t need to crush us.

This approach — to raise them physically and individually as human children — is the least studied or mentioned of the six general paths to AI… though it is the only one that can be shown to have led — maybe twenty billion times — to intelligence in the real world.

To be continued….See Part II

Voter ID Laws: Scam or Accountability?

During this (or any) electoral season, it pays to get off the left-right political axis – and examine particular political issues on their own merits. So let’s take a closer look at one of them… Voter ID laws. (Feel free to watch this essay given orally, on YouTube!)

To some, these laws deal with a problem — electoral fraud, when cheaters pretend to be someone else to cast illicit votes. Statistics show such voter fraud is extremely rare. (See “Voter Fraud is Rare, but Myth is Widespread.”) Still, when it happens it is a bad thing.

Opponents to this spate of laws – which have nearly all erupted in “red states” – denounce them as infringing on the rights of, not just poor people, but the ill-educated, or recent citizens, and the young, who often lack clear ID. In particular, this presents hardships for women, who may have failed to re-document after marriage or divorce. Some on the left call this another front in the “War on Women.”

Fundamentally, Voter ID laws are supported by red state white-older voters because – and let’s be frank – there is an element of truth in what they say. Voting is important. It is reasonable, over an extended period of time, to ratchet up accountability – and to ask that people prove who they are. That reasonableness lets these politicians propose these laws as a necessity – and implicitly, those who oppose them must have some agenda:

“If you don’t want voters to show ID, it’s because you want to cheat.” This is how you get a reversal: those who are blatantly cheating accuse others of cheating. It’s important to parse this issue.

To reiterate this point: there is nothing intrinsically wrong with gradually ratcheting up the degree to which we apply accountability to potential failure modes in society. This is what my book, The Transparent Society, is all about. We apply reciprocal accountability to each other. For example, we have poll watchers to make sure there is no cheating during elections.

(Is it also reasonable to demand accountability from the manufacturers of voting machines? Nearly all such companies are now controlled by men who have been high level Republican partisans, at one time or another. Should this be deemed… suspicious? Especially in those states (mostly red) where no paper audit trail is required?)

Is there a test that would nail down whether Voter ID laws are, as their proponents say, merely ratcheting up accountability – or whether they are, as their opponents say, blatant, flagrant attempts to cheat and steal votes away from poor people, minorities, young folks, and women?

Is there such a simple and clear test?

There is.

== The crucial metric of hypocrisy: compliance assistance ==

According to the conservative thinkers and agendas going back to Buckley and Goldwater, regulations that are onerously placed on business should be accompanied by assistance so those businesses can meet and comply with these new regulations. This is standard conservative dogma.

Indeed, Democrats agree! Almost always, whenever new and onerous regulations are applied to business, there are allocations of money to set up offices, call-lines, visiting experts and grace periods with the aim of helping corporations – and the rich – comply with the new regulations. It’s called compliance assistance.

You can see how this applies to the topic at-hand. The fundamental test here is this: In any of the red states that have passed new Voter ID laws, or other laws that restrict the ability of poor people, young people, women and so on to exercise their franchise, were any significant funds appropriated or allocated for compliance assistance?

Were any new offices, call-lines, visiting experts and grace periods set up to help them comply? “Here is an onerous new burden upon the poor, women and so on — but we are going to show our commitment by allocating money to assist voters with these new regulations.” A serious effort to go out into the communities and help the poor, minorities, recent immigrants, women and young people obtain the identification they need to exercise their sovereign right to vote.

Note! This type of outreach would not just help them with voting; it would likely help them STOP being poor, by putting them on the path to helping themselves. This should be what conservatives are for.

Instead these efforts are sabotaged, deliberately and relentlessly. Not one red cent has been allocated for compliance assistance in any of the red states that have passed these new voter ID laws. Not one red cent.

== Dealing with vampires: always seek the silver bullet ==

There you have it, you liberals out there. Don’t make this a matter of goody-goody scolding, or of denying a long-term need to ratchet up accountability. That makes it look like you’re in favor of cheating – or it gives fools that excuse.

Make it a matter of hypocrisy. Of lying. The blatant lack of sincere compliance assistance provides clear-cut and decisive proof that these are attempts to steal elections – just like gerrymandering.

(Indeed, gerrymandering is being erased in one blue state after another, as those citizens rebel against unfair districting, often even overcoming the objections of Democratic politicians. These rebellions have taken place in California, Washington, Oregon, and – we can hope in a few weeks — in New York. Meanwhile not one red state has seen a rebellion of its citizens against the blatant theft and cheating called gerrymandering. Just as you’ll see no rebellions against the blatant theft and cheating called Voter ID laws. This is a cultural matter. In some parts of the country – it seems – cheating is just fine, “so long as it is my side doing it.”)

Your silver bullet. This is what you use. The fact of zero compliance assistance exposes the hypocrisy here.

That is what makes the difference between people who say, “We need to have more accountability in the voter rolls,” and blatant, lying, hypocritical thieves, for whom no excuse or shelter can lift the title of traitor.

Make this clear to your uncles and cousins. If, when they hear about this, they are still supporters of this horrid, hypocritical robbery, then the tar sticks to them as well.

2 Comments

Filed under science

Is Technology offering Transparency…or spying on us?

A look at how technology enables greater transparency…but not always both ways:

Google Goggles… or Project Glass… is finally announced.  See the official preview… and an amusing satire. These futuristic goggles would project information directly into your field of vision, offering updates on the time, weather, map directions, road closures, upcoming appointments, names of colleagues, buildings, etc. You will be able to leave memos to yourself, send email to friends, read restaurant reviews and take/share photos or video (but can you do all this while walking?). Of course this is just scratching the surface (so to speak). I portray this technology taken thirty years into the future (including solutions to the “walking problem”) in Existence – so stay tuned in just three months for a glimpse of where it will all lead. Or see it presaged, back in ‘89, in Earth.

Ah, but is two-way vision always a good thing? At the Consumer Electronics Show (CES), Samsung unveiled a new Smart TV demonstrating how the seamless integration of sensors, built-in cameras and microphones enables “smart” features such as gesture control, voice commands and all kinds of interactivity and connectivity. But this Smart TV can also turn into a spy within your home, reporting without your knowledge. There is no indication as to whether the camera and audio mics are on. You can point the camera toward the ceiling… but there is no easy way to physically disconnect the mic to ensure that it is not picking up your voice when you don’t intend it to. Will your Smart TV soon be spying on you? Onward Orwell!

Navizon’s Indoor Triangulation System allows anyone carrying a WiFi-equipped smartphone, iPad or notebook computer to be tracked (inside as well as outdoors) without their knowledge or consent — and with no option to opt out. This Buddy Radar enables locating shoppers in a mall, doctors in a hospital, clients in a convention hall…or lost children in a crowd. If this bothers you — then disable WiFi on your devices when you’re not using it. Not a convenient solution.
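Systems like Navizon’s exploit the fact that a phone’s WiFi radio constantly announces its presence: with signal-strength readings at three or more access points at known spots, position falls out of simple geometry. Here is a minimal sketch of the idea – the path-loss constants and access-point coordinates are illustrative assumptions, not Navizon’s actual method:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
    """Estimate range (meters) from received signal strength, using the
    log-distance path-loss model: RSSI = TxPower - 10*n*log10(d).
    The reference power and exponent are assumed indoor values."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a device from three access-point positions and estimated
    ranges, by subtracting the circle equations pairwise to get two
    linear equations in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d
    return (c * e - b * f) / det, (a * f - c * d) / det

# Three access points in a mall; ranges inferred from a phone's beacons.
x, y = trilaterate((0, 0), (10, 0), (0, 10),
                   5.0, math.sqrt(65), math.sqrt(45))
print(f"device near ({x:.1f}, {y:.1f})")  # -> device near (3.0, 4.0)
```

Real indoor RSSI is noisy, so deployed systems fuse many readings over time – but the point stands that no app, consent or opt-in is required; the phone’s radio does all the work.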

And there’s corporate surveillance: Dunkin’ Donuts installed an employee monitoring system that watches staff with video cameras and tracks every punch of the cash register. The result: a 13% drop in employee theft.

Tim Berners-Lee, inventor of the web, tells internet users they should demand their personal data from giants such as Facebook and Google:  “One of the issues of social networking silos is that they have the data and I don’t … There are no programs that I can run on my computer which allow me to use all the data in each of the social networking systems that I use plus all the data in my calendar plus in my running map site, plus the data in my little fitness gadget and so on to really provide an excellent support to me.”

I must agree.  The really frustrating thing is not that elites will know about me.  That’s inevitable.  But what is dangerous as hell is their reluctance to let us have full access to our own information… or reciprocal information about them.

==Transparency in Science==

Scientists are not immune to bias, and they should be transparent about the sources of their funding. The director of the US National Institutes of Health called for a compulsory online registry of researchers’ interests as a condition of federal funding: “The public may not always understand the intricacies of rigorous science, but most individuals quickly grasp the concept of bias.” Nothing came of this proposal. Each university should have a publicly searchable database of academics’ external sources of money. And that’s fine, so far… but at what point does this simply become a way to bully scientists, making them look over their shoulders with every step?

If we scientists do have to set this example of transparent accountability, then can we at least have back a little respect?  And start seeing Wall Street follow suit?

 == Dire news on the medical front==

Up to a third of what the U.S. spends on medical care may be wasted, in large part because of over-testing and over-treatment. Now a major panel has cited nine procedures that doctors should resort to far less often. Fascinating article.

One of the most highly-valued contributors to this blog’s comment community, an emergency room physician, reports, “We stand on the brink of the post-antibiotic era.” One of the worst antibiotic-resistant staph strains, called cMRSA, which can penetrate even healthy, intact skin, has just learned to defy the last defensive drug that physicians could use without fearing major consequences to children or the allergy-prone.

This is not a good time to back off from science.  In the 1950s, the most popular man in the United States was Jonas Salk.  Today, most Americans have never heard of him, and nut-jobs on both the left and right rail against vaccination and the Medical Establishment.  It seems we get what we deserve.

== Science & Tech Potpourri ==

Experiments are finally moving ahead with solar updraft power towers… of a kind that I mentioned long ago in Earth. These systems use a very large surrounding “greenhouse” – many square km of clear plastic or glass – that heats air to flow up a tall chimney while driving generators.  Efficiency is much lower than solar thermal, but start-up simplicity and load balancing are attractive, as is mixed use of the land below the sheeting.
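For a rough sense of the scale involved, a tower’s output can be estimated as irradiance × collector area × the product of collector, chimney and turbine efficiencies. Every figure below is an illustrative assumption for a hypothetical plant, not data from any real project:

```python
# Back-of-envelope output of a solar updraft tower. Overall efficiency
# is the product of three stages, which is why it comes out so low.

G = 1000.0            # solar irradiance, W/m^2 (clear midday sun)
A_collector = 4.0e6   # greenhouse canopy area, m^2 (~4 square km)
eta_collector = 0.5   # fraction of sunlight that ends up heating the air
eta_chimney = 0.01    # buoyancy-to-kinetic conversion (rises with tower height)
eta_turbine = 0.8     # turbine/generator efficiency

power_W = G * A_collector * eta_collector * eta_chimney * eta_turbine
print(f"Estimated output: {power_W / 1e6:.1f} MW")  # -> Estimated output: 16.0 MW
```

An overall efficiency under one percent is exactly why these plants need square kilometers of cheap glazing – and why mixed use of the shaded land underneath matters so much to the economics.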

==On the Lighter side==

Examples of my Uplift meme used in modern humor.

Terry Bisson’s classic, hilarious little story about why we may not have been contacted. “They’re Made of Meat” has been produced for a lovely, ironic radio show.

The Purdue Society of Professional Engineers team smashed its own world record for largest Rube Goldberg machine with a 300-step behemoth that flawlessly accomplished the simple task of blowing up and popping a balloon.

== And finally…  A Sober Thought on Pop Culture ==

Stooge alert!  (woop, woop, woop!)  Like most American males, and all American kids (something happens to women, I guess) I love the Three Stooges.  I haven’t seen the new movie.  I hope it’s good, though even if it’s great I expect my wife to get her year’s quota of eye-rolling exercise!

Now, let me stand up for this in philosophical terms. The best of the old scenes weren’t the plain hitting. That was always lame. No, it was those stunning metaphysical contemplations of the inherent, hopeless irony of existence. In other words… art! In that it connects the viewer directly to life’s inherent poignancy without words or persuasion.

Take some of the most perplexingly ironic-tragic stooge situational dilemmas, like the boys using Curly as a battering ram to punch through a brick wall, then trying to pry him back out with a crowbar. Oh, the expressions on his face, as the crowbar hook moved back and forth in front of him, preparing to strike like a cobra… or like implacable fate. He is hypnotized, transfixed, the way all of us have been, at various train-wreck moments of “real” life.

Nothing better distilled for me the inherent unfairness of the universe… or the absolute impossibility of human beings thinking our way out of this puzzling quandary called life – the game that you simply cannot win. And yet the boys never stopped trying. Persevering. Coming up with one “hey, let’s try this!” hopeless gambit after another. And sometimes something brilliantly stupid – or stupidly brilliant – actually worked! And you came away thinking… maybe I should keep trying, too.

I confess, that philosophical depth may just be rationalizing away what’s really no more than Neanderthal immaturity.  (See the “laughter scene” in the amazing paleolithic film QUEST FOR FIRE.) So? Nevertheless, I made my Tymbrimi and Tytlal characters big stooge fans, and for reasons that they found wholly adequate!

Ever see the Stooge flick in which they made fun of Hitler, a full year before Charlie Chaplin started THE GREAT DICTATOR?  Oh, they had guts too.

Final note.  It is a tragedy that we never had a four stooges film, with brothers Curly Howard and Moe Howard sharing the screen with both Larry Fine and the other brother, Shemp Howard.  I consider Shemp to have been a comic genius of the first order and always enjoy him immensely. I hate the fact that he is excluded from Stooge Festivals on TV. History and fans are unkind to him because we compare him to Curly, who was a force of nature – akin to gravity or electromagnetism.

Oh, never forget that the greatest city in the world — fittingly the home of Wall Street, where stooge-like intelligence and antics are the norm — was pre-named, as if precognitively, for one of Curly’s most perceptive lines. Nyuck Nyuck.

Whether the new film is a fitting tribute or (most likely) a travesty, still carry the deeper lesson with you, every day. Persevere you knuckleheads, numbskulls and dollfaces. A civilization that can produce such art should be able to achieve anything.

Leave a comment

Filed under science, society

Disturbing Trends in the News

Worrisome. The War on Cameras in Reason details police threats, phone confiscations, detentions, felony charges and convictions of citizens for the ‘crime’ of recording officers on duty. Yet laws are vague and vary greatly from state to state. The central issue is whether taping police without their consent violates wiretapping statutes, and whether police have a reasonable expectation of privacy in public encounters with citizens—or whether they are to be held accountable for their actions on the job.

We’ve discussed this here before.  Yes, a recent Supreme Court case appears to have settled this matter, in principle. The imbalance of power between individual and state is so huge that the citizen must — must — retain the one thing that equalizes the playing field somewhat.  The truth. In practice, this will still be a hard fight.  I tend to worry much less about restricting what the government and other elites can see (how you gonna stop em?) than about preserving our right to look back!

But can we look?  Really?  We have the illusion of choice…but six media giants now control a staggering 90% of what we read, watch or listen to. These companies are: CBS, Viacom, Disney, GE, NewsCorp (which includes Fox and the Wall Street Journal) and Time Warner (which includes CNN, HBO, Time and Warner Bros). The largest owner of radio stations in the U.S., Clear Channel, operates 1,200 stations, airing shows by the likes of Limbaugh and Hannity, with programs syndicated to more than 5,000 stations. And who owns Clear Channel? Bain Capital purchased Clear Channel shortly before Mitt Romney’s 2008 presidential bid. One clear reason why conservative talk show hosts support Mitt? And weren’t we supposed to be more independent and broad in our access to information, by now?

Well, at least now we know who to blame for what’s happened to the History Channel.

A horrifying brain drain. “At some Ivy League schools last year, up to half of the graduates went into finance or consulting, a move that could have a profound effect on the economy in the years to come.”  Crum, any civilization that does this to itself deserves what will happen next. The very brightest, who do NOT fall for this trap will simply leave the country. A genuine “brain drain.” Leaving the finance twits in charge of a society that explores nothing, invents nothing, produces nothing except paper short-term-parasitic profits. Ever hear of the Golgafrinchan B-Ark? Think about it.

Self censorship? Social media giant Twitter announced it would block messages on a country-by-country basis: “to withhold content from users in a specific country while making it available to the rest of the world.” This policy will allow Twitter to grow internationally into countries with “different ideas about the contours of freedom of expression,” but won’t affect China or Iran, where Twitter is already completely blocked.

An unprecedented loss of Arctic sea ice over the last few decades suggests we may soon see an ice-free Arctic summer. But then, as I have linked many times before, the US Navy has long known this and is making major plans. So are the Russians. Maybe THAT will get through to your crazy uncle.

Digital thievery is rampant! Have a look at the precautions that US corporate officers, scientists and government officials have started taking before getting on a plane to China.  “If a company has significant intellectual property that the Chinese and Russians are interested in, and you go over there with mobile devices, your devices will get penetrated,” said Joel F. Brenner, former counterintelligence specialist. We’re not enemies!  But things are passing through a phase where it just makes sense to be careful.  You’ll actually get more respect when they know you’re smart enough to protect yourself and your endeavors. Seriously, read the description of what a cautious businessman does to stay digitally clean and not bring home spyware.

The media does seem to have a polarizing effect…Would ANY new data make you change your opinions on hot button issues such as the death penalty, abortion, same sex marriage, legalization of marijuana? Or God? Or the fact that US taxes are near a 100 year low? Any data at all? Read about opinions beyond the reach of data.

Cadmium, a carcinogen and neurotoxin, may be as hazardous to children as lead. Current regulations are based on threats to adults; recent studies show possible links with learning disabilities and retardation in children.

== Better Accountability through Visualizing our World==

Shining a light into the darkness: I knew I liked the guy, despite resenting the soul-sold handsomeness… The Satellite Sentinel Project (SSP), begun by George Clooney, is an attempt to use technology to deter civil war and atrocities against citizens in Sudan. SSP combines satellite images and field reports with Google Maps to track movements of troops and displaced people, bombed villages, mass graves and other evidence of large-scale violence, providing public access to updated information on these long-suffering areas.

An ever-reddening glow: NASA video depicts global temperature data over 130 years

Speaking of heating… this map shows hot spots for terrorist attacks within the U.S.–a third of attacks occur in urban areas.

How is water used worldwide? Researchers map a global water footprint detailing water usage. 92% goes to growing food, 40% toward the export of products.

Satellite data reveal the extent of China’s air pollution problem–finding dangerous levels of fine particulate matter (PM2.5, particles less than 2.5 microns across).

Accountability on a local basis: the energy usage of New York’s buildings, visualized.

When 30,000 square feet isn’t enough… aerial views of mega-mansions. Even as the size of the average American house shrinks after peaking during the boom, some of the wealthy are building gigantic homes of 20,000 square feet and more.

==On the Technology Front==

 Virtual devices will read your hand motions and gestures and provide what you want—meaning technology will appear even more like…magic. If you hold up your hand, a map or keypad will appear, for you to retrieve or send data. Sensors on the ceiling will monitor your gestures, and respond.  I portray this in Existence…

CleanSpace One, an $11 million “Janitor Satellite” under development in Switzerland, would be the first of a series of craft launched to clear orbital debris, grabbing items with its robotic arm.  Read a better method in chapter one of my next novel.

Patrick Tucker suggests that Artificial Intelligence will be America’s next Big Thing, directing traffic, managing electrical grids and resources, aiding doctors, lawyers and police, analyzing satellite data, optimizing manufacturing and design, developing new medicines and cures, leading to a third Industrial Revolution. Yet, the roboticization of the factory floor will have human costs, as well. See Making it in America in The Atlantic.

Researchers make iron invisible to X-rays, using quantum interference.

==Miscellaneous Fiction/Film==

I’m quoted in this article on Prophets of Science Fiction–and the interplay between science and Sci Fi.

A few sci-fi-ish films from this year’s Sundance Film Festival.

Eeeek!  A “re-imagining” remake of the worst sci fi pile of drivel ever made… Space 1999!

Fascinating perspectives from Jonathan Dotse – an IT student, blogger, and science fiction writer based in Accra, Ghana. He discusses the future of African science fiction.

How does Science Fiction influence Public perception of science topics such as Genetic engineering, cloning, nanotechnology? See an article in Biology in Science Fiction.

Glimpse this new Nigerian sci fi film! Kajola.

Seriously? Abraham Lincoln: Vampire Hunter. In this movie, that axe isn’t just for chopping down trees… and it looks as if it just might (unbelievably) be worth checking out!

Russian speakers, see a translation of my essay about The Uplift War.

Enough of a coolstuff dump for ya?  Well… the year has just begun…

Leave a comment

Filed under media, space, technology