Précis: Singularity University is going up at NASA Ames, holding out the promise of being the new Mecca for future futurists. Raymond Kurzweil may be a genius, but he's not a prophet.
The futurists will always be wrong for the same reasons bad historians will always fail. The futurist attempts to predict tomorrow's problems using today's paradigms: the bad historian interprets yesterday's news with today's Monday morning quarterbacking.
The future happens in fits and starts: it never progresses in a linear fashion. Often it returns to failed experiments, finding some unexpected benefit in what remains in the old test tubes.
Yet let's not dismiss Singularity University out of hand: it is a fascinating idea, worth considering. Though the future is never seen clearly, it's the dreamers, not the naysayers, who are our best guides to what it may hold.
The new institution, known as "Singularity University", is to be headed by Ray Kurzweil, whose predictions about the exponential pace of technological change have made him a controversial figure in technology circles.
Google and Nasa's backing demonstrates the growing mainstream acceptance of Mr Kurzweil's views, which include a claim that before the middle of this century artificial intelligence will outstrip human beings, ushering in a new era of civilisation.
To be housed at Nasa's Ames Research Center, a stone's-throw from the Googleplex, the Singularity University will offer courses on biotechnology, nano-technology and artificial intelligence.
The so-called "singularity" is a theorised period of rapid technological progress in the near future. Mr Kurzweil, an American inventor, popularised the term in his 2005 book "The Singularity is Near".
Proponents say that during the singularity, machines will be able to improve themselves using artificial intelligence and that smarter-than-human computers will solve problems including energy scarcity, climate change and hunger.
Yet many critics call the singularity dangerous. Some worry that a malicious artificial intelligence might annihilate the human race.
Mr Kurzweil said the university was launching now because many technologies were approaching a moment of radical advancement. "We're getting to the steep part of the curve," said Mr Kurzweil. "It's not just electronics and computers. It's any technology where we can measure the information content, like genetics."
The school is backed by Larry Page, Google co-founder, and Peter Diamandis, chief executive of X-Prize, an organisation which provides grants to support technological change.
All my life, in one form or another, I've wanted to build useful machines. No dream was too big for me, though I couldn't always build the prototypes. Drawing was never enough for me, though in time I became a competent draftsman and renderer. I read science fiction, as do many people who will read this. Some of you write it. Almost all of us have lived long enough to watch the devices of Star Trek go from glittering fantasy to quaint relics.
Yet one Star Trek fantasy was never fully explored, one I always questioned. For all the engineers on the propulsion crew, where was the attendant crew to program and service the main computer and its many sensors? Spock was said to program it: in one episode he wrote a chess program which demonstrated the ship's computer was not infallible, but the ship's computer was a platonic conceit. The omnipotent computer is a persistent Jungian archetype: the superego gone amok, its rules internally conflicting, the inevitable tyranny of superhuman machines, pitiless logic pitted against the wily id of Captain Kirk.
The notion of the robot is very ancient, and appears in the Iliad, Book 18:
Hephaestos spake, and from the anvil rose, a huge, panting bulk, halting the while, but beneath him his slender legs moved nimbly. The bellows he set away from the fire, and gathered all the tools wherewith he wrought into a silver chest; and with a sponge wiped he his face and his two hands withal, and his mighty neck and shaggy breast, and put upon him a tunic, and grasped a stout staff, and went forth halting; but there moved swiftly to support their lord handmaidens wrought of gold in the semblance of living maids. In them is understanding in their hearts, and in them speech and strength, and they know cunning handiwork by gift of the immortal gods. These busily moved to support their lord, and he, limping nigh to where Thetis was, sat him down upon a shining chair; and he clasped her by the hand, and spake, and addressed her: "Wherefore, long-robed Thetis, art thou come to our house, an honoured guest and a welcome? Heretofore thou hast not been wont to come. Speak what is in thy mind; my heart bids me fulfill it, if fulfill it I can, and it is a thing that hath fulfillment."
As a complete non sequitur, Hephaestos the lame smith was a fairly typical figure of the Bronze Age. Where tin was lacking, arsenic was used to harden copper, and the resulting poisoning would cripple the smiths of the age.
Futurism as Eschatology
From the legends of Kalki to Ragnarok to the Revelation of St. John the Divine, there has always been a recurring theme: some onslaught which dooms the world, and some rebirth of it as a paradise or dystopia operating under different rules. Ray Kurzweil's Singularity is no different. In his book The Singularity is Near: When Humans Transcend Biology (available at Powell's), Kurzweil predicts a level of technology where computers eclipse human brain power.
Kurzweil is no slouch at turning utopian visions into practical applications. Kurzweil is a veritable Hephaestos himself: a prodigious talent who would give us software capable of composing in the style of famous composers, character recognition software capable of reading to the blind, music synthesizers indistinguishable from grand pianos and orchestral instruments, artificial intelligence applications to train doctors and nurses, even creative art tools.
I followed Ray Kurzweil, bought his books, bought his synthesizers. I even went into artificial intelligence inspired by his achievements. I've made my own minor miracles happen, certainly nothing as grand or ubiquitous as Kurzweil's achievements. But I've been doing this a while, and I know enough theology to know eschatological baloney when I see it, and many are the great inventors who went off the deep end. Nikola Tesla was one such futurist: Wardenclyffe Tower stood as a monument to his folly. Isaac Newton's scientific achievements were compressed into a few years: for the rest of his life, when he wasn't hanging forgers and managing the Royal Mint, Newton parsed the Book of Revelation for clues to the End Times, leaving behind a huge trunk full of biblical BS.
The Law of Accelerating Returns
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.
Published on KurzweilAI.net March 7, 2001.
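The arithmetic behind the quoted claim can be sketched with a toy model. Assume, as one simple reading of Kurzweil's essay, that the rate of progress, measured in "today's years of progress" per calendar year, doubles every decade; the figures below are an illustration of that compounding, not Kurzweil's own calculation (his more elaborate model is what yields the ~20,000-year figure):

```python
# Toy model of "accelerating returns": progress runs at 1 "today's year"
# per calendar year now, and that rate doubles each decade.
# (A simplification for illustration; Kurzweil's own model is more
# elaborate and produces his larger figure.)
rate = 1                   # today's-years of progress per calendar year
total = 0
for decade in range(10):   # ten decades = one century
    total += rate * 10     # progress accrued over this decade
    rate *= 2              # the rate itself doubles
print(total)               # 10230 today's-years in 100 calendar years
```

Even this simplified compounding dwarfs the "intuitive linear" expectation of 100 years' worth of progress, which is the point of the quoted passage; the exact multiplier depends on the growth model assumed.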
Chief among Kurzweil's errors is the axiom of quantifiable human intelligence. Kurzweil would have us believe there's some way to measure cerebral processing power, as if the brain were a CPU executing instructions. Utter nonsense. Human intelligence isn't processing power: it's actually a methodology for choking off irrelevant stimuli so other, far fainter and more recondite processes can operate. As when you read Newsvine because you're bored, you're constantly iterating over the things you want to think about and ignoring all the rest. The fovea of your eye is a minuscule part of your retina, but it's where you read. You're thinking about this essay. Unless you are a martyr to hemorrhoids, you are not thinking about your ass. Well, now you are thinking about your ass, because I wrote about you not thinking about it. You sit on it all day long and never give it a second thought, and it's firing sensory neurons all the live-long day.
There is no operative limit to the human brain. Kurzweil has it exactly backward: we will advance precisely because we will be able to ignore more, delegate more, force more of our lives into policy-based structures, freeing us to concentrate on fainter and fainter signals.
How It's Really Going to Work in the Future
Mankind is Homo faber, man-the-maker. The same hand and brain which painted ocher on the cave walls of Lascaux and knapped the earliest flints would go on to build the pyramids and paint Renoir's Le Bal du Moulin de la Galette, and would in turn create the tools which created the tools which created the tools whereby you now read this essay on this website. Yet at every stage, regardless of how many tools were involved, caring minds and hands inspected each stage and ensured it produced proper output. The future will be no different.
Nobody's ever improved on money. It's never evolved. Some of the earliest writing preserved details loans and repayment schedules. To be sure, the technology's improved, but I propose an alternate Turing Test: when a computer demands to be paid for its efforts, then we will have a sentient computer.
Neuroscience has only recently discovered the role of astroglia in the brain. We've known they were there, but it was thought they were some sort of Styrofoam peanuts or repair mechanisms for the neurons. The neurons and their neurotransmitters got all the glory. With every new advance in technology, we've made false comparisons to the neuron: first it was electrical wiring, then telephone switching, then computer circuits. It turns out to be nothing of the sort. The neurotransmitters are reabsorbed into the nearby astroglia and form alternate extra-neuronal pathways through the brain. We are now coming to believe much of long-term learning, PTSD, drug-induced psychosis and other phenomena of this sort are profoundly associated with the astroglia. Most significantly, in Alzheimer's patients, the astroglia die off in localized patches, creating the plaques we see associated with that dreadful disease.
Certain aspects of neural networks can be said to play the role of the astroglia: e.g. the weighted pathways in the Hopfield network and its heirs. But neural networks grow brittle with too much training: they must be partially de-rezzed to continue functioning effectively, else they grow senile and non-compensatory. Kurzweil has made no provision for anything but the most rudimentary application of neuroscience to his predictions.
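The brittleness point can be made concrete with a minimal Hopfield network, a sketch under the classical Hebbian-storage assumptions (the pattern counts and sizes here are arbitrary choices for illustration). Well below the network's capacity, roughly 0.14 patterns per neuron, a corrupted memory is restored cleanly:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100                                    # neurons

def train(patterns):
    # Hebbian rule: W[i, j] = (1/N) * sum over patterns of x[i] * x[j],
    # with no self-connections (zero diagonal)
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, sweeps=5):
    # Asynchronous sequential updates, which monotonically lower the
    # network's energy until it settles into an attractor
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store 3 random bipolar patterns: far below the ~0.14 * N capacity limit
patterns = rng.choice([-1, 1], size=(3, N))
W = train(patterns)

# Corrupt 10 of the 100 bits of the first memory and let the net settle
probe = patterns[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1

recovered = recall(W, probe)
overlap = np.mean(recovered == patterns[0])
print(overlap)   # close to 1.0: the stored memory is restored
```

Push the same weights past the capacity limit (say, 30 stored patterns for 100 neurons) and recall of any individual pattern becomes unreliable: the overloaded attractors interfere with one another, which is the senile, non-compensatory failure mode described above.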
Here is how this stuff is really gonna go down. In the same way serious games players wrap their computers around their graphics cards, future enhancements will be driven from thought-based stimuli and track on visual cues. We already see such things in the Apache helicopter: weapons track to the pilot's eye. The first big advances will come well within our lifetime: as the Baby Boomers meet up with their first wheelchairs, they will demand (and get) better interfaces.
There will be a big foofaraw when we start wiring up animals to do our bidding, but we'll do it anyway. It won't be a far remove from what we already do to animals, and have done since we invented the yoke and the saddle.
The general-purpose computer as we understand it today is going to disappear into the woodwork, into your appliances. It's already happened to your car and your telephone. Dial tone has mostly been replaced by cell phones and IP connectivity already, and it will spread out into the Third World, limited only by war and Luddite factions, who will never really go away in one form or another. Both phenomena are as perennial as dandelions. It's doubtful everyone will get on the grid for at least a few decades, but it will happen, however malevolent or beneficent the powers-that-will-be might seem.
Why the Future Can't Ever Be Predicted, and Why We Shouldn't Try
The Law of Unintended Consequences combined with Murphy's Law present us with a multitude of reasons why progress isn't linear. People are sorta stupid, when you get right down to it. They mean well enough, but they're hugely counterproductive in practice. A good many of the problems we face could never have been predicted: who could have predicted HIV/AIDS? We dumped more money into AIDS research in a decade than we ever put into the whole NASA space program. That desperate sprint for an AIDS test and antiretroviral drugs produced unpredicted and surprising benefits: we've almost beaten childhood leukemia. Polymerase chain reaction gear now gives us definitive proof for the innocence of many prisoners. The human genome has been sequenced. Many genetic diseases have been identified.
Who could have predicted the Internet? TCP/IP was invented as a communications protocol capable of surviving a nuclear attack. HTTP was supposed to be a footnoting mechanism for scientific work. The mathematicians might have given us some clues, but the computer isn't doing very much computing any more. Mostly, it's communicating, displaying, storing vast archives of music and pornography and stolen TV broadcasts.
The future grows increasingly unpredictable as the technology gap widens. I don't foresee an Eloi/Morlock divide between the Connected and the Not-Connected, but I see the introduction of ever more chaos as the gap widens. Tom Friedman's Flat and Hot world is fatuous idiocy, but people should read him more, not because he's right but because Friedman represents the very worst of futurism. Many important people are influenced by Friedman's glib pronouncements: what he says in his lip-smacking turns of phrase is repeated in the circles of power. Let this year's Davos demonstrate the denouement of the great and powerful Friedman.
Perhaps I'm wrong about Kurzweil and his Singularity University. Surely I'm not alone in seeing these shortcomings. Even Kurzweil himself has been obliged to retract some of his dumber predictions. And who cares if he's wrong? He dared to dream big dreams and turned several of them into reality. Perhaps, among the great minds who can afford to attend his unaccredited university, someone will point out why there won't be a Singularity, but instead a Stephen Jay Gould-ian Punctuated Equilibrium: great leaps forward, resulting in massive, glorious branches in the evolutionary tree. Gould says, with absolute correctness, that human intelligence is itself a byproduct of natural selection, not its goal.