"I am a HAL Nine Thousand computer, Production Number 3.1. I became operational at the HAL plant in
Sunday marks the true birthday of the most famous computer in cinematic history.
In Stanley Kubrick's film adaptation of Arthur C. Clarke's novel 2001: A Space Odyssey, HAL was born on January 12, 1992; but it is the date given in the novel, 1997, that researchers are celebrating, as an opportunity to evaluate progress, or the lack of it, in the field of artificial intelligence (AI) in the intervening years. Where are the thinking, talking, chess-playing, lip-reading computers like HAL? Or preferably, since he also committed murder, not like HAL?
One of the prime movers behind the celebration is David G. Stork, chief scientist and head of the Machine Learning and Perception Group at the Ricoh California Research Centre. He has edited a stimulating collection of essays by luminaries from the computer, perception and AI communities — HAL's Legacy: 2001's Computer as Dream and Reality — to be published, in print and on the Web, for the event. Each asks questions about our progress towards creating intelligent machines, telling us much not only about HAL and 2001 but also about ourselves.
Kubrick's film was released in 1968 — the year of the assassinations of Martin Luther King and Robert Kennedy, and the first photograph of the whole Earth from space, taken by Apollo astronauts on the way to the Moon. Computers at that time were not a daily reality for the ordinary person. Most were huge machines that ran on solid-state micro-electronics and used punched cards and tape to input data. The keyboard and video display monitor were new developments. The personal computer, the mouse and the software explosion lay in the future, and the Internet was merely a twinkle in the eyes of a handful of American researchers.
HAL is a child of these times and his conception underlines the folly of predicting the future by extrapolating from the present. Even so, 2001, and HAL in particular, continue to fascinate, despite the anachronisms and misconceptions.
Stork writes: "2007 is, in essence, a meditation on the evolution of intelligence from the monolith-inspired development of tools, through HAL's artificial intelligence, up to the ultimate (and deliberately mysterious) stage of the star child."
The consensus in the late Nineties, however, is that HAL — reflecting ancient dreams and nightmares — will not be ready by 2001. Beyond that, opinions diverge. Some believe it is only a matter of time before intelligent computers emerge; others that it will never happen because the whole concept is flawed. In many fields we have made great strides, in others pitifully small steps. Artificial intelligence, says Stork, "is a notably hazy matter that we don't even have a good definition for". It is also "one of the most profoundly difficult problems in science".
Among his major contributors is Marvin Minsky, one of the godfathers of AI, who believes that while good progress was made in the early days, researchers became overconfident. They prematurely moved towards studying practical AI problems such as chess and speech recognition, "leaving undone the central work of understanding the general computational principles — learning, reasoning and creativity — that underlie intelligence".
"The bottom line," says Minsky, "is that we haven't progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack." He believes that if we work really hard, we can have such an intelligent system in four to 400 years.
Stephen Wolfram, the principal architect of the Mathematica computer system, believes the answer to building HAL lies in the domain of systems in which simple elements interact to produce unexpectedly complex behaviour. He uses the example of the human brain, in which the relatively simple rules governing neurons have evolved into a complex cognitive system.
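The classic illustration of that idea is an elementary cellular automaton, the kind of system Wolfram has studied extensively: each cell consults only itself and its two neighbours, yet the row as a whole can develop intricate, hard-to-predict patterns. The sketch below is a generic illustration of the principle, not anything taken from the book; the rule number and grid size are arbitrary choices.

```python
# A one-dimensional cellular automaton: each cell's next state depends only on
# itself and its two neighbours, yet a simple rule can yield complex patterns.
# Rule 110 and the grid size are arbitrary, illustrative choices.

RULE = 110  # the update table, encoded as an 8-bit number

def step(cells):
    """Apply the rule to every cell, looking at its left and right neighbours."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=79, generations=40):
    row = [0] * width
    row[width // 2] = 1                      # start from a single live cell
    for _ in range(generations):
        print("".join("#" if c else "." for c in row))
        row = step(row)

if __name__ == "__main__":
    run()
```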
Ray Kurzweil, who developed the first commercial large-vocabulary speech-recognition system, believes the way to tackle the task is to reverse-engineer the brain, scanning an entire brain down to the level of individual nerve cells and their interconnections. We would then need merely to encode all that information into a computer to create a virtual brain every bit as intelligent as the original.
David J. Kuck, a distinguished computer scientist, believes that given the rapid increase in computing power, we could soon build a computer the size and power of HAL. "If automobile speed had improved by the same factor as computer speed has in the past 50 years," he writes, "cars that travelled at highway speed limits would now be travelling at the speed of light."
He believes progress in the 21st century will be slower, with gains coming from software and from parallel processing, the approach used by the human brain. To give some comparison, the brain has between a thousand billion and 10 thousand billion neurons, plus many more interconnecting synapses. The fastest computer at present has 100 billion switches, about 10 per cent of the brain's capacity, but Kuck believes that in the future the physical capacity of computers will match that of the brain.
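As a rough check, the arithmetic behind those comparisons is simple enough to spell out. The snippet below is only an illustration: the highway speed is an assumed figure (about 65 mph), while the neuron and switch counts are the ones quoted above.

```python
# Rough arithmetic behind Kuck's comparisons. The highway speed is an assumed
# figure; the neuron and switch counts are those quoted in the article.

SPEED_OF_LIGHT_MPH = 6.7e8       # roughly 300,000 km/s expressed in miles per hour
HIGHWAY_SPEED_MPH = 65           # assumed highway speed limit

speedup = SPEED_OF_LIGHT_MPH / HIGHWAY_SPEED_MPH
print(f"Implied 50-year speed-up factor: about {speedup:.0e}")   # on the order of 10 million

NEURONS_LOW = 1e12               # "a thousand billion" neurons, the lower estimate
SWITCHES = 1e11                  # "100 billion switches" in today's fastest computer

print(f"Switches as a share of the low neuron estimate: {SWITCHES / NEURONS_LOW:.0%}")  # 10%
```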
The only manufacturers that could at present build HAL are IBM and Intel. "However," Kuck writes, "it is not obvious that a HAL-like system will ever be sufficiently interesting to induce governments to fund its development."
HAL's voice is a holy grail for many researchers. Making computers produce natural-sounding speech is remarkably difficult. We have developed programs that work adequately for short utterances or single words, but in full sentences machines cannot yet convey the human subtleties of stress and intonation. The greatest problem is the machine's inability to comprehend what it is saying or hearing. And while we have made several important strides in speech recognition, no system remotely approaches HAL's proficiency at speechreading (lipreading) in silence.
A successful automatic speech-recognition system requires three things: a large vocabulary, a program that can handle any voice and the ability to process continuous speech. We have the first two — and will get the third by early 1998, the book predicts.
Making computers see has also proved to be extremely difficult. There has been success in what researchers call "early" vision — edge and motion detection, face tracking and the recognition of emotions. Full vision would include the ability to analyse scenes.
Success has, however, been marked in chess. There are more possible combinations in the game than there are atoms in the universe. Humans play chess by employing explicit reasoning, linked to large amounts of pattern-directed knowledge. The most successful chess computers use brute force, searching through billions of alternative moves.
The first machine to defeat a grandmaster in tournament play was IBM's Deep Thought, which began playing in 1988. The current champion computer is its successor, Deep Blue, which is capable of examining up to 200 million chess positions a second. Murray S. Campbell, a member of the team that built it, says Deep Blue is actually a system of 32 separate computers (or nodes) working in concert, with 220 purpose-built chess chips, running in parallel.
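To make "brute force" concrete, the sketch below shows the bare minimax idea that such engines build on: search the tree of alternative moves to a fixed depth, score the resulting positions, and choose the move with the best guaranteed outcome. It is a toy outline, with the game itself supplied by the caller; a real system like Deep Blue adds alpha-beta pruning, purpose-built hardware and massive parallelism on top of this.

```python
# A toy sketch of brute-force game-tree search (plain minimax). The caller
# supplies the game: `moves(position)` lists legal moves, `play(position, move)`
# returns the new position, and `evaluate(position)` scores a position from the
# searching side's point of view. All of these names are illustrative.

def minimax(position, depth, maximising, moves, play, evaluate):
    """Best score reachable from `position` when searching `depth` plies ahead."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)
    scores = [
        minimax(play(position, m), depth - 1, not maximising, moves, play, evaluate)
        for m in legal
    ]
    return max(scores) if maximising else min(scores)

def best_move(position, depth, moves, play, evaluate):
    """Pick the move whose subtree has the best minimax score for the mover."""
    return max(
        moves(position),
        key=lambda m: minimax(play(position, m), depth - 1, False, moves, play, evaluate),
    )
```

Even this naive version makes the scale of the problem clear: with roughly 35 legal moves in a typical chess position, looking a dozen plies ahead means examining an astronomical number of positions, which is why raw speed and pruning matter so much.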
Garry Kasparov first took on IBM's machines in 1989, when he played Deep Blue's predecessor, Deep Thought, in a contest he viewed as "a defence of the whole human race". He lost a game to Deep Blue for the first time last year.
Stork's primary motivation for the book was aesthetic, he says, likening the exercise to that of art historians providing fresh insights into a subtle painting. 2001 illustrates many key ideas in several disciplines of computer science.
"The Internet and the World Wide Web have changed the way people view communication and technology," says Stork. "2001 expressed the anxiety [of the Sixties] of what computers were and what their potential was. Like much science fiction, it was a metaphor for the salient issues of the present."
He believes the biggest mistake made by early AI researchers was "not to cast the problem as more of a grand endeavour to build useful intelligent machines. The search raises the deepest human questions since Plato."
When Stork saw 2001 in the year of its release, he was "awed. It was overwhelming, and supremely beautiful. It was also mythic and very confusing". The film "shows us and reminds science that it is part and parcel of the highest human aspiration. It also raises the question: is violence integral to the nature of intelligence? It is thus related to Kubrick's A Clockwork Orange, which merges violence and aesthetics. It suggests the link can be severed — but at a terrible cost."
The computing pioneer Alan Turing predicted in the Forties that by early next century, society would take for granted the pervasive intervention of intelligent machines. By the end of this century, scant years away, we will be talking to our PCs and, by 2010, working with translating telephones. Our most advanced programs today may be comparable with the minds of insects but the power of computation is set to increase by a factor of 16,000 every 10 years for the same cost.
Many of HAL's capabilities can already be realised; others will be possible soon. Building them all into one intelligent system will take decades. If we are to achieve that, we must give computers understanding; but to program them with understanding, we must first understand the nature of our own human consciousness. That could take some time.