For computers to become more human, they will have to exhibit far more intelligence than the technologies we have in place here at the end of the first decade of the 21st century. When Deep Blue, the IBM supercomputer, defeated Garry Kasparov in 1997, the intelligence built into its ability to analyze 200 million positions per second represented artificial knowledge specific to a single task: playing chess. To be truly human, future computers must be able to handle many tasks.
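To make that idea concrete, here is a minimal sketch of the game-tree search technique at the heart of chess computers; this is illustrative Python, not Deep Blue's actual code, and the callbacks `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders a real engine would supply:

```python
# Minimal minimax search with a depth cutoff -- the core idea behind
# brute-force chess engines. Deep Blue added alpha-beta pruning and
# special-purpose hardware to reach ~200 million positions per second.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Score `position` by searching the game tree `depth` plies deep.

    `legal_moves`, `apply_move`, and `evaluate` are game-specific
    callbacks (hypothetical placeholders, not Deep Blue's actual code).
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score at the horizon
    scores = (
        minimax(apply_move(position, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(scores) if maximizing else min(scores)
```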
Back in the 1990s, the large software company I was working for had developed neural agents: bits of code that could be added to a device or a network to sense patterns in the data flow or in the operation of the equipment. As the neural agents learned what "normal" looked like, they could also alert when abnormal patterns occurred. When that happened, the agents would send messages to human observers through a computer display, or would engage other neural agents specifically designed to compensate for the abnormality and restore normal operations. This type of pattern intelligence is not the same as human intelligence, but it is intelligence nonetheless.
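To give a flavour of what such an agent does, here is a toy sketch in Python: it learns the running statistics of a metric and raises an alert when a reading falls outside the learned "normal." This illustrates the idea only; it is not the actual product code:

```python
# A toy "pattern agent": it learns what normal looks like from a data
# stream and flags deviations, in the spirit of the neural agents
# described above (illustrative sketch, not 1990s product code).

class PatternAgent:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations (Welford)
        self.threshold = threshold

    def observe(self, value):
        """Update the learned 'normal' and return an alert if abnormal."""
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        if self.n > 30:        # only judge once enough history is learned
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(value - self.mean) / std > self.threshold:
                return f"ALERT: {value} deviates from learned normal"
        return None

agent = PatternAgent()
for reading in [10, 11, 9, 10, 10, 12, 9, 11] * 5 + [55]:
    msg = agent.observe(reading)
    if msg:
        print(msg)   # fires on the final out-of-pattern reading
```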
When we think about our intelligence versus what I have described in the previous paragraph, what are the differences? Humans, as well as many other animals, exhibit many intelligence traits:
– the ability to reason
– the ability to acquire knowledge and retain it for later use
– the ability to solve problems
– the ability to plan
– the ability to communicate through vocalization and understand vocalizations from third parties
– the means, through mind-to-limb interaction, to manipulate objects.
These are key traits that define our intelligence.
Deep Blue fulfills the first trait of intelligence: the ability to reason. In part, it also fulfills the second, the ability to acquire knowledge. But the knowledge Deep Blue acquired is specific to the multivariable moves of chess. Deep Blue therefore meets only a minimal standard for a human-like computer, even though it could outplay the world's number-one chess player.
If you think of the way we as humans gather knowledge, we do it through observation, interaction with other humans, reading, and trial and error. Sometimes we learn something in one task that we can then apply to another with very different circumstances.
As much as we can give a computer access to all the content of the Internet from which to acquire knowledge, how do we give it the ability to apply that knowledge using computational intelligence? The field of computational intelligence focuses on developing computers that use techniques such as fuzzy logic to solve problems. When we talked about quantum and biological computing in earlier parts of this multi-part article, we described the attributes of those systems and their ability to go beyond the logic of silicon-based computers. Programmers working in artificial intelligence talk about algorithms that embrace techniques such as swarm intelligence, fractals, and chaos theory. Computational intelligence that approaches our way of assimilating knowledge involves creating programs that combine learning and adaptation.
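As a small taste of fuzzy logic, here is a sketch in which a value belongs to overlapping sets by degree rather than strictly true or false; the membership functions and rule outputs are invented for illustration:

```python
# Fuzzy logic in miniature: a temperature belongs to "cool", "warm",
# and "hot" by degree, and the rules blend smoothly instead of
# switching abruptly. All values here are illustrative assumptions.

def triangular(x, lo, peak, hi):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def fan_speed(temp_c):
    """Fuzzy rule: the warmer it is, the faster the fan, blended smoothly."""
    cool = triangular(temp_c, 0, 10, 22)
    warm = triangular(temp_c, 18, 25, 32)
    hot  = triangular(temp_c, 28, 40, 52)
    # Weighted average of each rule's output speed (defuzzification)
    total = cool + warm + hot
    return (cool * 20 + warm * 50 + hot * 90) / total if total else 0.0

print(fan_speed(24))   # mostly "warm" -> a moderate speed near 50
```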
How close are we today to creating human-like intelligence in our computer systems? Ray Kurzweil and David Gelernter, both noted authors and futurists, described computing technology's future and the rise of conscious, creative, volitional, and even spiritual machines in a debate held at MIT in December 2006. The event marked the 70th anniversary of Alan Turing's foundational 1936 paper on computability. Turing invented the Turing Machine and designed the Bombe, the electromechanical machine used at Bletchley Park to break the Nazi Enigma cipher in the Second World War (the resulting intelligence was code-named Ultra). Turing is a key figure in the foundation of modern computing. In 1948 he wrote "Intelligent Machinery," a report that first described artificial intelligence in terms similar to what I have written here.
Kurzweil describes computing technology that has mastered human emotion and subjectivity. Remember Star Trek's Data and his discovery of emotion. To Kurzweil, emotion defines the most intelligent aspect of being human. Subjectivity, or consciousness, gives an artificial intelligence the means to learn from experience and relate that experience to a self. For Kurzweil, the technology to achieve this is just around the corner, a mere twenty years away. Where Kurzweil sees consciousness as achievable in artificial intelligence, Gelernter does not: he argues that no software can be built to create consciousness and self-awareness. Kurzweil backs up his prediction by describing the exponential growth of information technology, points to IBM's current experiments in modeling the human cerebral cortex, and rejects Gelernter's argument for defining software by the limits of what we see today.
An artificial intelligence would mimic our brains, which, when we break them down, are massively parallel processors featuring over 100 trillion connections, all computing simultaneously. Can we model and simulate a neuron? We are already well on our way. Can we design a machine with 100 trillion parallel processes? We have already seen, in Parts 2 and 3 of this discussion, the evolution of quantum and biological computing, with the potential to approach if not exceed the capacity of the human brain.
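On modeling a neuron: a common first approximation is the leaky integrate-and-fire model. The sketch below is illustrative, with made-up parameter values; it is not the model used in any specific IBM simulation:

```python
# A leaky integrate-and-fire neuron: a simplified neuron model often
# used in large-scale brain simulations. The membrane potential leaks
# toward rest, integrates input, and fires when it crosses a threshold.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return spike times for a list of input-current samples."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Potential decays toward rest while integrating the input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spikes.append(t * dt)   # neuron fires
            v = v_reset             # then resets
    return spikes

# Constant drive above threshold makes the neuron fire periodically
print(simulate_lif([1.5] * 200))
```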
Good article. Artificial intelligence is far more rampant in our lives than many think, and includes everything from our search patterns in Google to the automation of ordering groceries. The question that is overlooked is: when will we begin imposing limits on artificial intelligence? Let's face it, we have grown accustomed to having decisions made for us, at least when they are attuned to our thought process, and many of us have learned to use artificial intelligence to further enhance our efficiency and effectiveness. The rest, however, see it as an enabler of lackadaisical behaviour and, even worse, use it as an excuse for mistakes and poor judgement. Time will tell whether we made the right decision.
Pattern recognition is the basis of crude artificial intelligence. It is the type of intelligence that we see in Amazon and other marketing sites that identify buying habits and make suggestions for future purchases.
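Here is a toy version of that kind of pattern intelligence: count which items are bought together across past orders and suggest the most frequent companions. The order data is invented for illustration:

```python
# Crude "customers also bought" intelligence: co-occurrence counting
# across past orders. The orders below are made up for illustration.

from collections import Counter, defaultdict

orders = [
    {"coffee", "filters", "mug"},
    {"coffee", "filters"},
    {"coffee", "mug", "spoon"},
    {"tea", "mug"},
]

co_bought = defaultdict(Counter)
for order in orders:
    for item in order:
        for other in order - {item}:
            co_bought[item][other] += 1

def suggest(item, k=2):
    """Items most often bought together with `item`."""
    return [name for name, _ in co_bought[item].most_common(k)]

print(suggest("coffee"))   # e.g. ['filters', 'mug']
```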
Consciousness and self-awareness are an entirely different type of intelligence. When I was very young I had a pet parakeet named Twinkles. Twinkles spent hours looking in his mirror and pecking away at the image he saw. Was he aware that he was looking at himself the entire time? Biologists who study bird intelligence do not associate self-awareness with parakeets. However, they do associate it with crows and other corvids, as well as some parrot species. Corvids, the family that includes ravens, crows, and magpies, show a high degree of intelligence that indicates self-awareness. Experiments with several of these birds show that when they look in a mirror and see a mark on their feathers, they inspect themselves and remove it. They also anticipate the behaviour of other members of their group, and of outsiders who enter their territory.
Can a computer today display that kind of intelligence? No, but we are clearly on a path to developing the technology to build artificial minds that will exceed the cleverness of crows and approach the intelligence of humans in many ways.
Should we limit the artificial intelligence we can create? In the science fiction novels in which Isaac Asimov features robots, he establishes basic codes of conduct that govern robotic behaviour. Asimov's laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Will this make human beings lazy? When scientific calculators replaced slide rules, did that make us lazy? Not really: it contributed to explosive growth in scientific research and discovery, including the development of modern computers. So I don't believe we need to limit artificial intelligence, beyond building laws like these into the algorithms we write, laws our artificial intelligences would adopt as their equivalent of the Ten Commandments.
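As a sketch of what building such laws into our algorithms might look like, here is a toy filter that checks a proposed action against Asimov-style rules in priority order. The action attributes are hypothetical labels, and real conflicts (such as harm through inaction) are far subtler than this:

```python
# A toy Asimov-style guard: proposed actions are vetted against the
# laws in priority order. The attributes below are hypothetical
# labels for illustration; this greatly simplifies the real laws.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # violates the First Law
    disobeys_order: bool = False     # violates the Second Law
    self_destructive: bool = False   # violates the Third Law

def permitted(action: Action) -> bool:
    """Check the laws in order of priority; any violation blocks it."""
    if action.harms_human:           # First Law outranks everything
        return False
    if action.disobeys_order:        # Second Law: obey human orders
        return False
    if action.self_destructive:      # Third Law: self-preservation last
        return False
    return True

print(permitted(Action("fetch tool")))                        # True
print(permitted(Action("push bystander", harms_human=True)))  # False
```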
It appears that breakthroughs in computer intelligence are moving us quickly towards human-like attributes.
At the Supercomputing show in Portland, Oregon, Jeffrey Burt of eWeek reports that IBM researchers announced yesterday that they have achieved two major milestones: the first near-real-time cortical simulation at a scale exceeding that of a cat's cortex, and an algorithm that uses IBM's Blue Gene supercomputing architecture to map the connections between cortical and subcortical areas of the human brain.
A collaboration among researchers at IBM, Stanford, the University of Wisconsin, Cornell, Columbia University Medical Center, and the University of California, Merced, the cognitive computing project team is striving to create a computer that evaluates and acts much as the human brain does.
The research is part of the Defense Advanced Research Projects Agency (DARPA) initiative called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics). The simulation runs on 147,456 processors and 144 TB of main memory. The algorithm the researchers created lets them experiment with how brain structure affects function.
Dharmendra Modha, Manager of Cognitive Computing at IBM Research-Almaden, makes the following comments on his blog: “While we have algorithms and computers to deal with structured data….and semi-structured data, no mechanisms exist that parallel the brain’s uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. Success of cognitive computing will allow us to mine the boundary between digital and physical worlds where raw sensory information abounds. Imagine, for example, instrumenting the world’s oceans with temperature, pressure, wave height, humidity and turbidity sensors, and imagine streaming this information in real-time to a cognitive computer that may be able to detect spatiotemporal correlations, much like we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly even, industries.”
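A toy version of the ocean-sensor idea in that quote might watch two sensor streams and flag the windows in which they move together; the data and threshold below are invented for illustration:

```python
# Toy spatiotemporal-correlation detector: slide a window over two
# sensor streams and flag where they become strongly correlated.
# Streams, window size, and threshold are invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def correlated_windows(stream_a, stream_b, window=10, threshold=0.9):
    """Yield start indices where the two streams move together."""
    for i in range(len(stream_a) - window + 1):
        r = pearson(stream_a[i:i + window], stream_b[i:i + window])
        if abs(r) >= threshold:
            yield i, round(r, 2)

wave_height = [i % 10 for i in range(40)]
pressure    = [2 * (i % 10) + 1 for i in range(40)]
print(list(correlated_windows(wave_height, pressure)))
```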
I want to add a postscript to this article because of a recent announcement from IBM. Hard on the heels of IBM’s Watson triumph on Jeopardy, the company has developed a cognitive computing architecture. See the link at: http://www.smartertechnology.com/c/a/Business-Analytics/IBM-Debuts-BrainLike-Cognitive-Computer/?kc=STNL08232011STR3.
What is a cognitive computer? It's a computer that mimics the way our brain processes data. Traditional computers keep processing separate from the programs and data stored in memory. IBM's new architecture distributes processing and memory within the same circuits, aping the brain's neurons and synapses.
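As a conceptual sketch (not IBM's actual design), imagine each "core" as an object that keeps its synaptic weights and neuron state locally and computes only when spikes arrive, rather than fetching data from a separate memory:

```python
# Conceptual sketch of co-located memory and compute: each core holds
# its own synaptic weights and neuron state, and computes only when
# spikes arrive. Illustrative only; not IBM's actual architecture.

class NeuroCore:
    def __init__(self, weights, threshold=1.0):
        self.weights = weights      # synaptic "memory" lives in the core
        self.threshold = threshold
        self.potential = 0.0        # neuron state lives here too

    def receive(self, spikes):
        """Integrate incoming spikes locally; emit a spike on threshold."""
        self.potential += sum(self.weights[s] for s in spikes)
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True             # spike would propagate to other cores
        return False

core = NeuroCore(weights={"a": 0.6, "b": 0.5})
print(core.receive({"a"}))        # False: below threshold
print(core.receive({"a", "b"}))   # True: accumulated input crosses it
```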
This is a fundamentally different approach from the parallel-processing architecture behind Watson. The IBM announcement follows an earlier one about Blue Matter, software used with Blue Gene to simulate a brain at the scale of a cat's cortex.