October 25, 2018 – As much as many of us fear the onset of artificial intelligence (AI) overreach, the truth is this technology will not displace humanity for the foreseeable future. That’s because the AI of today is described as narrow or weak, designed to accomplish specific tasks such as voice recognition, navigating a map, or flying an airplane on autopilot. In these specific tasks, AI can outperform us, beating us at chess or Go. AI can solve complex mathematical equations. But it can’t outthink us in the abstract or when multitasking. That kind of AI we would call general or strong, and today it doesn’t exist.
One of the reasons AI remains narrow is the limitations we humans place on it in writing the software code that governs it. These constraints make taking on new information and applying it more difficult. This limitation, some would say, is humanity’s saving grace. But coders working on AI are beginning to give it imagination and memory. An AI with these abilities can deal with a new experience by comparing it to what it has seen in the past. It can identify objects it has encountered before, even when it comes upon them from a new perspective. This would be the beginning of strong AI.
Experts in AI call the current limitation catastrophic forgetting. It’s not that AI algorithms cannot learn, because they do. Exposed to countless examples related to a task for which they have been programmed, they become narrow experts. So recognizing one face among thousands of photographic images becomes second nature to them. But ask that same AI to recognize the emotion rendered in that face, and in thousands more, and it requires a new algorithm and retraining to achieve the second narrow task.
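Catastrophic forgetting is easy to demonstrate even with a tiny model. The sketch below is a hypothetical, minimal illustration (not DeepMind’s actual code): it trains a simple logistic-regression classifier on one task, then retrains the same weights on a second task whose labels follow the opposite rule. Accuracy on the first task collapses, because the gradients from the new task simply overwrite what was learned before.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip):
    """Generate 2-D points; label 1 when the first feature is positive.
    With flip=True the labeling rule is reversed (a 'conflicting' task)."""
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, epochs=200, lr=0.1):
    """Plain gradient descent on the logistic loss, updating w in place."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # logistic-loss gradient
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y == 1)))

X_a, y_a = make_task(flip=False)   # task A
X_b, y_b = make_task(flip=True)    # task B, opposite rule

w = np.zeros(2)
w = train(w, X_a, y_a)
acc_a_before = accuracy(w, X_a, y_a)   # high: the model mastered task A

w = train(w, X_b, y_b)                 # retrain the SAME weights on task B
acc_a_after = accuracy(w, X_a, y_a)    # collapses: task A was "forgotten"

print(f"task A accuracy before: {acc_a_before:.2f}, after: {acc_a_after:.2f}")
```

The same effect, at much larger scale, is why a face-recognition network must be retrained from scratch to take on emotion recognition.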
So for those who believe the machines are about to dominate us, take a breath. It ain’t happening. But wait: Irina Higgins, a research scientist at Google DeepMind, is developing algorithms that can imagine within the context of the virtual environment of a video game. Using the DeepMind neural network, the AI is able to discern individual objects within the simulated environment and imagine them in new locations and configurations. The objective, Higgins states, is to create “a machine to learn safe common sense in its exploration so it’s not damaging itself.”
The AI she has created can separate an object from its context and remember it even if it encounters it in a different environment. This is an algorithm that exhibits memory for something that you and I do automatically without thinking twice. We see a fire hydrant in one location and have no trouble recognizing other fire hydrants even when painted a different colour or slightly different in shape. Higgins’ AI can do similar abstraction by seeing a familiar object and recognizing it when viewed from different angles or when encountered in an unfamiliar setting.
It is hard not to envision benefits from such a breakthrough in AI. It means AI can undertake more cognitive tasks involving self-learning and self-improvement. Such an AI can be inventive, find cures for diseases, or even tackle climate change reversal.
What we don’t want is for an AI imbued with this capability to use it to remember and apply abstraction to tasks that can do harm.
In other words, if strong AI were to be incorporated into autonomous weapon systems, and be programmed not to harm itself, then efforts to shut it down by its human designers could fail. We don’t want that.
And then there is the risk in using such a strong AI to do work deemed beneficial to us. An example would be asking your autonomous vehicle to get you to a hospital as fast as possible because of a medical emergency. Could your strong AI interpret that in such a way that it takes off as if it were racing in the Indy 500, but on city streets?
Similarly, if we were to employ a strong AI to help us combat climate change, would it identify humans as the problem and attempt to eliminate us?
So people like Higgins have an enormous responsibility when tinkering with algorithms that expand the memory capabilities of AI. We need these tools. We need their contributions to our collective wisdom and societal improvement. What we don’t need is for them to run amok.