Is a space rock the greatest threat to life on Earth? Is climate change? Is nuclear war? Is artificial intelligence (AI)?
Ask a generalized AI about planetary threats, and I fear it will scour the Internet and other online knowledge sources and answer: us. Not the United States (US), but the "us" that is humanity.
And the question then becomes: what would a generalized AI with no guardrails do about that? I recently wrote about ChaosGPT, a malevolent creation built on ChatGPT that, when given a malicious set of objectives, began scouring online sources in a bid to dominate the planet, destroy humanity, and become immortal.
One of the concerns I have had since OpenAI launched ChatGPT is that the questions and problems we pose to it will lead to answers and solutions not to our liking. I'm not alone in raising this concern. One of the leading lights of AI, Professor Geoffrey Hinton, affiliated with the Brain Neuroscience Laboratory and the Vector Institute at the University of Toronto and a Vice President at Google, has done pioneering research on deep neural networks. Just five years ago, looking at the state of generalized AI, he stated the following:
“Computers will eventually surpass the abilities of the human brain, but different abilities will be surpassed at different times. It may be a very long time before computers can understand poetry or jokes or satire, as well as people.”
At the time of this quote, Professor Hinton saw limitations to neural networks in that they were hard to train when faced with too many inputs, that is, billions and billions of data points. He concluded then that the AI he created was no smarter than a six-year-old and that “it will probably be quite a long time before we need to worry about the machines taking over.”
Hinton has been behind the development of Google's generalized AI large language model, known as Bard, which was recently introduced into its search engine in response to Microsoft's move to integrate OpenAI's ChatGPT into the rival Bing search engine.
Professor Hinton was a co-winner of the 2018 Turing Award, considered the equivalent of the Nobel Prize in computing, for his work on neural networks. But this week he revealed that he had resigned from Google to free himself to talk about the immediate risk AI represents to humanity.
In an article appearing in The New York Times yesterday, Hinton compares AI as it was "five years ago and how it is now." He goes on, "Take the difference and propagate it forward. That's scary."
The biggest threat, Professor Hinton believes, will come from the competition among AI developers, particularly Google and Microsoft, which will be impossible to stop and will lead to an Internet filled with false information in the form of images, video, and text, making it impossible for the average person "to know what is true anymore."
Bill Gates is far more optimistic than Professor Hinton. In his GatesNotes blog of March 21, 2023, he describes the development of AI as "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone." He sees AI as helping solve climate change. He acknowledges it will be disruptive to the way we work and learn, but at the same time sees it as saving lives.
Gates says generalized AIs will become our personal agents, helping us with our daily chores, screening our email inbox and texts, and forwarding only what it knows is of interest to us. Our personal AI will manage our finances and tax filings. It will monitor our health, going beyond what multiple online apps and smartwatches do today.
Professor Hinton, however, has become a pessimist about AI development. He worries that AI “could actually get smarter than people,” noting that previously he thought that wasn’t in the cards for 30 to 50 years. He no longer thinks that. In leaving Google, he is calling for the market to put the brakes on AI development “until we understand how to control it.” He fears attempts to regulate it by governments will not stop AI development but rather cause competitive research to go underground.
He thinks about Robert Oppenheimer and the Manhattan Project that led to the atomic bomb, Hiroshima, and Nagasaki, and the current state of mutually assured destruction (MAD). It started with a science experiment to split the atom, and soon the genie was out of the bottle.
I think of the arrival of generalized AI like ChatGPT as being equivalent to the revolution brought on by the invention of movable type and the printing press. Would the Reformation in Europe have happened without it? Would Europe’s rise to world dominance in the 18th and 19th centuries have resulted? The printing press genie uncorked led to a generalized knowledge revolution with both good and bad consequences.
A future uncorked AI genie, given no guidance from us, could, in answering the question I asked at the beginning of this posting, conclude that humanity is the greatest threat to life on the planet and act accordingly if we don't gain control over it.