An open letter signed by more than 1,100 technology and business industry leaders calls for a six-month moratorium on the race to develop artificial intelligence (AI), and in particular large language models like OpenAI's GPT-4.
Notable signatories include Steve Wozniak and Elon Musk. The group wants AI laboratories to pause the training of powerful systems until a common set of safety guardrails and protocols is put in place. The letter was published by the Future of Life Institute (FLI), whose declared mission is to steer transformative technology towards benefiting life and away from extreme large-scale risks.
The letter declares that AI with “human-competitive intelligence” poses a risk and needs to be planned and managed with care. That’s because AI could disseminate misinformation, automate jobs, including fulfilling ones, and eventually produce non-human minds that “outnumber, outsmart, obsolete and replace us.”
When OpenAI released ChatGPT back in the fall of last year, I tried it. That was OpenAI’s GPT-3.5 release. Since then the company has published GPT-4, and it is the cause for concern that led FLI to release its letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter states that should there be no agreement on a halt, then governments need to step in.
Will six months be enough to come up with a set of shared guardrails? And what should they be?
The FLI letter calls for protocols that ensure the systems “are safe beyond a reasonable doubt.” The call is for all AI developed going forward to be safe, accurate, transparent, trustworthy, interpretable, aligned and loyal. It asks that a regulatory body “dedicated to AI oversight” be established with “robust auditing” and a “certification ecosystem.” The letter also calls for public funding for AI safety to be put in place.
One of the biggest critics of OpenAI has been Musk, who once served on the organization’s board of directors. His concern is that the company, in its relationship with Microsoft, has lost its “virtue.”
I haven’t yet had the opportunity to test GPT-4. My encounter with version 3.5 was spooky enough, as what I wrote about it back in January of this year can attest. So I can imagine that with the latest enhancements to this large language model AI, we are entering a realm where humans will find themselves in competition with digital minds that creators like OpenAI may soon find have gone beyond their ability to manage and control.
The FLI letter’s call for a halt is to ensure that “powerful AI systems…be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Past Unintended Consequences From Lacking Guardrails
Think about the splitting of the atom and the invention of the automobile. The first led to the development of the atomic bomb, used twice at the end of World War Two, and the sword of Damocles of nuclear Armageddon continues to hang over us today.
The second produced unintended consequences of its own: automobiles, and particularly the internal combustion engine (ICE) that runs them, have been significant contributors to the climate change crisis that we face today.
Could we have halted research on the atomic bomb while facing the threat of Nazi Germany and Japan back in the 1940s? Could America have put a stopper in the atomic bottle and come to a common agreement with the Soviet Union in the post-war period to halt all atomic research focused on bombs?
Could we have anticipated the need for guardrails at the birth of the automobile age, not knowing that carbon dioxide tailpipe emissions would within a century cause anthropogenic global warming? With guardrails in place, would the electric car or another green method of propulsion have prevailed?