
AI Defence in Depth and AI Gatekeeper Projects Emerging

In March of this year, the United States State Department commented on a report by Gladstone AI that described how rapidly evolving artificial intelligence posed a threat to Americans and the rest of the world. The State Department had commissioned the report, “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” which described AI as coming with great opportunity and great “potential for catastrophic risks fundamentally unlike any that have previously been faced by the United States.”

It singled out the threat of new categories of weapons of mass destruction as a prime concern, along with the competitive drive among the various industry players and institutional laboratories pushing the AI envelope. The AI race is unrelenting, with those in it declaring their “intent or expectation to achieve human-level and superhuman artificial general intelligence” by 2030. Competitive pressures are accelerating AI investment and capabilities “at the expense of safety and security.” One researcher quoted in the report states that “the risk from this technology will be at its most acute just as it seems poised to deliver its greatest benefits.”

The Action Plan is meant to be a blueprint for intervention and was developed through conversations with 200 “stakeholders” in Canada, the United Kingdom and the U.S. Actions proposed include:

  • establishing interim safeguards to stabilize advanced AI development,
  • imposing export controls on the advanced AI supply chain,
  • developing basic regulatory oversight and strengthening the U.S. government’s capacity to deal with AI’s evolution,
  • creating a legal regime for responsible AI development and adoption, safeguarded by a regulatory agency in the U.S. and in other countries, governed by multilateral agreement and common consensus.

The plan is to create a defence in depth with overlapping controls to ensure no single point of failure. The proposed U.S. agency, Frontier AI Systems Administration (FAISA), would draw up the rules and license AI development and deployment.

A criminal and civil liability regime would define responsibility for AI-induced damages by “determining the extent of culpability for AI accidents and weaponization across all levels of the AI supply chain, and defining emergency powers to respond to dangerous and fast-moving AI-related incidents which could cause irreversible national security harms.”

The plan also includes establishing an AI Supply Chain Control Regime (ASCCR) involving the U.S. and partner countries “to limit the proliferation of advanced AI technologies.”

Yesterday was the anniversary of the atomic bomb being dropped on Hiroshima, Japan. The Manhattan Project that created the bomb was a secret military program put together to beat Nazi Germany’s atomic weapons program and end the war in Europe by revealing the existence of the weapon and threatening to use it.

The war in Europe ended before the American bomb could be deployed there. The controversy that followed was whether the bomb should be demonstrated to the one remaining adversary, Japan, or dropped to end resistance and forestall an invasion of the country’s home islands that was expected to inflict millions of deaths and casualties. The American government chose to use the bomb in combat, dropping the first on Hiroshima on August 6, 1945, and the second on Nagasaki on August 9, 1945. Japan surrendered shortly thereafter.

Regulating atomic research came only after the end of the war. As the saying goes, the barn door was left open and the horse had already escaped: the Cold War of mutual assured destruction (MAD) became a reality for the planet from the mid-20th century onward. As with the development of atomic weapons, it would seem that the AI horse has already left the stable and regulatory gatekeeping is coming to the party late.

One of the “godfathers” of AI and a Turing Award winner is Yoshua Bengio. He, along with Geoffrey Hinton and Yann LeCun, has been warning governments and the public about the threat of artificial general intelligence (AGI). An open letter published on March 22, 2023, requested that AI and AGI research be put on hold until “new and capable regulatory authorities” could be put in place. Bengio fears the development of autonomous AI systems without human values. He describes short- and medium-term risks that include AIs manipulating public opinion through disinformation, or AI programs that turn out to be harmful because of a programmer’s malicious intent.

Bengio is backing a United Kingdom-funded project called Safeguarded AI, an AI gatekeeper meant to develop safety standards, with three specific objectives:

  • To build an interoperable platform to check on existing and future AI models and development,
  • To establish mathematical modelling and real-world application domain expertise for autonomous AI systems,
  • To unlock economic value while providing safety guarantees by building the Safeguarded AI gatekeeper for autonomous AI systems and keeping AGI from running amok.

The UK government has budgeted £59 million to launch and develop Safeguarded AI and has asked parties interested in building it to apply by October 2, 2024.

Canada has been working for more than two years on an act of Parliament, Bill C-27, to support the responsible development and adoption of AI and AGI systems. It has yet to pass. In September 2023, the federal government published a “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.” So far, 30 organizations and businesses in Canada are signatories to the code of conduct. The barn door remains wide open.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
