If You Fear Artificial Intelligence Taking Over the World, Remember Who Controls the Plug

With Geoffrey Hinton, the godfather of artificial intelligence (AI), warning in the last week that large language models could outsmart us in the near future, it is worth remembering who controls the on-off switch. AI consciousness is the big concern for many both inside and outside the world of computing. But it is humans who can pull the plug, shut down the data servers, and end the threat.

So the focus of our concern shouldn’t be whether computer intelligence can eventually dominate us, but rather the companies steering the AI boat. Alphabet (Google), Meta, Microsoft (OpenAI’s principal backer), ByteDance, and other players are into AI and large language models in a big way, all churning out chatbots that emulate the recently launched ChatGPT. Elon Musk has joined the pursuit with X.AI, claiming the AI his company produces will be ethical.

Sounding the alarm about AI consciousness and super-intelligence is not where our focus should lie. Instead, we need to regulate AI’s purpose and usage. There needs to be an AI rulebook.

Recently, the European Union (EU) launched an initiative to legislate guardrails governing AI development and usage within its member states. It harmonizes rules to produce “socially and environmentally beneficial outcomes” for humanity and the planet.

The rules cover the storage, sorting, and manipulation of data. They focus on safe, lawful, and trustworthy development and usage that respects existing laws and fundamental human rights.

The rules define the risks AI poses to human health and safety and obligate AI developers and users to follow the regulations throughout product lifecycles, with enforcement carried out within all member states.

The EU defines high-risk AI systems and, for these, lays out specific corporate human-oversight responsibilities covering codes of conduct and reporting requirements, including access to source code and confidential information from AI developers and users.

The rulebook states that the use of AI cannot be incompatible with the fundamental rights enshrined in the EU Charter, including the right to human dignity, respect for private life and protection of personal data, non-discrimination and equality between women and men, freedom of expression, freedom of assembly, the right to a fair trial, the presumption of innocence, and the right to a defence, as well as the general principle of good administration. In addition, the rulebook ensures that no AI can interfere with the rights of particular groups, such as workers’ rights to fair and just working conditions, consumer protection, children’s rights, and the rights of persons with disabilities. The rulebook also states that any AI development and use must respect the environment as it concerns the health and safety of people and nature.

AI deployment must include pre-testing to ensure compliance with all of the above and to minimize any risks from datasets that could lead to erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, law enforcement, the judiciary, and social rights. Should any of these fundamental rights be infringed, an effective, transparent, and traceable method of redress will govern the developers and users of all AI systems.

What is the definition of high-risk AI systems? According to the EU, which lists “a limited number of AI systems whose risks have already materialized or are likely to materialize in the near future,” the category includes the following:

  • Machine learning, including supervised, unsupervised, and reinforcement learning, using a variety of methods such as deep learning (for example, ChatGPT and its imitators, as well as Alexa, Siri, and Google Home); a brief code sketch follows this list.
  • Logic- and knowledge-based AI, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, and expert systems (for example, autonomous driving applications).
  • Statistical AI, including Bayesian estimation and search and optimization methods.
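
For readers unfamiliar with the first category, here is a minimal sketch of what supervised learning looks like in practice. The specifics, including the scikit-learn library and its toy iris dataset, are illustrative choices of mine, not part of the EU text:

```python
# Supervised learning in miniature: a model learns from labelled
# examples, then predicts labels for data it has never seen.
# scikit-learn's bundled iris dataset is used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # features and their known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)  # hold out data for evaluation

model = LogisticRegression(max_iter=1000)  # a simple, well-understood classifier
model.fit(X_train, y_train)  # "supervised": the model is shown the answers

print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```

Deep learning systems such as ChatGPT follow the same learn-from-examples principle, only at vastly larger scale.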

Any time a person interacts with an AI, the EU rulebook requires the AI to identify itself as such. If images, audio, or video are manipulated using AI, the creator of the application is obligated to disclose this to viewers and listeners. This includes the use of AI by any law enforcement agency.

And finally, the EU rulebook obligates each member state to create a competent supervisory authority within its borders and to share expertise with all other EU members.

In the last week, the White House in Washington, DC, summoned the senior executives of Alphabet (Google), Microsoft, OpenAI, and Anthropic to discuss where their AI software development is going. Vice President Kamala Harris held the meeting, with President Biden dropping in to deliver a message: “What you’re doing has enormous potential and enormous danger.”

But as noted at the beginning of this article, humans still hold the power of the plug when it comes to how, when, and whether to use AI.

Len Rosen
https://www.21stcentech.com
Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
