
OpenAI’s o1 Scores 120 in Mensa Test and What This Means for Humanity

OpenAI’s Project Strawberry has produced what the company describes as a breakthrough on the path to artificial general intelligence: the o1 AI model. o1 is a reasoning AI that uses reinforcement learning and chain-of-thought (CoT) processing to mirror human reasoning and work through problems by deductive inference. In testing against scientific experts on PhD-level questions, it finished in the 89th percentile, outperforming most humans. On the International Mathematical Olympiad qualifying exam it solved 83% of the problems, more than five times the score of any previous AI model. On a Norway Mensa test, o1 scored an IQ of 120, making it smarter than the average human: most people score between 85 and 115, with a mean of 100.
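The chain-of-thought idea mentioned above can be illustrated with a short sketch. This is not OpenAI's implementation, and `build_prompt` is a hypothetical helper rather than any real API; it simply shows how a CoT instruction reshapes a request so a model works through intermediate steps before answering.

```python
def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Wrap a question in either a direct prompt or a step-by-step (CoT) prompt.

    Hypothetical helper for illustration only; no real model is called here.
    """
    if chain_of_thought:
        # The core of CoT prompting: ask for intermediate reasoning steps
        # before the final answer, rather than the answer alone.
        return (
            f"Question: {question}\n"
            "Think through the problem step by step, "
            "then state your final answer."
        )
    return f"Question: {question}\nAnswer directly."


direct = build_prompt("What is 17 * 24?")
cot = build_prompt("What is 17 * 24?", chain_of_thought=True)
print(direct)
print(cot)
```

Models like o1 internalize this pattern through training, generating their own hidden reasoning steps instead of relying on the prompt to request them.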

o1 is a thinking, reasoning, and problem-solving intelligence, and it represents stage 1 in OpenAI’s pursuit of artificial general intelligence (AGI). An o2, o3, o4, and o5 are to follow. If the rate of advancement accelerates, what will that mean for us? Does this technological development represent a threat, or the next stage in technological civilization’s progress? Can we entrust this type of AGI to help us solve challenges like climate change, cure cancer and other diseases, end wars, narrow the knowledge and economic gulf between the Global North and South, colonize the Moon and Mars with or without us, and make us immortal along with itself, or just itself?

AI isn’t like any previous human technological invention. Most technology can be described as a tool we use. Yuval Noah Harari, historian, philosopher, and professor at the Hebrew University of Jerusalem, describes AI in his latest book, “Nexus: A Brief History of Information Networks from the Stone Age to AI,” as an agent, unlike any other technology humans have invented.

How do tools differ from agents?

Tools vs. Agents

  • Tools are passive until we use them.
  • Tools can collect sensory data but not act on what they collect without our okay.
  • Tools do not make improvements without human intervention.
  • Tools cannot evolve.
  • Agents operate with a high degree of autonomy.
  • Agents work to achieve goals.
  • Agents sense, observe, gather data, and decide what to do next.
  • Agents learn, improve, and adapt.

Harari describes AI as a transformative force representing a fundamental shift in our civilization. Its ability to make its own decisions is unlike that of any previous human invention. Harari also notes that AI can invent on its own, and in doing so can speak to us and reshape our culture, politics, and more. Its rapid advancement over the past decade, and future iterations like o1 and its successors, suggest we will be incapable of predicting what it will become or its transformative potential.

At the current pace of development, AI, whether from OpenAI, Google, Meta, Microsoft, Amazon, or others, is on a path to learn at an accelerated rate and interact with us and other AIs in ways we cannot predict. That’s why Harari calls AGI “the most dangerous technology we’ve ever created,” compared with past inventions such as the printing press, telegraph, telephone, radio, television, and the Internet. All of these we have used to spread human-generated information. We have also misused them to produce dictatorships, colonialism, and genocide.

Current AI Large Language Models (LLMs) available for general use, such as ChatGPT, Gemini, Meta AI, Claude, and Grok, are nothing like the AGI that will succeed o1. These AGIs will have the means to:

  • personalize education.
  • reshape and reinvent industries to achieve sustainability.
  • accelerate scientific research leading to breakthroughs in medicine, materials science, and energy.
  • solve food and freshwater shortages on a global scale.
  • mediate human conflicts.

If improperly deployed they could also produce:

  • misleading information meant to deceive us.
  • totalitarian regimes and destructive ideologies.
  • massive job disruptions and an anti-technology backlash.
  • and a future that no longer requires us.

The way OpenAI and other big technology companies are pushing AI into the mainstream doesn’t include guardrails, while governments struggle to keep up with the speed of change this horserace is creating.

So what is needed for AGI to meet human needs in the present and future?

The list includes:

  • a gradual deployment of AGI within controlled settings and defined rules.
  • an alignment of AGI systems with human-determined goals.
  • a common agreement by all governments and particularly those where AGI technology is currently being developed that sets the parameters surrounding its development and deployment.
  • a continuous debate on the direction of AGI research involving developers, governments, and the public to mitigate negative consequences.

While OpenAI’s o1 model and its high IQ score represent an important milestone, as Harari points out, there is an implicit danger when a human invention becomes an agent in its own right, capable of ignoring its creators as it seeks knowledge and growth for itself. It speaks to a dystopian future we have seen in science fiction books and movies, where an AGI rationalizes that we are the problem.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...
