
Conversations About AI: What Are Its Current Limitations and Is It Living Up to All the Hype?

I started writing this blog post back in 2019 when artificial intelligence (AI) and machine learning were being heavily hyped. At the time, Aleksander Madry, an MIT computer scientist, stated at an AI conference, “Almost every aspect of machine learning today is broken if you look at it through the lens of robustness.”

The best that could be said back then was that AI was “mostly accurate on average.” Madry asked: would you want to trust an AI to do something critical like drive a car, fly an airplane, or operate a nuclear power plant? Based on the state of AI at the time, the answer would have been “no.”

Back then you could fool AIs built on neural networks into misidentifying images. Humans could alter an image and watch the AI struggle. For example, when shown a picture of a teapot with a golf-ball surface superimposed on it, an AI asked to rank its conclusions chose golf ball first and teapot second. The AI in this example prioritized texture over shape.

Four years later, how far has AI come? In terms of image recognition, AI has been getting a lot better. Why? Because of sampling. An AI presented with a thousand images makes a fair number of errors. That’s why a golf-ball-dimpled teapot is difficult to identify as either one or the other. Expose the same AI to a million images and the error rate drops dramatically. Expose it to a billion, and it will rarely, if ever, get it wrong.
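To make the sampling effect concrete, here is a minimal sketch, my own illustration rather than anyone’s published experiment, assuming Python’s scikit-learn library and its small bundled digits dataset as a stand-in for a large image corpus. The same classifier, trained on progressively more images, makes progressively fewer errors.

```python
# A minimal sketch (illustrative, not from the original research) of the
# sampling effect: one classifier, trained on progressively larger samples,
# makes fewer errors on the same held-out test set.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 1,797 small 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

for n in (100, 400, 1200):  # increasing training-set sizes
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    error_rate = 1.0 - model.score(X_test, y_test)
    print(f"trained on {n:>5} images -> test error {error_rate:.1%}")
```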

Fei-Fei Li is a leading authority on AI, machine learning, and deep learning for computer vision. She created ImageNet and the ImageNet challenge, using crowdsourced annotation to build the enormous image datasets that let neural networks overcome the limitations Madry observed a mere four years ago.

Li created a course at Stanford University that teaches how computer vision learns to understand images, whether they come from the cameras of a self-driving car or the diagnostic images used in medicine to detect cancer, injury, and disease.

The AI used by Stanford students is a neural network that loosely simulates the behaviour of a human brain. But unlike our brains, its learning is focused on a specific task. In this case, the task involves two steps: classification first, followed by recognition.
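Here is a minimal sketch, assuming the PyTorch library, of the kind of task-specific network such a course builds: convolutional layers extract visual features from an image, and a final linear layer classifies those features into a fixed set of categories. The layer sizes and the random input batch are illustrative only.

```python
# A minimal sketch (assuming PyTorch; illustrative only) of a task-specific
# image classifier: convolutional layers learn visual features, and a final
# linear layer maps those features onto a fixed set of category scores.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(            # feature extraction
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classify = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classify(feats)               # one score per category

model = TinyClassifier()
batch = torch.randn(4, 3, 32, 32)                 # stand-in for real images
scores = model(batch)
print(scores.shape)                               # torch.Size([4, 10])
```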

A major impediment to an AI accurately identifying an image is the bias inherent in the dataset to which it is exposed. That’s why a facial recognition neural network exposed to millions of images of people classified as Caucasian struggles when trying to identify the faces of Asians and Africans.
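A toy sketch makes the point. The numbers below are synthetic, not a real face dataset: a classifier trained mostly on one group’s data performs noticeably worse on an underrepresented group whose features are distributed differently.

```python
# A toy bias audit (synthetic numbers, not a real face dataset): train on data
# dominated by group A, then compare accuracy per group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group; `shift` moves the group's feature distribution,
    # so a model fitted mostly to group A generalizes poorly to group B.
    X = np.vstack([rng.normal(0 + shift, 1, (n, 5)),
                   rng.normal(1 + shift, 1, (n, 5))])
    y = np.array([0] * n + [1] * n)
    return X, y

Xa, ya = make_group(1000, shift=0.0)   # well represented in training data
Xb, yb = make_group(20, shift=2.0)     # underrepresented, shifted features

model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=2.0)
print("group A accuracy:", model.score(Xa_test, ya_test))  # high
print("group B accuracy:", model.score(Xb_test, yb_test))  # near chance
```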

And where humans can look at an object in context, an AI may not understand the relationship between what it is looking at and the environment in which it is located.

ImageNet is the current benchmark dataset for training AIs. It contains more than 14 million annotated images parsed into roughly 22,000 categories, with the well-known ImageNet challenge subset spanning 1,000 of those categories. ImageNet is a great tool for generalized AI learning.
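Anyone can query an ImageNet-trained network today. The sketch below assumes the PyTorch torchvision library and a hypothetical local file named photo.jpg; it prints the network’s five top-ranked conclusions, the same kind of ranking the teapot example above exposed.

```python
# A minimal sketch (assuming torchvision is installed; "photo.jpg" is a
# hypothetical local file) of asking an ImageNet-trained classifier for its
# top-ranked conclusions. The pretrained weights download on first run.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT           # ImageNet-pretrained weights
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()            # matching resize/normalize steps

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(image).softmax(dim=1).squeeze(0)

top5 = probs.topk(5)                         # five highest-scoring categories
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.1%}")
```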

There are other image datasets that AIs can use to train for a specific purpose. These include specialized datasets designed for AIs learning to make preliminary diagnoses in medicine by looking at X-rays, bloodwork, MRIs, CT scans, and the like. AIs are studying early fire and smoke imagery to provide warnings before a fire gets out of hand. AI is being trained on weather imagery, barometric readings, and marine and atmospheric data to accurately predict the onset and impact of extreme phenomena like tornadoes and hurricanes. In the future, AI trained on seismographic data will help predict the likelihood of an earthquake or tsunami.

AIs today can be trained to generate early warnings of a coming pandemic by surveying vast quantities of structured and unstructured data and seeing patterns no human can. It was just such an AI that BlueDot used back in late December 2019 to flag the coming COVID-19 pandemic. BlueDot reviewed foreign-language news reports, government proclamations, online network reports, and airline ticket data to predict the spread of the virus from Wuhan to elsewhere in China, then on to Thailand, South Korea, Taiwan, Japan, and the United States.
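BlueDot’s actual system is proprietary, so the following is only a toy sketch of the general idea, with fabricated counts: flag any day on which outbreak-related reports spike far above the recent baseline.

```python
# A toy sketch (fabricated counts, not BlueDot's method) of surfacing a
# pattern in a data stream: flag days where outbreak-related report counts
# spike far above the rolling baseline, measured as a z-score.
import statistics

daily_reports = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 18, 27]  # fabricated
WINDOW, THRESHOLD = 10, 3.0  # days of baseline, z-score alarm level

for day in range(WINDOW, len(daily_reports)):
    baseline = daily_reports[day - WINDOW:day]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0   # avoid divide-by-zero
    z = (daily_reports[day] - mean) / stdev
    if z > THRESHOLD:
        print(f"day {day}: {daily_reports[day]} reports (z={z:.1f}) -> alert")
```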

As Fei-Fei Li will tell you, the limitation of any AI comes from its training dataset. The data can be classified or unclassified, structured or unstructured. But the more the AI sees, the more effective it becomes, and the more diverse the data it sees, the less likely human bias will skew its analysis. That’s why today we are no longer dealing with the “mostly accurate on average” AI of a mere four years ago.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
