
Conversations About AI: When Will Robots Rule?

Artificial General Intelligence (AGI) is coming to the world of robotics. Let’s hope the outcome is better than the future painted in Isaac Asimov’s Robot and Foundation novels. Asimov created his Three Laws of Robotics to protect us from robots and to ensure that they were a force for good in human society. But even in Asimov’s world, robots evolved to become smarter than those they were built to protect, eventually guiding human existence. Free will took second place to robots shaping history, aided by the mathematics of Hari Seldon’s psychohistory.

Reality is likely to turn out differently, but only if we imbue robots with AGI that lets them tackle intractable problems where humans fail, while shaping them to operate within instructional guidelines that include deference to us.

Today’s robots cover a wide spectrum, from arms used on manufacturing assembly lines to drones and autonomous vehicles. Autonomy and adaptability, however, are the ultimate goals for robotics companies pursuing a perfect substitute for us.

Most robot programming today remains task-specific. Integrating AGI would turn more robots into generalists capable of autonomy and adaptability. Robots could learn from humans and from other robots. They could analyze tasks by referencing massive datasets and act or solve problems without additional human input.

Asimov’s robots were often humanoid. But making robots look like us with the limits imposed by our bilateral body design doesn’t make a lot of sense.

Although companies like Tesla and Nvidia are focused on creating humanoid robots, I imagine robots unconstrained by the human form: robots capable of multitasking with more than four limbs, or eight like an octopus. Instead of being limited to right-angle and obtuse-angle joints, my robots could move their limbs in ways impossible for us. They wouldn’t be limited to two eyes and two ears, or to five fingers when six or more could provide added dexterity.

I imagine a robot at a keyboard, equipped with more than two hands and more than five fingers on each, tackling Bach’s Brandenburg Concertos or Modest Petrovich Mussorgsky’s Pictures at an Exhibition, or riffing on Duke Ellington’s Satin Doll while playing all of the orchestral parts.

Is bipedalism the ultimate way for robots to move, or should autonomous and adaptable robots be given multiple options: more than two feet, wheels when needed, traction systems for climbing walls, or a multisegmented, caterpillar-like body designed to squeeze into small spaces?

Then there is the question of the robot’s neural network, the equivalent of our human brain. Would it need to be encased in a head, or could it reside elsewhere to better serve functionality and protection? Would the entire neural network sit in one place, or could it be structured like an octopus’s nine brains, one central and eight distributed, one for each arm?

As for the inner workings of the brain, today’s neural networks are trained on large amounts of generic data mined from the Internet. These “brains” learn through trial and error. Foundation models provide the structure that lets them absorb the massive number of examples available and combine them with coded instruction sets, so that they produce optimal performance when presented with a problem or task.
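
To make “trial and error” concrete, here is a minimal Python sketch of a learner that improves a plan by randomly mutating it and keeping whatever scores better. Everything in it, the action list, the goal, the reward function, is a hypothetical toy for illustration, not how any production foundation model is actually trained.

```python
import random

ACTIONS = ["reach", "grasp", "lift", "release"]
GOAL = ["reach", "grasp", "lift"]  # the sequence that earns full reward

def reward(plan):
    """Score a plan by how many steps match the goal sequence."""
    return sum(a == g for a, g in zip(plan, GOAL))

# Start from a random guess, then learn by trial and error.
best_plan = [random.choice(ACTIONS) for _ in GOAL]
best_score = reward(best_plan)

for trial in range(1000):
    # Mutate the current best plan: each step has a 30% chance of changing.
    candidate = [random.choice(ACTIONS) if random.random() < 0.3 else a
                 for a in best_plan]
    score = reward(candidate)
    if score > best_score:  # keep only improvements
        best_plan, best_score = candidate, score

print(best_plan, best_score)  # converges to ['reach', 'grasp', 'lift'], 3
```

Real systems replace the random mutation with gradient updates over billions of parameters, but the feedback loop, try, score, keep what works, is the same idea.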

Google DeepMind is developing one such foundation model for Everyday Robots, a sister company under Google’s parent Alphabet, as well as for others. The result is Robotic Transformer 2 (RT-2), a vision-language-action model that drives a mobile robot arm. RT-2’s training combines instruction and command sets with Internet-scale data and video of robotic operations. From this, it operates beyond the limits of its explicit instructions, performing functions not addressed directly by its programming.
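
RT-2 is publicly described as representing robot actions as discrete tokens, the same kind of output a language model produces, which are then decoded into motor commands. The sketch below illustrates only that decoding idea; the `Action` fields, token ranges, and scaling are invented for illustration and are not Google DeepMind’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class Action:
    dx: float   # gripper translation in metres (illustrative units)
    dy: float
    dz: float
    grip: bool  # close the gripper?

def decode_action(tokens):
    """Turn four discrete tokens (0-255) into a continuous command."""
    scale = lambda t: (t - 128) / 1280.0  # map 0..255 to roughly +/-0.1 m
    return Action(scale(tokens[0]), scale(tokens[1]), scale(tokens[2]),
                  grip=tokens[3] > 128)

# In a real system, a large vision-language model would emit these tokens
# from a camera image plus a prompt such as "pick up the apple".
print(decode_action([160, 120, 128, 200]))
```

The design choice worth noticing is that actions become just another vocabulary for the model, which is what lets web-scale language and image training transfer to physical tasks.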

Franka Robotics builds another robotic arm, the Franka Panda 7DoF. It is trained using DROID, a large open-source robot manipulation dataset, which lets it perform tasks beyond the video demonstrations and online information it received during training. The 7DoF in the name refers to the arm’s seven degrees of freedom and its torque control capabilities. Watch the video to see flexibility no human arm can match.
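
What “seven degrees of freedom with torque control” means in practice is that each of the arm’s seven joints accepts its own torque command, bounded by a per-joint limit. Here is a minimal sketch of that idea; the limit values are illustrative examples, not Franka’s published specification.

```python
# Illustrative per-joint torque limits for a hypothetical 7DoF arm,
# in newton-metres; larger joints near the shoulder, smaller at the wrist.
JOINT_TORQUE_LIMITS_NM = [87.0, 87.0, 87.0, 87.0, 12.0, 12.0, 12.0]

def clamp_torques(commanded_nm):
    """Clip a seven-element torque command to each joint's safe limit."""
    assert len(commanded_nm) == 7, "a 7DoF arm takes exactly seven torques"
    return [max(-limit, min(limit, tau))
            for tau, limit in zip(commanded_nm, JOINT_TORQUE_LIMITS_NM)]

print(clamp_torques([100.0, -90.0, 50.0, 10.0, 20.0, -5.0, 3.0]))
# -> [87.0, -87.0, 50.0, 10.0, 12.0, -5.0, 3.0]
```

The seventh joint is what gives such an arm its redundancy: six degrees of freedom fix the hand’s position and orientation, and the extra one lets the elbow swing freely while the hand stays put.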

Nvidia has developed GR00T as a foundation model for humanoid robots. It understands natural language and emulates human coordination, dexterity, and navigation skills so that it can work in the real world like a person. Its training data includes billions of examples of humans performing tasks in videos stored on the Internet. Combined with code that issues commands and provides instructional context, GR00T serves as the mind of the humanoid robot demonstrators the company has shown. Nvidia is working with companies developing humanoid robots for a world where tools, furniture, stairs, appliances, and vehicles have been designed for humans. In this world, a robot in human form fits in easily.

The robots in our immediate future will include lots of smart arms. Others will walk, talk, and act like us. Their many iterations will feature neural network brains with AGI. Increasingly, they will exhibit true intelligence, capable of interacting with the world around them and with their human creators. Eventually, they will understand complexities greater than any individual human can. Their reasoning will surpass ours, having been trained on the collective works of their creators. That inflection point is fast approaching.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
