July 23, 2016 – University of Oxford software engineers have created an autonomous driving system that takes data from a vehicle’s onboard cameras, laser scanners and radar and learns to drive by analyzing the behavior of a human driver. It can be incorporated into cars, trucks, buses, forklifts, and mobile platforms for warehouses and assembly lines.
Called Oxbotica, the idea behind it is to give the system the means to learn and grow into the job of driving autonomously. Its perception systems analyze the environment around the vehicle at all times. Its vision systems orient it within its operational space. And because it can work with anywhere from a few sensors for forklift operations to hundreds for an autonomous bus, car or truck, it scales across platforms.
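That scaling point can be pictured with a toy example. The Python sketch below shows a perception loop that is indifferent to how many sensors are registered, whether two for a forklift or hundreds for a bus. All of the names and data here are invented for illustration; Oxbotica's actual software interface has not been published in this form.

# A minimal sketch of a scalable sensor registry, as the article
# describes: the same perception loop runs whether the platform
# carries a few sensors (forklift) or hundreds (bus). All names
# are hypothetical, not Oxbotica's actual API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Sensor:
    name: str                  # e.g. "front_lidar", "rear_radar"
    kind: str                  # "camera", "lidar", or "radar"
    read: Callable[[], dict]   # returns one frame of raw measurements

class PerceptionSystem:
    def __init__(self) -> None:
        self.sensors: List[Sensor] = []

    def register(self, sensor: Sensor) -> None:
        # Scaling from forklift to bus is just registering more
        # sensors; the analysis loop below is unchanged.
        self.sensors.append(sensor)

    def snapshot(self) -> Dict[str, dict]:
        # One pass over the environment: poll every sensor once.
        return {s.name: s.read() for s in self.sensors}

# A forklift configuration might register two sensors; a bus, hundreds.
forklift = PerceptionSystem()
forklift.register(Sensor("mast_lidar", "lidar", lambda: {"ranges_m": [4.2, 3.9]}))
forklift.register(Sensor("rear_radar", "radar", lambda: {"doppler": []}))
print(forklift.snapshot())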
It can read stored information, such as onboard maps loaded into the vehicle’s memory, and match it against what it learns directly from operating on roads or on a warehouse or factory floor. As co-founder Paul Newman (not the actor) of the University of Oxford puts it, “if you take it out in the snow and it’s not seen it before, it keeps the ideas of snowy-ness around for the next time.” This is a real differentiator from Google’s autonomous vehicle technology and even the Autopilot found in Tesla’s vehicles. To recognize “snowy-ness,” Oxbotica relies less on camera input and more on the data it receives from laser and radar imaging, sensors that remain effective in low visibility.
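One way to picture the “snowy-ness” behavior is as a remembered condition tag plus a shift in how much each sensor is trusted. The sketch below uses invented thresholds and field names; it illustrates the general idea rather than Oxbotica's actual method.

# A hedged sketch of the "snowy-ness" idea: tag the condition the
# vehicle is experiencing, remember that tag for next time, and shift
# trust from cameras toward laser/radar when visibility is poor.
# Thresholds and field names are illustrative assumptions.

# Conditions the vehicle has already experienced, persisted across runs.
known_conditions: set = set()

def classify_condition(camera_contrast: float, lidar_noise: float) -> str:
    # Toy heuristic: low camera contrast plus noisy lidar returns
    # stands in for "snow". A real classifier would be learned.
    return "snow" if camera_contrast < 0.3 and lidar_noise > 0.5 else "clear"

def sensor_weights(condition: str) -> dict:
    if condition == "snow":
        # Down-weight cameras; lean on laser/radar in low visibility.
        return {"camera": 0.1, "lidar": 0.4, "radar": 0.5}
    return {"camera": 0.5, "lidar": 0.3, "radar": 0.2}

condition = classify_condition(camera_contrast=0.2, lidar_noise=0.7)
if condition not in known_conditions:
    known_conditions.add(condition)  # keeps "snowy-ness" around for next time
print(condition, sensor_weights(condition))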
Oxbotica learns collision and pedestrian avoidance by observing driver behavior, and it learns about traffic lights and signs the same way. The more it learns, the more confidently it drives.
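Learning from observed driver behavior is, in machine-learning terms, a form of behavioral cloning: record what the human saw and did, then fit a policy that maps observations to controls. A minimal sketch, with invented data and feature names, might look like this.

# Behavioral cloning in miniature: fit a policy to driver
# demonstrations. Data and feature names are invented; this is an
# illustration of the learning style the article describes, not
# Oxbotica's actual pipeline.
import numpy as np

# Each row: features observed while a human drove, e.g.
# [distance_to_obstacle_m, closing_speed_mps, light_is_red].
X = np.array([[30.0, 2.0, 0.0],
              [10.0, 5.0, 0.0],
              [ 5.0, 4.0, 1.0],
              [50.0, 1.0, 0.0]])
# The braking intensity the human applied in each situation (0..1).
y = np.array([0.0, 0.5, 1.0, 0.0])

# Fit a linear policy mapping observations to braking via least squares.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def brake_command(features: np.ndarray) -> float:
    # Clamp to a valid brake range.
    return float(np.clip(np.append(features, 1.0) @ w, 0.0, 1.0))

print(brake_command(np.array([8.0, 4.5, 1.0])))  # brakes hard near a red light

The more demonstrations collected, the better the fitted policy covers the situations the vehicle will meet, which matches the article's point that the system drives more confidently as it learns.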
Plans to test Oxbotica in real-world settings are already underway. The first project will incorporate the software into self-driving shuttle buses in Greenwich, UK, seen below on the left. The second, in Milton Keynes, also in the UK, will test the software on LUTZ Pathfinder shuttle pods, seen below on the right.