Oxford’s driverless destiny

Josephine Pepper talks to CEO of Oxbotica, Graeme Smith, about the future of autonomous vehicles


Could you start by explaining what Oxbotica is and what it’s doing?

Oxbotica is a software development company, developing software that will enable autonomous, self-driving vehicles. It is a spin-out from Oxford University’s Mobile Robotics Group. Through this we have licensed more than ninety pieces of intellectual property from the University, representing around 160 years of previous development by the University.

We work to develop the software ourselves and then we work with customers to help them access our software and integrate it into their own products. Some customers are interested in licensing our complete autonomy software while others may be more interested in licensing small parts of it.

I understand you are also working on a project called Gateway…

The Gateway Project is in Greenwich. We are working with other companies to pull together a fleet of seven or eight autonomous shuttles. These vehicles will be in operation sometime in the second quarter of 2017 and open for public demonstration for two or three months, ferrying people around the back of the Dome and around the Greenwich Peninsula.

It is a research project to understand how the vehicles will mix with pedestrians, how people react to them, and also how the vehicles will react to people.

What is the public attitude to driverless technology?

By and large we have found people to be very accepting of driverless technology. The driverless Docklands Light Railway is already in operation, with no real negative feedback. The Heathrow Terminal Five pods are completely driverless and again there has been huge customer acceptance of that. These are the same pods that we are now bringing to Greenwich.

If well received and working well, what are your visions for this technology? Will we see lots more of it in the near future?

There’s definitely a market for autonomous vehicles in the smart cities of the future. The advantages of these vehicles include that they are by and large electric, meaning less exhaust pollution, and the energy can be generated away from the city itself, creating less urban pollution. Another advantage of autonomous vehicles is that we are able to schedule where they are, where they go and what they do; such planning allows us to cut down on congestion as well. We see brand new cities in China being built around the concept of autonomous technology like this, which certainly represents a huge potential export market for the UK.

One of the big focuses in this industry is around improving road safety and reducing avoidable accidents. We know that over 95 per cent of road accidents are caused by driver error and driver inattention.

Do you find yourself in competition with the autonomous vehicle projects in Silicon Valley?

Our model is very much one of allowing companies to license and access our intellectual property and then integrate it with their products. We see these companies working in this area as potential customers and have very good relationships with most of them.

Where are the main obstacles with this type of development?

Probably one of the major challenges has been finding the right number of suitably qualified engineers to work on the project for us. They are in very short supply in the UK, and in fact worldwide, as this is such an explosive technology. The other obstacle to large-scale production at the moment is the cost of sensors. We are waiting for the cost to come down in 2020 or 2021 as the mass market kicks in.

Are there security concerns or threats of malicious hacking?

We have built cyber security into the software, covering both physical access and remote access. The software is very secure, as secure as your bank account. We don’t really see any particular issues from that perspective, although of course lots of people like dreaming about it.

Could you elaborate about the software?

Our software is needed to give a vehicle intelligence to understand three things: where it is in the world, what’s around it, and then what to do next. So the first part is all about navigation and localisation, understanding where the car is. The second part is about understanding things around the car (people, roads, traffic lights). The third part, which is by far the most difficult, is planning how to get to where we need to be, given that we understand where we are and what is around us.
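The three-stage loop described above, localise, perceive, then plan, can be sketched in a few lines of Python. This is purely illustrative; the function names, thresholds and toy logic are assumptions of this sketch, not Oxbotica's software:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A vehicle position estimate in map coordinates (metres)."""
    x: float
    y: float

def localise(position_reading):
    # Part 1: navigation/localisation -- where is the car in the world?
    # Toy version: trust a single position reading; a real system would
    # fuse odometry, lidar scans and a prior map.
    return Pose(*position_reading)

def perceive(raw_detections):
    # Part 2: understand what is around the car (people, roads, lights).
    # Toy version: keep only detections within 30 m of the vehicle.
    return [d for d in raw_detections if d["range_m"] < 30.0]

def plan(pose, obstacles, goal):
    # Part 3 (by far the hardest): decide what to do next, given where
    # we are and what is around us. Toy rule: stop for anything within
    # 5 m, otherwise keep heading towards the goal.
    if any(o["range_m"] < 5.0 for o in obstacles):
        return "stop"
    return "drive_towards_goal"

def control_step(position_reading, detections, goal):
    pose = localise(position_reading)
    obstacles = perceive(detections)
    return plan(pose, obstacles, goal)

print(control_step((1.0, 2.0),
                   [{"label": "pedestrian", "range_m": 4.0}],
                   goal=(50.0, 0.0)))   # -> stop
```

In a real vehicle this loop runs continuously at a high rate, with each stage far more sophisticated, but the division of labour is the same.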

Presumably this involves a degree of teaching and learning responses by the car?

Yes, some of our software is self-learning and over a period of time it will improve, for example in its recognition of pedestrians or cyclists, and it will start to recognise specific situations and be able to share this learning of situations across multiple vehicles.

Typically we use two or three different types of sensors. We use cameras, particularly stereo cameras, lasers (more commonly called lidars), and radar as well. We fuse all the data from the different sensors together, so we have one big picture. Each sensor has its own strength. The cameras are very good at the angle of separation between things, whereas lasers are good at telling you exactly how far away something is.
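A toy illustration of that complementary-strengths idea: take the bearing from the camera (good angular resolution) and the distance from the lidar (good range accuracy), and combine them into a single position estimate. The function and its inputs are hypothetical, a minimal sketch of sensor fusion rather than Oxbotica's actual pipeline:

```python
import math

def fuse(camera_bearing_rad, lidar_range_m):
    # Camera: trusted for the angle to the object.
    # Lidar: trusted for the distance to the object.
    # Fusing the two yields one Cartesian position in the
    # vehicle frame (x forward, y left, metres).
    x = lidar_range_m * math.cos(camera_bearing_rad)
    y = lidar_range_m * math.sin(camera_bearing_rad)
    return (x, y)

# An object the camera sees 30 degrees to the left, which the
# lidar measures at 10 m, lands at roughly (8.66, 5.0).
print(fuse(camera_bearing_rad=math.radians(30), lidar_range_m=10.0))
```

Real fusion weighs each sensor by its noise characteristics (e.g. with a Kalman filter) rather than trusting one sensor per axis, but the principle of playing each sensor to its strength is the same.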

From this information the [engineers] try to create models of how the vehicle is moving through the world. From this model they start coding and developing software we can put into the vehicles, which we can then start to test. It is very complex and we rely on some of the cleverest people in the world, who come to us from Oxford’s PhD programmes, to make it possible.