About our writer
Naomi Foster is in her second year studying Engineering Science at St. Anne’s College, Oxford. Naomi was a Mentor for Cambridge Immerse in 2017.
We take for granted the flood of information our senses supply to our brains every second, helping us to understand what’s happening around us. Without this, how would we be able to cook dinner, do our jobs, or drive a car? But the car we drive is blind to the world we see – so how could it drive itself? With driverless cars currently being tested in cities all over the world, let’s take a look at some of the tech they use to get from A to B.
Driverless cars typically carry more than ten different types of sensor – laser range finders, near-view cameras, radar, GPS and more. Together these sensors build up a picture of what the world looks like, so the car can navigate itself safely. But how do they all work together to create a driver as good as (or better than) you or me?
Arguably the most important sensor on a self-driving car is its laser range finder, or LIDAR, which uses pulses of laser light to build up a 3D picture of the surroundings. Google’s self-driving car has no fewer than 64 laser beams with an impressive 200m range. Over a million laser pulses are sent out per second and reflect off objects around the car. Sensors on the car pick up the reflections, and because light travels at a known, constant speed, the time taken for each pulse to travel out and back reveals the distance to whatever it hit. From millions of these measurements, the car builds up a 3D image of everything within 200m of it.
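The time-of-flight calculation behind each LIDAR measurement can be sketched in a few lines – the function name and example numbers here are illustrative, not taken from any real sensor's software:

```python
# Sketch of the time-of-flight calculation a LIDAR unit performs.
C = 299_792_458  # speed of light in metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object. The pulse travels out AND back,
    so the one-way distance is half the total path length."""
    return C * round_trip_seconds / 2

# A pulse returning after roughly 1.33 microseconds hit something ~200m away.
print(round(distance_from_round_trip(1.334e-6)))  # → 200
```

Because light is so fast, a 200m measurement turns on timing differences of mere microseconds, which is why LIDAR hardware needs extremely precise clocks.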
On a smaller scale, radar sensors mounted around the car use the same echo-timing method to keep an eye on the vehicles in front and behind, making sure there is a 2–4 second gap between each car. A near-vision camera mounted on the front of the car looks out for unexpected obstacles like pedestrians and cyclists, and can even recognise hand signals and road signs. But the most impressive part of a driverless car is how it takes all of these signals and works out what to do with them.
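Note that a "2–4 second gap" is a time, not a distance: the safe following distance it implies grows with speed. A small sketch (with made-up helper names) makes the arithmetic concrete:

```python
# Illustrative sketch: the following distance implied by a time gap
# depends on how fast the car is travelling.

def gap_metres(speed_mps: float, gap_seconds: float) -> float:
    """Distance covered in the chosen time gap at the current speed."""
    return speed_mps * gap_seconds

# At 30 m/s (roughly 67 mph), a 2-second gap means staying about 60m back;
# a 4-second gap doubles that to 120m.
print(gap_metres(30, 2))  # → 60.0? No: 30 * 2 = 60
```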
The computer sitting inside an autonomous car stores vast amounts of data: maps covering large swathes of the world, hundreds of road signs and signals, shape and motion descriptors, and plenty more. These are used in conjunction with the readings from the sensors to help the machine make decisions.
Neural networks are programs that recognise patterns in data, loosely inspired by the human brain. Cars use these networks to interpret the data from their sensors. Inputs are fed into the first layer of the network, where each one is multiplied by a weight reflecting how important it has proved to be during training. A simple function is then applied, and the results are passed on to the next layer, and so on. This process lets the computer classify and cluster data, spotting patterns that can be compared against the huge bank of data sitting in the computer.
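The "weight, transform, pass on" process above can be sketched in plain Python. Real perception networks have millions of learned weights; the numbers and input labels below are entirely made up for illustration:

```python
# A minimal sketch of one fully connected neural-network layer.
import math

def layer(inputs, weights, biases):
    """For each neuron: a weighted sum of the inputs plus a bias,
    squashed into (0, 1) by a sigmoid 'activation' function."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
    return outputs

# Two invented inputs (say, an object's width and speed), a hidden layer of
# two neurons, then one output neuron: "how pedestrian-like is this object?"
hidden = layer([0.8, 0.1], weights=[[1.5, -2.0], [0.5, 0.5]], biases=[0.0, 0.1])
score = layer(hidden, weights=[[2.0, -1.0]], biases=[-0.5])[0]
print(f"pedestrian-likeness score: {score:.2f}")
```

Training a network means nudging those weights and biases, over many examples, until the outputs match reality – which is why the huge stored data bank matters so much.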
But these dream machines are far from infallible, and some things cause real problems for driverless vehicles. Roadworks regularly block roads, yet they always look different, use different signs and signals, and have no database or schedule for cars to refer to. Roadworks are an ongoing problem for designers, and a range of solutions has come forward from different quarters. Some have resorted to a call-centre approach, suggesting that humans could help to guide confused vehicles through tricky situations. But can you really call a car which operates like this “autonomous”?
Another idea is based on communication between the car and the construction area itself. If the two could talk to one another, the construction site could act as a broadcasting beacon, telling approaching cars to reroute or adjust their course. However, a lot of information would need to be given to the car: how many lanes are left open, exactly where the construction is happening (to a very fine degree of accuracy), and so on.
A further problem with this is authentication: how can the car tell the difference between a real construction site and a hacker rerouting your journey for their own dubious reasons? Yet another idea is simply to stay well away from any problem zones. If a database of roadworks can be created, cars can plan new routes which bypass these areas completely. Although this seems like a credible solution, it shows us that, at the moment, cars are nowhere near achieving that important yet elusive quality: common sense.
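The "route around it" idea is, at heart, a shortest-path search that simply refuses to enter closed junctions. Here is a toy sketch on a four-junction road map – the graph, the hypothetical roadworks set, and the function name are all invented, and real routing engines are vastly more sophisticated:

```python
# Toy sketch of rerouting around roadworks: breadth-first search on a small
# road graph that never enters a junction flagged as closed.
from collections import deque

def route(graph, start, goal, closed):
    """Return the shortest path (by junction count) from start to goal,
    avoiding every junction in 'closed'; None if no such path exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph[path[-1]]:
            if nxt in closed or nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # every remaining route runs through a closure

roads = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"],
}
# With junction B closed for roadworks, the car detours via C.
print(route(roads, "A", "D", closed={"B"}))  # → ['A', 'C', 'D']
```

Of course, this only works if the roadworks database is complete and up to date – which brings us straight back to the common-sense problem.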
It has been estimated that around 90% of car accidents are caused by human error, so surely robots will be much safer than a lift from mum or dad. But with autonomous vehicles comes a new risk: hacking.
With such a focus on cybersecurity nowadays, it is surprising how easily autonomous cars can be fooled. In a study led by researchers at the University of Washington, it was found that simply putting stickers on road signs could cause the signs to be misclassified with a 100% success rate. Through this, malicious hackers (or just bored graffiti artists) could cause huge problems and many accidents by effectively changing speed limits or the meaning of important signs. 3M, the company which gave us Post-its, has come up with a solution: bar codes which are invisible to the naked eye but can be read by autonomous vehicles, providing information such as the GPS location or where the next set of traffic lights will be.
Driverless cars will play a huge role in our future and hopefully, have the capacity to save many lives which would otherwise be unnecessarily lost. Humans as drivers have many flaws and shortcomings. However, the way we adapt to unexpected circumstances and make split-second decisions is unparalleled – it remains to be seen if any robot can live up to old-fashioned common sense.