Autopilot: Are Self-Driving Cars Safe?


On May 7, 2016, Joshua Brown was using the Autopilot mode on his Tesla Model S in Williston, Florida. It was a particularly bright day, and the Model S's sensors failed to register a tractor-trailer crossing the highway. Autopilot drove the car at full speed into the truck, killing Joshua Brown.

This was a landmark incident for the self-driving car industry: it was the first known fatal crash in which a car's autopilot function was at fault.

In response to the accident, Elon Musk said Tesla would not recall the Autopilot feature.

“A lot of people don’t understand what it [Autopilot] is and how you turn it on, we knew we had a system that on balance would save lives.” – Elon Musk, CEO of Tesla

Often hailed as safer than manually driven cars, the self-driving cars that Google and Tesla have released have logged over 100 million miles of driving and send reports whenever there is a minor collision. (Note: Google's cars are mostly self-driving, whereas Tesla's cars are manual and come with an Autopilot mode.)
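For context, the safety claim is usually framed as fatalities per mile driven. As a rough, back-of-the-envelope illustration (the 130-million-mile Autopilot figure and the 94-million-mile US average come from Tesla's June 2016 statement, and are used here only as example inputs, not as settled statistics):

```python
# Rough fatality-rate comparison; figures are from Tesla's June 2016
# statement and are illustrative only, not an independent safety analysis.
autopilot_miles_per_fatality = 130_000_000   # 1 known fatality on Autopilot
us_average_miles_per_fatality = 94_000_000   # US average, all vehicles

autopilot_rate = 1e8 / autopilot_miles_per_fatality   # per 100M miles
us_rate = 1e8 / us_average_miles_per_fatality

print(f"Autopilot:  {autopilot_rate:.2f} fatalities per 100M miles")
print(f"US average: {us_rate:.2f} fatalities per 100M miles")
```

Comparisons like this are contested (Autopilot miles skew toward highways, where crashes are rarer), which is exactly why the debate below is unresolved.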

Google's self-driving cars carry approximately $150,000 worth of technology, including a Velodyne 64-beam laser that generates a 3D map of the environment. By combining this 3D map with high-resolution maps of the world, the car can build an accurate picture of its surroundings and drive itself.

There are, of course, other sensors: four radars mounted on the front and rear bumpers let the car see fast-moving traffic on freeways, a camera mounted near the rear-view mirror detects traffic lights, and a GPS unit, an inertial measurement unit, and wheel encoders together determine the vehicle's location and keep track of its movements.
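The wheel-encoder piece of that localization story is the simplest to sketch. The idea, known as dead reckoning, is to convert encoder ticks into wheel travel and update the vehicle's position and heading between GPS fixes. A minimal sketch for a differential-drive model (all parameters here are illustrative, not from any production vehicle):

```python
import math

def dead_reckon(pose, left_ticks, right_ticks,
                ticks_per_rev=1024, wheel_radius=0.3, axle_width=1.5):
    """Update (x, y, heading) from wheel-encoder ticks.

    Illustrative differential-drive dead reckoning; real vehicles fuse
    this with GPS and inertial data to correct accumulated drift.
    """
    x, y, theta = pose
    # Convert ticks to distance traveled by each wheel
    left = 2 * math.pi * wheel_radius * left_ticks / ticks_per_rev
    right = 2 * math.pi * wheel_radius * right_ticks / ticks_per_rev
    dist = (left + right) / 2             # distance moved by vehicle center
    dtheta = (right - left) / axle_width  # change in heading
    # Advance the pose along the average heading over the step
    x += dist * math.cos(theta + dtheta / 2)
    y += dist * math.sin(theta + dtheta / 2)
    return (x, y, theta + dtheta)

# Both wheels turn one full revolution: the car moves straight ahead
pose = dead_reckon((0.0, 0.0, 0.0), 1024, 1024)
```

Encoder-only estimates drift over time, which is why the car cross-checks them against GPS and the laser-built 3D map.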

Some of the challenges facing Google's self-driving car are the same ones humans struggle with: whether to accelerate through a yellow light or brake, how to respond to debris in the roadway (is that trash or a large rock?), and what to do at a four-way stop when no one is going.

In response to safety concerns, Germany has introduced legislation that would make it compulsory for self-driving cars to carry a black box to record who was at fault in incidents like Joshua Brown's.


Fatalities like this one raise existential questions for consumers and manufacturers. A recent article in Popular Mechanics discussed the moral dilemma of letting your self-driving car kill you in order to save the lives of others. What is the value of one human life compared to a school bus full of children? And are one or two fatalities worth it if thousands of people are saved because drivers can physically no longer drive drunk?

After the Joshua Brown incident, Consumer Reports handed Tesla a list of changes to make to its product, including overhauling the Autopilot system entirely. Consumer Reports called the name Autopilot "misleading and potentially dangerous" and wants it gone. Elon Musk and Tesla are still standing by the technology, however, and it seems unlikely they will change the name.

One proposed safeguard is to let humans intervene in emergencies when the car is not responding safely. If a tree falls across the roadway and the car keeps accelerating toward it, the driver can take control and save themselves. However, there has been criticism of this approach: PBS ran an article arguing that humans are unreliable monitors of automated systems.

“Decades of research shows that people have a difficult time keeping their minds on boring tasks like monitoring systems that rarely fail and hardly ever require them to take action. The human brain continually seeks stimulation. If the mind isn’t engaged, it will wander until it finds something more interesting to think about. The more reliable the system, the more likely it is that attention will wane.”

In other words, even if humans can intervene, they might not, simply because they aren't paying attention. Air France Flight 447 crashed into the Atlantic Ocean in 2009 after its autopilot disengaged when the plane's airspeed sensors failed. The aircraft was still flyable, but the two pilots became confused and crashed an otherwise sound plane, killing all 228 people on board.

So removing human error from driving might not work as intended if drivers retain the option to intervene at any moment. It gives car designers and manufacturers a way to dodge blame, but will it really result in less loss of human life?

Through all these hiccups, Tesla and Google are still planning to release self-driving cars to the world. Google hopes to have the bugs fixed and its self-driving cars out for purchase by 2020; Tesla already offers Autopilot as a driver-assistance option.


BitNavi is a blog conceived by Karl Motey in the heart of Silicon Valley, dedicated to emerging technologies and strategic business issues challenging the industry.

Kaya Lindsay is a local Santa Cruz contributor who spends her time globetrotting, surfing the web, and writing for the BitNavi team.

Follow her on Twitter: @KayaSays

