3 Technical Issues That Still Need to Be Addressed in Self-Driving Cars

There are multiple ways to look at self-driving cars, but the most important fact to keep in mind is that they have huge potential. Before they are widely adopted, though, people need to see them as reliable and trustworthy.

That’s one of the reasons countless car manufacturers and big corporations are investing millions of dollars in autonomous vehicles.

In the following lines, you’ll read about the current state of driverless cars, where they are headed, and three of the most important technical aspects that need improvement as soon as possible.

Self-Driving Cars Are Almost 90% Automated

It is widely accepted that autonomous vehicles on public roads – such as Uber’s fleet in Pittsburgh – have reached an 85-90% level of automation. That means they’re only 10-15% away from full autonomy.

However, that last stretch will actually be the hardest to cover. Work still needs to be done to perfect the hardware, guidance systems, and software so that vehicles can reliably and safely drive themselves.

The final steps involve some really unique challenges that need to be overcome. Some of them are technical, but others are… moral.

1. Mapping Needs Ongoing Work

Mapping is far more complex than basic GPS location services. The latter can pinpoint the location of a phone or car to within about 2 meters, roughly 95 percent of the time.

That’s quite impressive, and accurate enough to navigate in traffic. But it’s far from good enough for cars that drive themselves.

In fact, all current tests are done on roads covered by high-definition 3D maps that are accurate to within a few centimeters. These maps record the locations of trees, fire hydrants, buildings, stop signs, and traffic lights – anything within 200 meters of the moving car.
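To get a feel for what that means in practice, here is a minimal sketch in Python – with made-up map features and a simple flat-Earth distance approximation, not any real HD-map format – of looking up every mapped object within 200 meters of the car:

```python
import math

# A toy HD-map "feature": type, latitude, longitude (hypothetical format;
# real HD maps are far richer, with lane geometry, heights, and so on).
FEATURES = [
    ("stop_sign",     48.1375, 11.5755),
    ("traffic_light", 48.1382, 11.5760),
    ("fire_hydrant",  48.1500, 11.6000),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (equirectangular projection,
    good enough over a few hundred meters)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6371000 * math.hypot(dlat, dlon)

def nearby_features(car_lat, car_lon, radius_m=200):
    """Return every mapped object within radius_m of the car."""
    return [
        (kind, lat, lon)
        for kind, lat, lon in FEATURES
        if distance_m(car_lat, car_lon, lat, lon) <= radius_m
    ]

print(nearby_features(48.1376, 11.5756))  # only the stop sign and traffic light
```

Even this toy version hints at the scale of the problem: a real map holds millions of such features, all of which have to be stored, indexed, and kept current.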

As you can imagine, mapping all roads will take time – years, maybe even decades – and should be an ongoing project. And as soon as a fallen tree or a construction site appears, it needs to be mapped as well.

On top of that, you need to take into account the large amounts of storage and processing power required to generate and maintain such maps.

2. Sensor Tech Needs Improvements

A human driver uses their eyes as sensors, which gather data and send it to the brain for analysis. Our vision is limited, though, and can be hampered by fog or a lack of proper lighting at night.

Self-driving cars work in a similar way. Most of them use an array of cameras that act much like the human eye – except there are more than just two of them. That means the car can take in more information than a human driver.

Then we’ve got radar and Lidar. They use radio waves and light pulses, respectively, to scan the road ahead for potential obstacles, offering both extra vision and depth perception.

While the two technologies can make up for each other’s shortcomings, researchers and automakers are still working out what combination of sensors creates the best balance between capability, complexity, and cost.

Lidar, for example, costs up to $7,500 per car and can be easily flummoxed by rain or snow.
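To illustrate the idea of sensors covering for each other’s shortcomings, here is a rough sketch – hypothetical readings and confidence weights, not any automaker’s actual fusion code – of blending radar and Lidar range estimates so that a degraded sensor automatically counts for less:

```python
from dataclasses import dataclass

@dataclass
class RangeReading:
    distance_m: float   # estimated distance to the obstacle ahead
    confidence: float   # 0.0 (useless) to 1.0 (fully trusted)

def fuse_range(radar: RangeReading, lidar: RangeReading) -> float:
    """Confidence-weighted average of two range estimates.

    If one sensor is degraded (e.g. Lidar in heavy rain), its low
    confidence automatically shifts the result toward the other sensor.
    """
    total = radar.confidence + lidar.confidence
    if total == 0:
        raise ValueError("no usable sensor data")
    return (radar.distance_m * radar.confidence
            + lidar.distance_m * lidar.confidence) / total

# Clear weather: both sensors trusted, estimates agree closely.
print(fuse_range(RangeReading(42.0, 0.8), RangeReading(41.5, 0.9)))   # ~41.7 m

# Heavy rain: Lidar confidence drops, so radar dominates the fused estimate.
print(fuse_range(RangeReading(42.0, 0.8), RangeReading(55.0, 0.1)))   # ~43.4 m
```

Real fusion systems are far more sophisticated, but the principle is the same: weigh each sensor by how much it can be trusted at that moment.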

3. The Software Needs Machine Learning and Artificial Intelligence

The software behind self-driving cars has to understand what is happening on the road ahead. As you can imagine, there are countless situations that would need to be programmed for.

That’s why machine learning and artificial intelligence are needed to interpret the environment and make safe decisions, even about things the car is encountering for the first time.

An autonomous vehicle’s software has to “see” pedestrians, bicyclists, and lanes, and understand driver behavior.
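Purely as an illustration – with made-up labels, confidences, and thresholds, nothing like a production perception stack – the decision layer sitting on top of such a system could be sketched like this:

```python
# A toy decision layer on top of a perception model.
# In a real car, `detections` would come from a trained neural network;
# here they are hard-coded to keep the example self-contained.

BRAKE_FOR = {"pedestrian", "bicyclist"}   # classes we always slow down for

def plan_action(detections, min_confidence=0.6):
    """Pick a driving action from a list of (label, confidence, distance_m).

    Unknown or low-confidence objects are treated cautiously rather than
    ignored -- the "handle the unexpected" part of the problem.
    """
    for label, confidence, distance_m in detections:
        if confidence < min_confidence:
            return "slow_down"            # unsure what it is: be cautious
        if label in BRAKE_FOR and distance_m < 30:
            return "brake"                # vulnerable road user close ahead
    return "continue"

print(plan_action([("lane_marking", 0.95, 5.0), ("pedestrian", 0.88, 12.0)]))  # brake
print(plan_action([("unknown_object", 0.35, 20.0)]))                           # slow_down
```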

The trick is getting rare situations – those exceptions that might only happen once in a lifetime – handled correctly as well.

Developers are basically trying to replicate the human brain: driverless cars must be able to respond on the fly and handle the unexpected.

Philipp Kandal