A Difficult Drive: Autonomous Vehicle Sensor Limitations


With all of the headlines and buzz around the rapid development of autonomous vehicles, it is easy to forget just how difficult it is to build a system capable of driving a vehicle without human intervention.  But why exactly is driving so hard for computers when it is a task that over a billion humans perform on a regular basis? In this article I will break down some of the technology behind autonomous vehicles and the limitations that make autonomous driving so difficult to get right.

It Starts with Sensors

Although human drivers engage several senses while driving (e.g., vibrations from the road, the sound of horns), we rely primarily on vision to observe the world around the car.  It’s easy to assume that the electronic equivalent – cameras – might be sufficient for autonomous driving, but in reality humans still have quite a few advantages in the vision department over our electronic counterparts:

  • Flexible Movement – We humans can move our eyes and heads quickly to get a 360-degree view of our surroundings.  In vehicles, multiple cameras placed around the car are required to accomplish the same thing.
  • Distance Tracking – Thanks to having two eyes, we can estimate distances to objects fairly easily.  Stereoscopic cameras and algorithms are required to accomplish the same feat electronically (a minimal sketch of that disparity-to-depth calculation follows this list).
  • Dynamic Range – Human eyes are remarkably good at adjusting to a huge range of lighting conditions, from moonless darkness to noon sun.  Cameras have a hard time covering this range while maintaining image quality.
  • Rapid Classification – People are very good at quickly recognizing objects in the world with little effort and deciding what to do about them (for example, hitting a plastic bag in the road is OK, hitting a deer is not).  Even with advances in machine learning models, specialized hardware is typically needed to accomplish the same task digitally with any degree of accuracy.
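
As referenced above, here is a minimal sketch of how a stereoscopic camera pair turns disparity (the pixel offset of the same point between the left and right images) into distance. The focal length, baseline, and disparity values are illustrative assumptions, not figures from any production system.

```python
# Minimal stereo-depth sketch: depth is inversely proportional to disparity.
# Assumes an idealized, rectified camera pair; all numbers are illustrative.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 1200.0,
                         baseline_m: float = 0.3) -> float:
    """Estimate distance (meters) to a point seen by both cameras.

    disparity_px: horizontal pixel offset of the same point between
                  the left and right images (must be > 0).
    """
    if disparity_px <= 0:
        raise ValueError("Point is too far away (or unmatched) to triangulate.")
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A nearby car produces a large disparity, a distant truck a small one.
    for d in (60.0, 12.0, 3.0):
        print(f"disparity {d:5.1f} px -> roughly {depth_from_disparity(d):6.1f} m")
```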

 

Additionally, there are often situations where camera images simply aren’t enough for a computer to make a good decision about what to do.  Take a look at the image below – how easy is it to tell where the sky stops and the trailer starts?

With your human vision you can infer where the trailer starts because you know what a tractor-trailer looks like, but camera-based computer vision systems often look at differences in color and contrast to determine the boundaries of objects.  In situations like this where contrast is low and object colors are similar to the background, algorithms can easily fail. This is likely the problem that resulted in the fatal Tesla Model S crash in Florida in 2016. Reliance on cameras for autonomous driving also becomes more dangerous at night when image processors meant for daytime lighting need to pick out details in dark objects or make decisions about path of travel.  Take a look at the image below – which road line should the vehicle follow?

The left line would lead you directly into a divider, which is likely what happened in a fatal Tesla Autopilot accident earlier this year.
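
Going back to the trailer example above, here is a rough illustration of why low contrast defeats a simple boundary-finding approach. The toy function below scans a single row of pixel brightness values for a strong jump; the pixel values and the threshold are invented for illustration and are not how any particular production vision stack works.

```python
# Toy illustration: gradient-based edge detection needs sufficient contrast.
# Pixel values and the threshold are illustrative, not from a real vision stack.

def find_boundary(row, threshold=30):
    """Return the index of the first strong brightness jump in a row of pixels,
    or None if no jump exceeds the threshold."""
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) > threshold:
            return i
    return None

# High contrast: dark trailer against a bright sky -> boundary is found.
high_contrast_row = [220] * 10 + [60] * 10
# Low contrast: white trailer against a bright sky -> boundary is missed.
low_contrast_row = [220] * 10 + [205] * 10

print(find_boundary(high_contrast_row))  # 10   (edge detected)
print(find_boundary(low_contrast_row))   # None (edge missed)
```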

 

This is why most developers of autonomous vehicles believe that additional sensors are needed to have a safe driving system.  First among these is radar.

 

Radar has been in wide use for tracking aerial objects since the 1940s.  However, it wasn’t until fairly recently that radar units became small and power-efficient enough to be practical for use in vehicles.  In-vehicle radar brings a number of capabilities and benefits that can work together with in-vehicle cameras:

  • Distance Tracking – Radar is exceptionally good at accurately determining the distance to objects at medium to long range, even if those objects are themselves moving.
  • Speed – As anyone who has gotten a speeding ticket knows, radar is also excellent at determining the speed of objects relative to the radar device (a short sketch of both the range and speed calculations follows this list).
  • Low Light/Fog – Unlike cameras, radar systems typically have no trouble “seeing” in low light conditions and are still effective when fog or rain partially obscure the road.
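
The two radar strengths referenced above boil down to two simple relationships: range from the round-trip time of the radio pulse, and relative speed from the Doppler shift of the return. The sketch below shows both. The 77 GHz carrier is a typical automotive radar band, but the sample inputs are invented for illustration.

```python
# Radar basics: range from round-trip time, relative speed from Doppler shift.
# 77 GHz is a common automotive radar band; the sample inputs are illustrative.

SPEED_OF_LIGHT = 3.0e8        # m/s
CARRIER_FREQ_HZ = 77.0e9      # typical automotive radar carrier

def range_from_round_trip(round_trip_s: float) -> float:
    """Distance to target: the pulse travels out and back, so divide by two."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def relative_speed_from_doppler(doppler_shift_hz: float) -> float:
    """Relative (closing) speed of the target from the Doppler frequency shift."""
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * CARRIER_FREQ_HZ)

if __name__ == "__main__":
    print(f"{range_from_round_trip(6.7e-7):.1f} m ahead")              # ~100 m
    print(f"closing at {relative_speed_from_doppler(5133):.1f} m/s")   # ~10 m/s
```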

 

However, vehicle radar also faces challenges in specific situations that can make it difficult to rely on it 100% of the time while driving.  Let’s take the example of a typical forward-facing radar in a vehicle: on straight roads the radar works as expected with no issues.

But when a vehicle is on a curve, the radar can become ineffective.

 

If there is a solid divider on the outside of the curve, the radar can actually be counterproductive since it “sees” the divider as a stationary object directly ahead of the vehicle, requiring the driving system to rely on other data to determine if the road ahead is actually safe.  Many autonomous driving engineers have decided that the best source of this “other data” is Lidar.
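
Before moving on to Lidar, one rough way to picture the radar-on-a-curve problem is below. It uses a simple circular-arc approximation of the road (lateral offset of the lane at range r is roughly curvature × r² / 2) with invented numbers; it shows how a return sitting dead ahead of the sensor can be the outside divider rather than something in the vehicle's lane.

```python
# Toy check: is an object straight ahead of the radar actually in our lane?
# Uses a circular-arc approximation of the road; all numbers are illustrative.

def path_lateral_offset(curvature_per_m: float, range_m: float) -> float:
    """Approximate sideways offset of the lane center at a given range ahead,
    assuming the road follows a constant-curvature arc (offset ~ k * r^2 / 2)."""
    return curvature_per_m * range_m ** 2 / 2.0

def object_in_lane(range_m: float, lateral_m: float,
                   curvature_per_m: float, lane_half_width_m: float = 1.8) -> bool:
    """True if an object at (range, lateral offset) lies within the lane."""
    return abs(lateral_m - path_lateral_offset(curvature_per_m, range_m)) <= lane_half_width_m

# Straight road: a stationary return 80 m dead ahead really is in our path.
print(object_in_lane(range_m=80, lateral_m=0.0, curvature_per_m=0.0))    # True
# Gentle curve (radius ~250 m): the same dead-ahead return is the divider,
# because the lane center has already swung ~12.8 m sideways at that range.
print(object_in_lane(range_m=80, lateral_m=0.0, curvature_per_m=1/250))  # False
```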

 

Roughly speaking, Lidar is a type of sensor that uses thousands of laser pulses to create a three-dimensional map of its surroundings.  You can think of it as a low-fidelity 3D scanner that can be mounted on a vehicle to build a 3D representation of the world around it.  In addition to most of the benefits of radar, Lidar has two key advantages that make it attractive for use in autonomous vehicles:

  • 3D Representation – Lidar systems generally return a “point cloud” that can be used to build a true three-dimensional picture of the environment (a small sketch of how raw returns become 3D points follows this list).
  • 360-Degree Tracking Ability – Depending on the type of Lidar setup used, a map of a full 360 degrees around the vehicle can be created, rather than relying on a narrow field of view.
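
To make the “point cloud” idea referenced above concrete, the sketch below converts a handful of simulated Lidar returns (a rotation angle, a vertical beam angle, and a measured range) into 3D points in the vehicle frame. The beam angles and ranges are invented; real units return hundreds of thousands of such points per second.

```python
import math

# Sketch: turning raw Lidar returns (azimuth, elevation, range) into 3D points.
# The sample returns below are invented purely to illustrate the conversion.

def lidar_return_to_point(azimuth_deg: float, elevation_deg: float, range_m: float):
    """Convert one laser return to an (x, y, z) point in the vehicle frame:
    x forward, y to the side, z up (azimuth measured from straight ahead)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A few simulated returns sweeping across the front of the vehicle.
returns = [(-10.0, -1.0, 22.4), (0.0, -1.0, 21.9), (10.0, -1.0, 22.6)]
point_cloud = [lidar_return_to_point(az, el, r) for az, el, r in returns]
for p in point_cloud:
    print(tuple(round(c, 2) for c in p))
```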

 

However, Lidar is also not completely foolproof.  Since the sensors rely on the reflection of laser pulses, heavy rain or fog can significantly lower the effectiveness of Lidar and make it difficult to get an accurate picture of the vehicle’s surroundings.  Take a look at the image below, showing a clear point cloud on the left and a simulated distortion due to rain of the same image on the right.

As you can see, rain and fog conditions can significantly erode the level of detail returned by Lidar, making it difficult to distinguish lines and accurately assess objects.
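
The kind of distortion shown on the right can be approximated very crudely: rain and fog absorb or scatter some pulses entirely (dropped points) and smear the timing of others (noisy ranges). The sketch below applies that two-part degradation to a small synthetic point cloud; the dropout rate and noise level are arbitrary assumptions, not measured values.

```python
import random

# Crude simulation of rain/fog degradation of a Lidar point cloud:
# some returns are lost entirely, the rest get noisy range measurements.
# Dropout probability and noise magnitude are arbitrary illustrative values.

def degrade_point_cloud(points, dropout_prob=0.4, noise_m=0.3, seed=0):
    rng = random.Random(seed)
    degraded = []
    for (x, y, z) in points:
        if rng.random() < dropout_prob:
            continue                      # pulse absorbed/scattered: point lost
        degraded.append((x + rng.gauss(0, noise_m),
                         y + rng.gauss(0, noise_m),
                         z + rng.gauss(0, noise_m)))
    return degraded

clear_cloud = [(float(i), 0.0, 0.0) for i in range(10)]   # a clean row of points
rainy_cloud = degrade_point_cloud(clear_cloud)
print(len(clear_cloud), "points in clear weather")
print(len(rainy_cloud), "noisier points in simulated rain")
```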

 

Stringing It Together

From the quick run-through of autonomous driving sensors above, it should be clear that an autonomous vehicle can’t rely on a single type of sensor to decide what it should do at all times.  Cameras can fail in low light or low contrast environments, radar can fail with certain vehicle and road orientations, and Lidar can fail in rain or fog conditions. In order to operate safely, autonomous vehicles need to rely on all the sensors available to them and combine many points of data to generate an accurate view of the world.
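
As an extremely simplified illustration of what “combining many points of data” can mean, the sketch below fuses independent distance estimates from a camera, a radar, and a Lidar into one value by weighting each estimate by how much it is trusted (the inverse of its variance). Real systems use far more sophisticated filters (Kalman-style trackers, for example); the sensor readings and uncertainties here are invented.

```python
# Simplified sensor fusion: combine distance estimates from several sensors,
# weighting each by its confidence (inverse variance). Real stacks use
# Kalman-style filters; the readings and uncertainties below are invented.

def fuse_estimates(estimates):
    """estimates: list of (measured_distance_m, std_dev_m) tuples.
    Returns the inverse-variance-weighted distance."""
    weights = [1.0 / (std ** 2) for _, std in estimates]
    total = sum(weights)
    return sum(w * d for w, (d, _) in zip(weights, estimates)) / total

# The same object seen by three sensors in light fog: the camera is unsure,
# radar and Lidar agree more closely, so they dominate the fused result.
readings = [
    (43.0, 5.0),   # camera: poor contrast, large uncertainty
    (40.2, 0.5),   # radar: confident range measurement
    (40.8, 0.8),   # lidar: slightly degraded by the fog
]
print(f"fused distance: {fuse_estimates(readings):.1f} m")
```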

 

However, combining gigabytes of different kinds of data in real-time to categorize, understand, and act upon the world is an enormously complex task that relies on a combination of specialized hardware, software, and testing.  In a future article, I’ll take a look at the tools currently being used to tackle this challenge, as well as some of the new legal issues that are arising as a result.

Michael Dorazio