Legal Challenges with Autonomous Vehicle Technologies

In a previous article I discussed some of the challenges facing autonomous vehicle development due to sensor and data limitations.  In this article I will take a look at the new software tools behind autonomous driving, the legal challenges that arise from using these tools in real-world situations where human lives are at stake, and three possible solutions to these challenges.  


An Overview of AI-dependent Autonomous Driving

It’s important to first understand the core technology behind many of the advancements in autonomous driving capabilities over the last few years: machine learning. You have probably heard machine learning mentioned in the news recently, since it has become a very popular method for developing software to solve problems that are difficult for traditional algorithms, such as recognizing objects in pictures and providing human-like responses to natural language questions. But how does it actually work, and what makes it so different from normal software development methods?


Machine learning is a complicated topic that relies on a combination of statistics, neural networks, training models, and data processing to achieve results. At a simplified level, you can think of it as feeding a large number of examples to a computer, telling it which examples are “good” and which are not, and then asking the computer to decide whether a new example is “good” or not. Over many repetitions, the computer essentially comes up with its own method for reliably achieving the desired outcome based on the data you give it (note: this type of machine learning is called Supervised Learning, but there are other methods as well). For example, if you want to create a program that can accurately find a stop sign in a picture you give it, you would give the program a million pictures and tell it which ones have stop signs (and where they are in the picture) and which ones do not. Based on parameters you give it, the program would then run many training cycles, figure out on its own which attributes in a picture correspond to “stop sign”, and create a method for finding stop signs in future pictures based on those attributes.
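
To make the idea concrete, here is a minimal sketch of what that kind of training looks like using the scikit-learn library. The feature values, labels, and model choice are all illustrative assumptions made for this article, not anything resembling production autonomous driving software.

```python
# Minimal supervised learning sketch (illustrative only).
# Assumes numpy and scikit-learn are installed; the "features" are made-up
# stand-ins for whatever a real image pipeline would actually extract.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row describes one picture, e.g.
# [fraction of red pixels, fraction of white pixels, edge-count score].
X_train = np.array([
    [0.30, 0.10, 0.80],  # picture with a stop sign
    [0.25, 0.12, 0.75],  # picture with a stop sign
    [0.02, 0.40, 0.20],  # picture without a stop sign
    [0.05, 0.35, 0.15],  # picture without a stop sign
])
y_train = np.array([1, 1, 0, 0])  # 1 = "has stop sign", 0 = "no stop sign"

# Training: the computer works out its own rule from the labeled examples.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new, unseen picture: the model decides on its own whether it looks more
# like the "stop sign" examples or the "no stop sign" examples.
new_picture = np.array([[0.28, 0.11, 0.77]])
print(model.predict(new_picture))        # predicted label, e.g. [1]
print(model.predict_proba(new_picture))  # confidence scores, not reasons
```

The key point is that nobody wrote a rule such as “red octagon with white letters means stop sign”; whatever rule the model applies is the one the fitting procedure settled on from the examples.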


This is why it is called “machine learning” – the computer is effectively learning on its own how to achieve a desired outcome based on a large amount of data. This differs significantly from traditional software development methods, which require a human developer to write code to achieve desired outcomes based on specifications and logic. To use the same stop sign example, if a human developer were tasked with creating a program to find stop signs in pictures, he or she would need to code a series of checks: are there red pixels? If there are, do they roughly form the shape of an octagon? If they do, are there also white pixels inside the octagon, and do those white pixels roughly spell “STOP”? And so on. Each of those logic steps is quite difficult to code individually and might still be prone to errors, even though finding a stop sign is relatively straightforward compared to, for example, correctly identifying pedestrians.
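
For contrast, a hand-coded version of those checks might look something like the sketch below. Every threshold and check is invented for illustration; a real implementation would need far more robust computer vision work at each step.

```python
# Hand-written, rule-based sketch of stop sign detection.
# Every threshold here is an illustrative guess, not a tested value.
import numpy as np

def looks_like_stop_sign(image: np.ndarray) -> bool:
    """image is an H x W x 3 RGB array with values 0-255."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]

    # Check 1: are there enough strongly red pixels?
    red_mask = (r > 150) & (g < 100) & (b < 100)
    if red_mask.mean() < 0.05:
        return False

    # Check 2: do the red pixels form a roughly compact, octagon-like region
    # (crudely approximated here as "about as wide as it is tall")?
    ys, xs = np.nonzero(red_mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    if not 0.7 < width / height < 1.3:
        return False

    # Check 3: are there white pixels (the "STOP" text) inside that region?
    region = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    white_mask = (region > 200).all(axis=-1)
    return white_mask.mean() > 0.02

# ...and this still says nothing about letter shapes, lighting, occlusion, or
# viewing angle, each of which would need more hand-written rules and tests.
```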


Machine Learning and Legal Black Boxes

While machine learning has made it possible to rapidly develop the software needed to enable vehicles to drive themselves in complex environments, it has also created a significant challenge in the legal space.  In a traditional software environment, fault is often assigned based on an audit of code. An audit will reveal if errors existed that led to a problem, and whether those errors could (and should reasonably) have been caught prior to deployment in the real world.  For example, as part of the investigations during the Toyota “unintended acceleration” scare from the late 2000s, investigators audited electronics software and firmware to determine if these systems were partially responsible for reported events (they were not).  


Audits are made possible by the fact that traditional software is composed entirely of code that can be read and understood by humans to determine the exact sequence of steps taken at any given time. However, with a system based on machine learning, an audit at the same level becomes difficult, if not entirely impossible. To see why this is the case, let’s use the stop sign example again. In the machine learning system, an auditor can see which images the system thinks have a stop sign, but there is no real way to determine why it chose those images. Unlike traditional software, machine learning creates systems that are black boxes – you provide an input and it returns an output, with no way to know what went on inside to produce that output.
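
As a rough illustration of what an auditor would actually have to work with, the toy example below trains a small neural network (again using scikit-learn purely for illustration) and then “inspects” it. The only artifacts available are the prediction itself and a large pile of numeric weights with no human-readable reasoning attached.

```python
# What "auditing" a trained machine learning model looks like (toy example).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 20))                   # 200 fake examples, 20 features each
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)   # a hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

# The audit trail, part 1: the output for a given input...
print(model.predict(X[:1]))

# ...and part 2: the model's internals, which are just arrays of floating-point
# numbers. There is no branch, rule, or comment to point at.
total_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(total_weights)
```

With traditional code an auditor can step through each branch and ask whether it was correct; here the “logic” is spread across thousands of weights, which is exactly why assigning responsibility after an incident is so hard.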


When a code audit is impossible, it becomes significantly more difficult to assign blame in an accident situation.  For example, if an autonomous vehicle strikes a stationary object, it might not be possible to determine exactly why the machine learning-based system did not avoid the object or brake.  It might be even more difficult to determine who was at fault – a vendor, a software engineer, a QA person, etc. Without being able to determine exactly what failed and who was responsible, companies operating autonomous vehicles will face liability concerns not seen before in the automotive world.  These concerns necessitate the development of policies and business models specifically for autonomous driving.


Possible Paths Forward

Based on the state of autonomous technology today and the scenarios likely to arise as autonomous vehicles begin entering cities en masse, there are several possible paths that automotive OEMs will need to consider.


Blanket Liability Acceptance

The first option is to assume full liability for any accident caused by autonomous vehicles operating in autonomous mode. This is a position publicly announced by Volvo, which fits well with its safety-first brand, but other automotive OEMs have so far shown only limited interest in pursuing similar policies. Many OEMs fear the level of liability this opens up and question the efficacy of a business model in which manufacturers also act as de facto insurance providers. This leads to the second option.


New Insurance Categories for Autonomous Vehicles

Since virtually anything, from houses to body parts, can be insured, it makes sense that autonomous vehicles will also carry insurance under some business models. However, entirely new categories of insurance backed by technical analysis and in-depth statistics will need to be created to meet the demands of these vehicles. Insurance will likely need to be provided for unavoidable accidents (e.g., road damage, deer collisions) as well as for “technical fault” accidents potentially caused by the machine learning black boxes discussed above.


New Liability Determination Methods

One scenario not yet discussed is what happens in an accident between two autonomous vehicles from different manufacturers. In such a case, determining fault and corresponding liability may become very tricky and require extensive review of sensor logs, pre-crash data, etc. over many weeks. Intel’s Mobileye team recently proposed a possible solution to more complex accident situations like this: develop a fault determination model that can be used both to help avoid autonomous vehicle accidents and to quickly determine fault if they do occur. Such a model would need to be evaluated and agreed upon by both manufacturers and regulators, but could help remove some legal hurdles for autonomous vehicles.
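
Mobileye’s published proposal, Responsibility-Sensitive Safety (RSS), takes exactly this approach: it expresses duties like “keep a safe following distance” as explicit formulas that manufacturers and regulators could agree on in advance. The sketch below is a simplified version of the RSS minimum safe longitudinal following distance; the parameter values in the example call are illustrative assumptions, not agreed industry numbers.

```python
# Simplified sketch of an RSS-style minimum safe following distance.
# Parameter values in the example call are illustrative, not regulatory figures.

def rss_min_following_distance(
    v_rear: float,          # speed of the following (rear) vehicle, m/s
    v_front: float,         # speed of the leading (front) vehicle, m/s
    response_time: float,   # rear vehicle's response time, s
    a_accel_max: float,     # worst-case acceleration of the rear vehicle while responding, m/s^2
    a_brake_min: float,     # minimum braking the rear vehicle is guaranteed to apply, m/s^2
    a_brake_max: float,     # maximum braking the front vehicle might apply, m/s^2
) -> float:
    """Distance the rear vehicle must keep so that it can always stop in time,
    even if the front vehicle brakes as hard as physically possible."""
    v_after_response = v_rear + response_time * a_accel_max
    d = (
        v_rear * response_time
        + 0.5 * a_accel_max * response_time ** 2
        + v_after_response ** 2 / (2 * a_brake_min)
        - v_front ** 2 / (2 * a_brake_max)
    )
    return max(d, 0.0)

# Example: both vehicles travelling at 25 m/s (~90 km/h), 0.5 s response time.
print(rss_min_following_distance(25.0, 25.0, 0.5, 3.0, 4.0, 8.0))
```

The appeal for fault determination is that rules like this are checkable from logged speeds and distances alone: if the rear vehicle was closer than the formula allows, responsibility can be assigned without reverse-engineering either manufacturer’s machine learning stack.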


Regardless of which solution is pursued, it is becoming increasingly clear that even though the technology for autonomous vehicles is nearing market readiness, the legal landscape has yet to catch up. The next few years will be critical in developing the legal frameworks necessary to ensure that the autonomous vehicle future remains bright.

Michael Dorazio