Who Is Responsible When AI Fails?

Gautham Thomas
JUNE 27, 2018


Determining Who Is at Fault When Things Go Wrong

Healthcare applications of AI occupy a critical position at the nexus of several important concerns for providers and patients, like privacy and human life, according to Lucas Bento, MS, an associate at Quinn Emanuel Urquhart & Sullivan, a large international law firm. Bento advises companies on the development stages of AI applications and technologies.

“The biggest question,” Bento said, “is how to allocate responsibility when the chain of causation is breached by, say, unexpected software error or hacking. Some theories of strict liability may be applicable.”

Strict liability is a legal claim that does not require a finding of fault, such as negligence or intent. Product liability draws on this principle.

“In other instances, a court may take a deeper dive into what or who actually caused the error,” Bento said. “These are all novel issues that, barring legislative intervention, will be resolved via successive litigations across the country.”

But if such cases settle before reaching court, as Elaine Herzberg’s did after the March Uber crash, no precedent is established for how to adjudicate liability for errors or injuries involving AI.

Other fatal crashes involving AI systems have killed the drivers themselves, unlike Herzberg, a pedestrian struck in Arizona. The National Transportation Safety Board (NTSB) faulted Tesla’s Autopilot driving system for a fatal crash in Florida in 2016. The deceased driver’s family and their lawyer ultimately released a statement clearing Tesla of responsibility, but they declined to comment on any settlement.


On March 23, shortly after Herzberg’s death, Walter Huang died in a fiery crash while driving a Tesla Model X and using its Autopilot system. B. Mark Fong of Minami Tamaki, the law firm representing Huang’s family, claims Tesla’s Autopilot feature is defective and caused the driver’s death. The firm is exploring a wrongful-death lawsuit on grounds that include product liability and defective product design.

Tesla has argued that its Autopilot system is not self-driving and that its user agreement requires the driver’s hands to be on the wheel when the system is engaged.

A. Michael Froomkin, JD, a law professor at the University of Miami and a co-editor of Robot Law, calls the human in the AI system the “moral crumple zone.”

“If there’s no human in the loop, it’s not controversial that whoever designed the tool is liable,” Froomkin said. “The AI [portion of the system] doesn’t impose anything special on that. ...The special bit is where the human is in the loop.”

Allocating responsibility between a human and AI in a system that relies on both parties is an unsettled and controversial issue, he said.

Designers of AI systems can claim they were not the last in the chain of responsibility, that AI is only advisory, and that the human is the decision maker.

Developers might also argue that a doctor or insurance company bears ultimate responsibility for negative outcomes or damages.

“If you have a human in the loop, if the system is advising a person, the person is going to take the fall,” Froomkin said.

As he put it, when linking this question to healthcare applications of AI: “When is it appropriate to blame the doctor using the AI? The easy case is where there’s no person. The harder case is when there’s a person between [the AI and the patient].”

AI doesn’t have legal standing because it is not sentient. But “if [AI learns] on the job, you have an interesting liability problem: Was it taught improperly?” Froomkin said. “Those are hard questions that are very fact dependent.”

According to Bento, product liability theories seem well suited to address questions of liability arising from the use of AI products and services. “Some challenges exist as to attribution of liability due to computer code errors,” Bento said.


When AI Shows Bias

Both Bento and Freed raised other issues that accompany the expanding use of AI, such as bias, and their implications for liability.

“Bias is another big issue for liability,” Bento said. “Eligibility for products and services is increasingly dictated by algorithms. For example, some consumer finance companies run algorithms to decide who is eligible for a loan or other financial product. Biased outcomes could create litigation exposure.”
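
One simple way to surface that kind of exposure is to compare selection rates across groups, sometimes summarized as an adverse impact ratio. The sketch below is purely illustrative, assuming made-up approval decisions and hypothetical group labels; it is not tied to any lender or product mentioned in this article, and the four-fifths threshold it uses is a common screening heuristic, not a legal standard.

```python
# Illustrative only: a hypothetical check of loan-approval rates by group.
# The data and the 0.8 cutoff are invented for this sketch.
from collections import defaultdict

# Made-up (group, approved) decision pairs a lending algorithm might produce.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

# Approval rate per group, compared against the best-treated group.
rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```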

Freed raised the issue of using algorithms in criminal sentencing to predict recidivism. In 2016, the nonprofit news organization ProPublica released a detailed investigation into how machine-driven “risk assessments” used to inform judges in sentencing showed bias against black defendants. One company, Northpointe, which created a widely used assessment called COMPAS, does not disclose the calculation methods used to arrive at final results. (Northpointe disputes ProPublica’s report.)
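
ProPublica’s central finding concerned error rates: defendants who did not go on to reoffend were flagged as high risk at very different rates depending on race. A minimal sketch of that kind of audit appears below, assuming invented records, a made-up score threshold, and generic group labels; it does not reproduce the COMPAS data or ProPublica’s methodology.

```python
# Hypothetical audit: compare false positive rates of a risk score by group.
# Records are invented; this is not the COMPAS data or ProPublica's analysis.
records = [
    # (group, risk_score on a 1-10 scale, reoffended within two years)
    ("group_a", 8, False), ("group_a", 7, False), ("group_a", 9, True), ("group_a", 3, False),
    ("group_b", 4, False), ("group_b", 8, True), ("group_b", 2, False), ("group_b", 6, False),
]
HIGH_RISK = 7  # scores at or above this threshold are treated as "high risk"

def false_positive_rate(group):
    # Among people who did NOT reoffend, how many were labeled high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1] >= HIGH_RISK]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else float("nan")

for group in ("group_a", "group_b"):
    print(f"{group}: false positive rate {false_positive_rate(group):.0%}")
```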

“Especially in healthcare, we need to know why the decisions are being made the way they are,” Freed said. “We need to know it’s the right cause rather than something that’s more efficient or something that’s the right call but for the wrong reasons.”

AI and machine-learning systems use data sets, which require careful thought and vigilance in their own right, Freed said.


SOPHiA Genetics’ SOPHiA AI genetics testing system, which has been used to test some 200,000 patients, now has a large data pool from which to draw and refine its assessments and predictions, and its international user base gives it geographic range. By contrast, many machine-learning applications recently built to analyze medical images must rely on a relatively small data pool to train their clinical predictions.

“What’s interesting for healthcare [is that] a lot of what we’re doing is based on training data,” Freed said. “It seems [as if] your algorithms are only as good as your training data. A lot of the training data, if you don’t have a ton of resources, are coming from sources that are easily obtainable or free or not great sources of data.”
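
Freed’s point about training data can be made concrete with a quick experiment: train the same model on progressively smaller slices of data and score it against one held-out test set. The sketch below is a generic illustration using scikit-learn’s bundled breast cancer dataset as a stand-in; it has no connection to SOPHiA Genetics or any other system discussed in this article.

```python
# Rough illustration of "algorithms are only as good as your training data":
# the same model, fit on progressively larger training pools, is scored on
# one held-out test set. Uses a bundled scikit-learn dataset as a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

for n in (20, 50, 100, len(X_train)):  # training-pool sizes to compare
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>3} records -> test accuracy {model.score(X_test, y_test):.3f}")
```

On most runs, accuracy climbs as the training pool grows, which is the “only as good as your training data” point in miniature.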

