The Dystopian Concerns of AI for Healthcare

Ryan Black
JANUARY 18, 2018


John Mattison, MD, likes science fiction. In a speech at the AI in Healthcare Summit today in Boston, he brought up Niven’s laws:
  • Never fire a laser at a mirror.
  • Giving up freedom for security is beginning to look naïve.
  • It is easier to destroy than to create.
  • Ethics change with technology.
  • The only universal message in science fiction: There exist minds that think as well as you do but differently.
All (except the first) played a part, but the fourth in particular anchored his talk about the ethical challenges that artificial intelligence (AI) creates in healthcare. As he sees it, there are 4 major concerns: that AI will take jobs and dignity away from humans; that autonomous devices could be harnessed by bad actors and become a physical threat to humans; that unintentional bias will become part of AI systems; and that bias will be intentionally programmed into the technology.

The Luddite replacement scenario is well known, and some groups are trying to harness blockchain to avoid the human-extermination Terminator scenario. Mattison placed particular emphasis on bias.

Predictive technologies meant to determine who is most likely to commit a crime have already been shown to unfairly flag African Americans. In healthcare, he said, similar problems could manifest: technologies could incorrectly filter certain people out of receiving certain treatments based on previous and potentially biased evidence, denying them access to interventions that might actually have helped them.

Intentional bias, according to Mattison, is “really quite scary.” Similar to the “Nosedive” episode of the television show Black Mirror, he said, some Chinese companies already have their employees participate in a behavioral rating system. Like a credit score, it encourages them to display certain behaviors to receive certain privileges and keep their jobs. It is a form of social control, determining a person’s worth by the expectations of whoever makes the rating algorithm, and if applied to healthcare, the outcomes could be even crueler than the denials of unintentional bias.

“If we don’t consciously, deliberately, actively, explicitly, vocally look out for the rights of the marginalized, they will suffer in incredible and unrecognized ways,” he said. “The only way to overcome it is to be aware.”
