Employing Eye Experts, Google Sharpens Its Disease-Detecting AI

Jack Murtha
MARCH 14, 2018

When Google first published promising findings on its artificial intelligence’s ability to spot diabetic retinopathy, healthcare-technology observers began rubbing their hands together, eager to see how the Jack of All Tech might advance bleeding-edge medicine. Since then, the company has unveiled other high-tech healthcare efforts, but perhaps the most exciting came this week, when Google researchers published study findings suggesting that its disease-detecting machine learning algorithm has grown even stronger.

In fact, the AI’s ability to identify diabetic retinopathy, a vision-threatening complication of diabetes, is now equivalent to that of board-certified ophthalmologists and retinal specialists, according to the study, published yesterday in the journal Ophthalmology. The improvement came with help from the algorithm’s human counterparts, who judged “a small subset” of retinal images, which further trained the technology and resulted in fewer errors, according to the study and corresponding announcement.

“We believe this work provides a basis for further research and raises the bar for reference standards in the field of applying machine learning to medicine,” Lily Peng, MD, PhD, a product manager for Google AI Research Group who worked on the study, said in a statement.

Peng’s motivation to fine-tune the algorithm goes beyond her professional life. She aims to build a diabetic retinopathy-detecting AI system that would “be good enough for her grandmother,” according to the American Academy of Ophthalmology, the publisher of the journal.

“For my grandma, I would love to have a panel of subspecialists who actually treat the disease, to sit and debate her case, giving their opinion,” Peng said. “But that is really expensive, and it’s hard to do. So how do you build an algorithm that gets close to this?” Her solution, it turned out, was to harness expert input and funnel it into the AI.

The advance builds upon Peng and her team’s prior research, which saw them develop pattern-finding neural networks to pinpoint diabetic retinopathy. The effort required uploading “thousands” of retinal scans into the networks, which then learned to spot hemorrhages and lesions that are indicative of the complication.

Results suggested the AI performed nearly as well as human experts, prompting Peng to refine the algorithm further. She solicited consensus decisions from eye experts, who captured some of the nuance of the grading process by correcting prior errors, adding specificity, and clearly defining gray areas prone to uncertain diagnoses. This improved set of graded images was then used to further train the machine learning model, according to the research.
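The shift described above, from independent grades to a discussed consensus, can be illustrated with a brief sketch. Everything here is hypothetical (the image names, grades, and the three-reader setup are illustrative, not from the study); it simply contrasts a majority-vote label with an adjudicated one on the 5-point diabetic retinopathy severity scale.

```python
from collections import Counter

# Hypothetical grades from three independent readers per retinal image,
# on a 0-4 severity scale (0 = no retinopathy ... 4 = proliferative).
independent_grades = {
    "image_001": [1, 1, 2],
    "image_002": [2, 3, 3],
    "image_003": [0, 1, 1],
}

def majority_vote(grades):
    """Baseline label: the most common grade among independent readers."""
    return Counter(grades).most_common(1)[0][0]

# Adjudication: where readers disagree, a panel debates the case and records
# a single consensus grade (these values are illustrative).
adjudicated_grades = {"image_001": 1, "image_002": 3, "image_003": 1}

for image, grades in independent_grades.items():
    baseline = majority_vote(grades)
    consensus = adjudicated_grades[image]
    note = "" if baseline == consensus else "  <- changed by adjudication"
    print(f"{image}: majority={baseline}, adjudicated={consensus}{note}")
```

The adjudicated grades, rather than the raw majority votes, would then serve as the reference labels for retraining, which is the kind of label refinement the study credits for the reduced error rate.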

Experts warn that the more than 29 million Americans with diabetes could face diabetic retinopathy, a complication that often causes no early changes in vision but can lead to irreversible blindness.

Google isn’t the only innovator eyeing this life-changing complication. Other companies have also aimed their algorithms at diabetic retinopathy, in part because retinal images have proved a valuable training ground for AI. A similar diagnostic tool developed by IDx, for example, has sped through the FDA approval process.

