The Covid-19 pandemic shows no signs of stopping, and hospitals in many countries are overloaded; experts worry that local health systems may collapse if the disease spins out of control. It is no surprise that many have begun to place their hopes in artificial intelligence, expecting it to speed up testing and reduce the burden on health workers.
However, a new study by Google Health, the first to track the impact of deep learning technology in real clinical environments, shows that even the most accurate AI system can make serious mistakes when it is not suited to the environment it works in.
Current regulations for applying AI in the clinic focus on ensuring a device's accuracy; there is no requirement that it actually help patients get better, because no such trial had ever taken place. Researcher Emma Beede of Google Health set out to determine how the clinical application of AI needs to change.
"We need to understand how artificial intelligence tools help people in specific situations, especially in health care, before we apply them widely," said Ms. Beede.
The first opportunity to test the AI tool in a real-world setting came from Thailand. The country's Ministry of Health has set a goal of screening 60% of people with diabetes for retinopathy, in an effort to reduce diabetes-related illness. But Thailand has only about 200 eye specialists to examine 4.5 million people with diabetes, far too few, so the government's goal is out of reach without technological support.
While its system still awaits FDA approval, Google was permitted to deploy it in Thailand to help combat the effects of diabetes. Ms. Beede and her colleagues traveled to 11 clinics across Thailand, installing a deep learning system trained to detect eye disease in people with diabetes.
Normally, a nurse photographs the patient's eyes and sends the images to a specialist, a process that can take up to 10 weeks. The Google Health AI system can confirm eye disease with up to 90% accuracy, which the team considers equivalent to expert level, and the diagnosis takes less than 10 minutes.
That sounds impressive, but the machine achieved this precision under laboratory conditions. Because it was unclear how it would perform in the real world, Google Health set out to find the truth.
After months of observation and interviews with doctors about their experience using the AI system, the Google researchers received discouraging results.
When things went smoothly, the AI sped up diagnosis. But sometimes the machine refused to give a result at all. Like many image recognition systems, it was trained on high-quality images and declines to analyze images below its quality standard. It rejected more than 20% of the images it was given.
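The deployed system's actual rejection criteria are not public, but the behavior described above can be sketched as a quality gate in front of the classifier. In this hypothetical example, an image is a 2-D grid of grayscale values, and pixel variance serves as a crude contrast/blur proxy: flat, low-contrast frames (the kind a dim clinic room produces) are refused before any grading happens. The function names and threshold are invented for illustration.

```python
# Hypothetical sketch of a pre-classification quality gate: a model trained
# only on high-quality fundus photos may simply refuse anything below a
# quality threshold, which is the failure mode the nurses ran into.

def pixel_variance(image):
    """Variance of all pixel intensities; a crude blur/contrast proxy."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def quality_gate(image, min_variance=200.0):
    """Return True if the image is sharp enough to grade, else False."""
    return pixel_variance(image) >= min_variance

# A flat, low-contrast frame is rejected; a high-contrast one passes.
blurry = [[128, 129, 127], [128, 128, 129], [127, 128, 128]]
sharp = [[10, 240, 15], [235, 20, 230], [12, 245, 18]]
print(quality_gate(blurry))  # False
print(quality_gate(sharp))   # True
```

A fixed threshold like this explains why over 20% of real-world photos could be turned away even when a human grader would have found them perfectly readable.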
Patients whose images were rejected by the AI had to travel to another clinic for a follow-up visit, and not everyone can afford to travel like that. The nurses taking the photographs were also frustrated: they said many rejected photos clearly showed healthy eyes that needed no follow-up, and retaking pictures, or sitting and editing them to make them clearer, consumed a great deal of valuable time.
A nurse photographs a patient's eyes.
Because the system requires staff to upload images to cloud storage for processing, a slow Internet connection can delay diagnosis. "Patients like getting results immediately, but the slow connection makes them complain a lot. They had been waiting here since 6 a.m., and in the first two hours of work we were only able to diagnose 10 patients," said one nurse.
The Google Health team is now in discussion with local health workers to develop a new workflow. In some cases nurses will be able to make the diagnosis themselves for easy cases with clear images, and Google specialists will tune the system so that it better handles low-quality photos.
However, the risks remain
"This is very important research for anyone who wants to enter the [image recognition] industry and apply AI solutions in real situations," says researcher Hamid Tizhoosh of the University of Waterloo, Canada. Tizhoosh is an expert in medical imaging, and he has criticized the hasty adoption of AI tools for Covid-19 diagnosis. He called the new Google Health study a wake-up call, showing that AI is still not ready: achieving high accuracy is only the first step.
Michael Abramoff, an ophthalmologist and computer scientist at the University of Iowa, has been developing an AI system for recognizing eye disease for years. Abramoff is also CEO of the eye disease startup IDx Technologies, which works in cooperation with IBM Watson. Dr. Abramoff warned against rushing AI into diagnosis, and against the backlash from patients who have bad experiences being diagnosed by AI.
We cannot expect AI to save the day in the Covid-19 pandemic.
"I'm glad Google is ready to track every step of the clinical process, because in health care it is not just the algorithm that matters," said Dr. Abramoff.
He also questioned comparisons between AI accuracy and human experts. Human specialists can discuss a borderline case, but the machine simply rejects it, recognizing only the two values "right" and "wrong". Dr. Abramoff believes an AI system needs to take part in the discussion of uncertain cases.
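One common way to act on Dr. Abramoff's point, sketched here with invented names and thresholds, is to replace a binary verdict with a three-way triage: the model's score is exposed, and cases in the uncertain middle band are routed to a human grader rather than forced into "right" or "wrong".

```python
# Hypothetical three-way triage: instead of a hard accept/reject, the
# model's probability is mapped to "refer", "no referral", or "ask a human".
# The thresholds below are illustrative, not from the Google Health system.

def triage(disease_probability, low=0.2, high=0.8):
    """Map a model's disease probability to one of three outcomes."""
    if disease_probability >= high:
        return "refer to specialist"
    if disease_probability <= low:
        return "no referral needed"
    # The uncertain middle band is exactly where human discussion belongs.
    return "flag for human review"

print(triage(0.95))  # refer to specialist
print(triage(0.05))  # no referral needed
print(triage(0.50))  # flag for human review
```

Under such a scheme, the system still automates the clear-cut cases but keeps clinicians in the loop for the borderline ones, rather than silently rejecting them.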
When everything went smoothly, efficiency skyrocketed. Researcher Beede and her colleagues witnessed firsthand how the AI system can help skilled clinicians. "There are nurses who work alone and diagnose up to 1,000 patients; with this new tool, they will be able to work even more smoothly," said Ms. Beede.
"Patients don't care whether it is an AI or a real person making the diagnosis. They want to know what the diagnostic experience will be like."
Source: Technology Review