In 2015, Google’s research team set out to explore how AI could improve people’s health. They met with doctors who envisioned improving access to sight-saving care, which led them to focus on diabetic retinopathy (DR), a leading cause of preventable blindness. In 2016, they built an AI model that performed on par with eye-care doctors, and in 2018 their partner team at Verily received a CE mark for the tool, Automated Retinal Disease Assessment (ARDA). The first patient was screened with ARDA in Madurai, India. Since then, ARDA has screened over 200,000 patients in clinics around the world, from urban cities in the EU to rural communities in India. The team shares its key lessons in an article published in Nature Medicine.
The first key lesson was that AI tools should be validated in real-world clinical settings before implementation; although ARDA was developed in a research setting, the team recognized that research performance alone was not sufficient evidence for deployment. They also found that involving multiple stakeholders, including medical experts, regulators, and patients, helped them understand the diverse perspectives and priorities of those the tool would affect, and made the product adaptable to different healthcare systems.
The second lesson was the importance of engaging patients and clinicians so that both groups clearly understood the tool and its potential benefits. One way they engaged clinicians was by holding training sessions on incorporating the tool into clinical workflows. They also prioritized end-users’ needs by creating a user-friendly interface and incorporating feedback from patients and clinicians.
Thirdly, they emphasized the importance of maintaining transparency in the development process. Because medical AI is still a relatively new field, they made sure to provide clear explanations of the technology’s limitations and potential pitfalls. This gave stakeholders a realistic understanding of what the technology could do and how it was being developed, enabling them to make informed decisions about implementation.
The fourth lesson was the value of starting with a specific use case. By focusing on diabetic retinopathy, they were able to define a clear mission and set of goals for ARDA’s development, which helped them stay focused on delivering a useful, clinically validated tool. They also found that building relationships with healthcare providers, patients, and other stakeholders helped them tailor their AI tools to specific medical needs and improved the likelihood of successful implementation.
Lastly, they stressed the importance of regulatory compliance. Medical AI tools must pass a rigorous regulatory process to demonstrate that they are safe, effective, and able to integrate into existing healthcare systems. Obtaining the necessary approvals can be lengthy and challenging, but it is essential to ensuring the tool benefits patients in the long term.
In conclusion, these key lessons can guide the development and implementation of medical AI tools: involve multiple stakeholders, engage clinicians and patients, maintain transparency, focus on specific use cases, and adhere to regulatory guidelines. By doing so, we can develop safe and effective medical AI tools that benefit patients and healthcare providers alike.