How Google thinks about health equity when building generative AI models

The author, Google’s Chief Health Equity Officer, shares their personal motivation for becoming a physician: a desire to improve the quality of healthcare, rooted in their family’s experience advocating for better care for their father. They emphasize the importance of ensuring that healthcare is accessible to everyone and delivered with dignity and respect.

The author acknowledges the potential of AI technologies to address biases in healthcare and promote health equity, but also warns that if not used responsibly, these innovations can exacerbate existing inequities. To prevent this, the author outlines four ways in which they embed health equity into their work with AI.

Firstly, they integrate foundational approaches to health equity, such as Community-Based Participatory Research (CBPR), into their design and evaluation methods. They consider the social context of their users, including cultural, historical, and economic circumstances, in order to build solutions that are more inclusive and effective. One example is their work using AI to accurately recognize and render diverse skin tones, resulting in camera features that work for everyone.

Secondly, the author emphasizes the importance of diverse representation in data. Historically, clinical trial research has lacked diversity, thereby excluding historically marginalized groups from important medical advancements. To address this, the author strives to make their data collection and curation processes inclusive and equitable. They also work with the broader AI research community to identify best practices for diverse representation in data. Furthermore, they collaborate with the National Institutes of Health (NIH) on the Pangenome project, aiming to expand the genomic sequencing data available to better understand and treat diseases across diverse ancestries.

Thirdly, the author discusses the need to consider health equity in real-world use cases of AI systems. Incomplete and biased data can perpetuate harm among marginalized populations. The author therefore stresses the importance of grounding the evaluation of AI models in specific real-world use cases that reflect the experiences of those populations.

Lastly, the author highlights the importance of fostering inclusive collaboration. They recognize that social factors, such as where a person lives, works, or goes to school, can impact their health. To effectively recognize and understand these social drivers, collaboration with experts from various fields, such as social and behavioral science, policy, and education, is essential. The author mentions their partnership with Google’s Responsible AI Team and their Equitable AI Research Roundtable (EARR) Program as examples of how they incorporate a multidisciplinary approach to addressing the impacts of AI on historically marginalized communities.

Overall, the author emphasizes that their work at the intersection of AI and health equity is an ongoing journey that requires responsibility and accountability. They strive to center their efforts on marginalized populations, aiming to build solutions that make healthcare more equitable and address historical biases. They stress the need to prioritize accuracy and fairness over speed, because getting it wrong is not an option.
