Google's bold and responsible approach to AI

For years, Google has used artificial intelligence (AI) and machine learning to make its products more helpful, from Smart Compose in Gmail to faster routes in Maps. The company also sees AI's potential to help address major global challenges, such as advancing medicine and combating climate change. As it integrates AI, and generative AI in particular, into more Google experiences, it emphasizes taking a bold and responsible approach.

One way Google works to deploy AI responsibly is by building protections into its generative AI features from the start. The company has developed tools and datasets to identify and mitigate unfair bias in its machine learning models, evaluates training datasets for potential sources of bias, and seeks third-party input to account for societal context; it has also published research in this area. Google runs red-teaming programs with internal and external experts to probe for vulnerabilities and avenues of abuse, from cybersecurity weaknesses to societal risks, and this adversarial testing helps it identify and address emerging risks proactively. In addition, Google's policies define prohibited uses of its generative AI tools, aimed at preventing harmful, inappropriate, or illegal content, and a system of classifiers helps detect, prevent, and remove content that violates those policies. Google also applies additional safeguards to protect teenagers from risks associated with generative AI.
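To make the classifier idea concrete, here is a minimal, hypothetical sketch of how a policy classifier might gate a generated response before it reaches a user. The policy categories, the keyword-based scorer, and the threshold are illustrative assumptions for this sketch, not a description of Google's actual system, which would use trained models rather than keyword matching.

```python
# Hypothetical sketch: gating generated text with a policy classifier.
# Categories, scorer, and threshold are assumptions, not Google's system.
from dataclasses import dataclass

@dataclass
class PolicyVerdict:
    allowed: bool
    reason: str

BLOCKED_CATEGORIES = {"harassment", "dangerous_content", "illegal_activity"}

def classify(text: str) -> dict[str, float]:
    """Stand-in for a trained safety classifier.

    A real system would call a model that returns a score per policy
    category; here we fake it with simple keyword matching.
    """
    keywords = {"attack plan": "dangerous_content", "insult": "harassment"}
    scores = {category: 0.0 for category in BLOCKED_CATEGORIES}
    for phrase, category in keywords.items():
        if phrase in text.lower():
            scores[category] = 1.0
    return scores

def gate_output(candidate: str, threshold: float = 0.5) -> PolicyVerdict:
    """Block a generated response if any policy score meets the threshold."""
    scores = classify(candidate)
    for category, score in scores.items():
        if score >= threshold:
            return PolicyVerdict(False, f"blocked: {category} ({score:.2f})")
    return PolicyVerdict(True, "ok")

print(gate_output("Here is a recipe for banana bread."))  # allowed
print(gate_output("Here is an attack plan for ..."))      # blocked
```

In a production setting, a gate like this would typically run on both the user's prompt and the model's candidate response, with scores coming from dedicated classifier models and thresholds tuned per policy category.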

Google also aims to give people context for evaluating generative AI outputs. For instance, it has introduced tools like "About this result" in generative AI in Search to help users assess the information they find, and it has updated its election advertising policies to require advertisers to disclose digitally altered or generated material in campaign ads, giving viewers more context for election advertising on Google platforms.

To protect user privacy and personal data, Google builds its AI products with the same privacy protections that apply to its other services. Users control their data and can pause, save, or delete it at any time, and Google has a longstanding policy of not selling personal information, including for advertising purposes. There are also privacy safeguards specific to generative AI products: for example, if users choose to use Workspace extensions in Bard, their content from Gmail, Docs, and Drive is not seen by human reviewers, used to show them ads, or used to train the Bard model.

Google recognizes that addressing the complexities of AI requires collaboration. It engages with companies, academic researchers, civil society, governments, and others; works with organizations such as the Partnership on AI and MLCommons; and launched the Frontier Model Forum with other leading AI labs to promote responsible development. Google also emphasizes transparency, regularly publishing research to share its expertise with the industry, and remains committed to working with industry, governments, researchers, and others to harness the opportunities AI presents while effectively addressing its risks.
