Google has formed a dedicated AI Red Team that specializes in carrying out complex technical attacks on AI systems. The team works closely with Google's traditional red teams and draws on the expertise of its Threat Intelligence teams to simulate realistic adversary activity. By adapting relevant research and testing a range of system defenses, it identifies opportunities to improve the safety of AI systems. The accompanying report lists tactics, techniques, and procedures (TTPs) observed from real-world adversaries and used in red-teaming exercises, including prompt attacks, training-data extraction, model backdooring, adversarial examples, data poisoning, and exfiltration.
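To make one of the listed TTPs concrete, here is a minimal sketch of an adversarial example built with the Fast Gradient Sign Method (FGSM) against a toy linear classifier. The model, weights, and epsilon value are all hypothetical, chosen only to illustrate the attack class the report names; real red-team exercises target far more complex systems.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability of class 1 under a simple logistic model (assumed).
    return sigmoid(w @ x)

def fgsm_perturb(w, x, y, eps):
    # For a linear logistic model, the gradient of the binary
    # cross-entropy loss w.r.t. the input x is (p - y) * w.
    # FGSM adds a small step in the sign direction of that gradient,
    # nudging the input toward higher loss (i.e., a wrong prediction).
    p = predict(w, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)                     # hypothetical "trained" weights
x = rng.normal(size=8)                     # a clean input
y = 1.0 if predict(w, x) >= 0.5 else 0.0   # the model's clean-input label

x_adv = fgsm_perturb(w, x, y, eps=0.5)
# The perturbed input is close to x, but the model's confidence in its
# original label drops.
print(predict(w, x), predict(w, x_adv))
```

The same sign-of-the-gradient idea scales to deep networks, which is why adversarial examples appear alongside data poisoning and backdooring in the report's TTP list: each exploits the model itself rather than the surrounding infrastructure.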
Source: Google introduces AI Red Team