Google’s Senior Vice President of Global Affairs, Kent Walker, highlights the need for sound government policies that promote progress in AI while reducing risks of abuse. He calls for building on existing regulation, adopting a proportionate, risk-based framework focused on applications, promoting interoperability in AI standards and governance, ensuring parity in expectations between AI and non-AI systems, and promoting transparency that supports accountability. Walker suggests using existing authorities to manage challenges and pursuing new rules only where needed. He also points to collaborative efforts, such as the US National Institute of Standards and Technology’s AI Risk Management Framework and the OECD’s AI Principles and AI Policy Observatory, that can provide clear guidelines for developing responsible AI practices.
Source: A shared agenda for responsible AI progress