EPIC Urges OMB to Strengthen Draft Regulations for Government Use of AI

EPIC submitted comments to the White House Office of Management and Budget commending its strong step toward regulating federal agencies' use of AI and recommending several ways to strengthen the draft guidance.

Last month, the agency took a major step toward ensuring the federal government uses new AI technologies responsibly: it released draft guidance outlining federal agencies’ obligations and suggested actions around the responsible development, use, and procurement of AI technologies.

The draft guidance comes on the heels of President Biden's Executive Order 14110, entitled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," and incorporates previous federal efforts to manage the risks and impacts of AI technologies, including the White House's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology's AI Risk Management Framework.

It has three overarching purposes: (1) it establishes new agency roles, resources, and processes for managing new and existing government AI systems, including a new Chief AI Officer (CAIO) role to lead each agency's implementation of the OMB draft guidance; (2) it requires agencies to build internal processes to foster responsible AI innovation and adoption; and (3) it sets out minimum AI risk management practices that most executive agencies are expected to follow when developing, procuring, or using AI systems that impact individuals' rights or safety. These practices include ongoing AI impact assessments covering an AI system's intended purpose, potential risks, and relevant data; real-world performance testing to ensure reliability and risk mitigation in practice; independent evaluations of AI performance; annual AI monitoring; and consultations with affected and underserved communities.

EPIC recommends that OMB (1) enforce agency compliance with its AI guidance; (2) refine responsible AI provisions like AI impact assessments and AI use case inventory reporting to increase transparency and accountability for more types of AI systems; (3) strengthen agency data management practices through additional data minimization provisions and reinvigorated privacy impact assessment reporting requirements; (4) encourage AI adoption only where AI can serve as a curated tool to meet predefined agency needs; and (5) monitor national security systems and mandate AI risk management practices when they are used for other purposes covered by OMB's AI guidance.
