By Ben Winters, EPIC Counsel
Last week, the White House Office of Science and Technology Policy released a “Blueprint” for an “AI Bill of Rights.” The five major principles are Safe and Effective Systems; Freedom from Algorithmic Discrimination; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback. EPIC published an Op-Ed in Protocol outlining specific steps the White House can take to implement the principles in the Blueprint.
In their own words, “The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.”
However, the Office of Science and Technology Policy did outline several expectations for how people should be able to experience automated decision-making systems and how entities should act when developing and using them.
EPIC will continue to push for laws that legally enshrine and protect these and many more protections. The specific actions are excerpted below (emphasis added by EPIC).
-[During development of a system] Consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. Concerns raised in this consultation should be documented, and the automated system developers propose to create, use, or deploy should be reconsidered based on this feedback.
-Systems should undergo extensive testing before deployment. “Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.”
-Outcomes of these protective measures (pre-deployment testing, risk identification and mitigation, and ongoing monitoring) should include the possibility of not deploying the system or removing a system from use.
-[Systems] should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.
-Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.
-An expansive set of classes should be protected from discrimination by algorithms, and systems should be used and designed in an equitable way: Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
-Protections should include proactive equity assessments as part of the system design, use of representative data, and protection against proxies for…