FTC’s Strong Rite Aid Enforcement Order Is a Warning to Companies Using Biometric AI Systems

In December, the Federal Trade Commission announced a settlement with Rite Aid over the pharmacy’s discriminatory use of facial recognition technology in its stores. Between 2012 and 2020, Rite Aid deployed facial recognition surveillance systems to identify individuals who may be shoplifting—yet did so without assessing the accuracy or bias of the technology. Rite Aid also used facial recognition technology disproportionately in stores in plurality non-white neighborhoods.

While the use of facial recognition surveillance can be harmful in any context, Rite Aid failed to implement even the most basic safeguards, validation studies, or trainings for employees required to “enforce” the match alerts issued by the system. As a result, “Rite Aid employees recorded thousands of false positive match alerts,” the FTC explained. This led to the profiling, harassment, and embarrassment of people simply trying to shop.

This example, while particularly well documented and egregious, is not much of an outlier. AI systems, particularly those used to conduct biometric surveillance, have a long track record of causing racially disparate harms.

In addition to placing a five-year ban on Rite Aid’s use of facial recognition, the settlement requires the company to delete any images of consumers collected with the technology and any algorithms developed using such images. Rite Aid must notify consumers when their biometric information is processed by a surveillance system in the future or when any action is taken affecting them because of such a system. The company is also required to implement strong data security and provenance practices.

This enforcement order requires Rite Aid to institute essential measures such as meaningful notice to consumers, independent third-party assessments, and commonsense data deletion practices. As the FTC has made clear through its enforcement actions—as well as through its blog posts and public statements—entities using AI systems that implicate sensitive types of personal data or that operate in sensitive contexts must:

  • Institute basic data minimization requirements, only collecting, keeping, and using the data required to perform a specific, legitimate purpose;
  • Institute data security practices that reflect the risks of unnecessary and invasive data collection and use practices;
  • Proactively notify people who are being surveilled or whose data is processed by an automated system;
  • Require specific training and limitations on access; and
  • Be consistent, fair, and forthcoming in privacy policies.

The Rite Aid order is the latest in a line of FTC orders that require model or algorithmic deletion, also known as disgorgement (more on that here). This remedy rests on the principle that a business should not be able to continue profiting and innovating off of models and algorithms created or enhanced with illegally obtained data.

Deletion of products developed from ill-gotten personal data is also required in other recent FTC orders. This includes the Commission’s recent X-Mode consent decree, which requires the data broker to “[d]elete or destroy all the location data it previously collected and any products produced from this data unless it obtains consumer consent or ensures the data has been deidentified or rendered non-sensitive[.]”

Though the path toward a comprehensive AI regulatory framework is a long and slow…

Legal Dive: State data privacy laws called toothless by public interest groups 

Of the 14 states that have enacted comprehensive data privacy laws, only California gives people robust protections, and even its law is weak in key areas, the Electronic Privacy Information Center and U.S. PIRG Education Fund say.

“Weak, industry-friendly laws allow companies to continue collecting data about consumers without meaningful limits,” say the groups in The State of Privacy, released February 2. “Consumers are granted rights that are difficult to exercise, and they cannot hold companies that violate their rights accountable in court.” 

Read more here.

Bangor Daily News: Maine lawmakers close in on nation-leading data privacy bill 

California passed a comprehensive consumer data privacy law in 2018, with 13 states since enacting varying legislation that followed models initially drafted by industry giants such as Amazon, per the Electronic Privacy Information Center and U.S. PIRG Education Fund.

Those two groups released a report Monday giving “D” or “F” grades to nine of the 14 states — including Maine’s New England neighbors, New Hampshire and Connecticut — and a “B+” to California for privacy laws. O’Neil’s bill would earn Maine an “A” and become the strongest data privacy law in the nation, the groups said. 

The electronic privacy group, EPIC, and Consumer Reports have been among the national groups backing O’Neil’s bill, along with Attorney General Aaron Frey, a Democrat. Big Tech firms and the Maine State Chamber of Commerce said last year they preferred Keim’s bill. After facing criticism that it was too industry-friendly, Keim amended it to require that firms receive “opt-in” consent for data collection and to eliminate a small-business exception, among other changes.

Read more here.

StateScoop: Maine could have strongest data privacy law in nation if bill passes   

O’Neil told StateScoop last year that it’s on states to enact privacy protections as long as the federal government does not. Her proposed legislation is modeled largely after the federal data privacy bill that failed in Congress in 2022, the American Data Privacy and Protection Act. StateScoop reached out to O’Neil for comment on her new bill but did not hear back in time for publication.

…O’Neil’s bill also features a private right of action, allowing individuals to recoup $5,000 in damages from companies that violate the bill. This legal protection, along with the bill’s “data minimization” obligations, which prevent companies from collecting unnecessary information, is why the bill could go on to be the only privacy law on the books to earn an “A” grade from the Electronic Privacy Information Center and the U.S. PIRG Education Fund, two research groups that last week released a report finding most states’ privacy laws offer limited protections for consumers.

They evaluated 14 data privacy laws for their strengths and weaknesses and assigned letter grades to each. The highest-scoring law, the California Consumer Privacy Act, earned a “B+”, while the Connecticut Data Privacy Act, which Keim’s bill is most similar to, earned only a “D.”

Read more here.

DHS Disregards Internal Policies and Avoids Fourth Amendment Protections to Track Your Location

The Department of Homeland Security’s Office of the Inspector General (OIG) published a report on DHS’ use of Commercial Telemetry Data (CTD). CTD, in brief, is data collected from mobile devices by private entities that can form a detailed timeline of a device’s location over the period of time covered by the dataset. The OIG report found that Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and the United States Secret Service (USSS) bought and used CTD in violation of their own meager internal privacy policies. The report also found a lack of the internal controls needed to make sure those privacy policies were followed in the first place. The OIG report is a damning account of the ways in which DHS’ oversight mechanisms have failed to curtail repeated privacy abuses and hold DHS components accountable for violations of the Department’s own policies. What’s more, the purchase of CTD has allowed DHS and its components to make an end-run around the Fourth Amendment’s warrant requirement and is just the latest example of the need to reform government surveillance to protect Americans’ privacy.

DHS Fails to Follow its Own Internal Policies at a Systemic Level

The OIG report details how the DHS components refused to follow appropriate technology procurement protocols and showcases the agency’s startling lack of internal oversight during the actual use of the products that provide CTD, allowing DHS agents to engage in detailed surveillance without credentials or guidance. For example, the report documents a troubling pattern of disregard for privacy impact assessments (PIAs). Under the E-Government Act, the federal government must complete, review, and publish a privacy impact assessment “before . . . initiating a new collection of information” in an identifiable form that “will be collected, maintained, or disseminated using information technology” from ten or more persons.[1] The assessment must address, among other things, what information will be collected, why the information is being collected and the agency’s intended use for the information, with whom the information will be shared, and how the information will be secured.[2] The Office of Management and Budget regulations dictate that PIAs must be completed “from the earliest stages of” and continuously throughout “the information life cycle.” PIAs also provide an analysis of privacy concerns posed by the acquisition and steps that the government agency will take to mitigate the impacts on privacy.[3]

DHS’ own Privacy Policy and Compliance instructions require a Privacy Threshold Analysis (PTA) to occur when an IT system, technology, etc. involves PII to determine whether further privacy compliance documentation (such as a PIA) is required. The OIG report defines PII as “any information that permits the identity of an individual to be directly or indirectly inferred, including other information that is linked or linkable to an individual.” While these PIAs may look like a valid safeguard on paper, DHS has systematically failed to use them appropriately, if it fills one out at all.

To avoid conducting a PIA, ICE erroneously denies…