A police reform bill introduced in the House of Representatives Monday by prominent Democrats in response to weeks of protest over racist policing practices would do just that. But some privacy advocates say its restrictions aren’t tight enough and could legitimize the way police use facial recognition today.
“We’re concerned,” says Neema Guliani, senior legislative counsel for the ACLU in Washington, DC, citing evidence that many facial recognition algorithms are less accurate on darker skin tones. She urges a federal ban on facial recognition “unless and until it can be used in a way that respects civil liberties”; Guliani says it’s not clear that that is possible. Last year several cities, including San Francisco, banned use of the technology by government agencies.
The proposed Justice in Policing Act would, among other things, tighten the definition of police misconduct and ban chokeholds like the one that killed George Floyd in Minneapolis last month. It is sponsored by senators Cory Booker (D-New Jersey) and Kamala Harris (D-California), and representatives Karen Bass (D-California) and Jerrold Nadler (D-New York). A five-page summary of the bill’s main provisions doesn’t mention that it includes what could become the first federal restrictions on facial recognition technology.
One part of the bill requires that federal uniformed officers, such as FBI agents, wear bodycams and use dashcams in marked vehicles. It states that facial recognition cannot be built into these devices, or used to scan bodycam video in real time, for example to spot persons of interest in a crowd. To apply facial recognition to bodycam footage, federal agents would need to secure a warrant after convincing a judge the information is “relevant to an ongoing criminal investigation.”

Another provision specifies that police departments using federal grants to buy or rent bodycams must adopt policies on the use of facial recognition on footage from the devices, including securing a judge’s approval and deploying it only in cases of “imminent threats or serious crimes.”

None of those rules directly limits what a sheriff’s office or city police department could do with facial recognition.
Jameson Spivack, a policy associate at Georgetown’s Center on Privacy and Technology, says those restrictions wouldn’t affect many of the ways facial recognition is used by US law enforcement. The technology is more commonly applied to footage from sources other than body or dash cams, such as surveillance cameras, sometimes solicited from private citizens or businesses. “If Congress passes this legislation that barely touches facial recognition at all, companies could go right back to selling to the police and not much will change,” Spivack says.
Civil rights groups that campaign on surveillance and facial recognition say that would be concerning because the technology is unreliable and expands police powers—effects that burden communities of color most of all.
“I’m very disappointed that Congress would take this sort of regulatory approach,” says Albert Fox Cahn, founder of the nonprofit Surveillance Technology Oversight Project and a fellow at NYU School of Law. “This is incredibly biased technology that will put Americans of color at higher risk of wrongful arrest than white Americans.”
IBM and Amazon didn’t respond to requests for comment on the Justice in Policing Act; Microsoft declined to comment. IBM has halted sales of facial recognition permanently; Microsoft and Amazon both paused sales only to US police until federal regulation is in place, with Amazon saying its hiatus will last 12 months.
Facial recognition providers have come under growing scrutiny from researchers and privacy advocates in recent years over evidence that the technology often makes more errors on darker skin tones than lighter ones.
A report by the National Institute of Standards and Technology published in December found that when analyzing mugshots, many commercial facial recognition algorithms reported more false positives for American Indian, Black, and Asian people, although some of the software tested exhibited minimal differences.
Two influential academic studies found that services offered by IBM, Microsoft, and Amazon that try to identify a person’s gender from their face were very accurate on people with light skin, but very inaccurate for people with darker skin tones, especially women. All three companies say they have since improved their technology.
MIT researcher Joy Buolamwini, a coauthor on those studies and founder of campaign group Algorithmic Justice League, likes that the Justice in Policing Act blocks real-time facial recognition on bodycams. Recent research has shown the unsteadiness of such footage reduces accuracy. But she says federal rules on facial recognition should also cover the many other uses of the technology.
“Regardless of the accuracy of these systems, mass surveillance enabled by facial recognition can lead to chilling effects and the silencing of dissent,” Buolamwini says. Last month she coauthored a paper suggesting the creation of a new federal agency to regulate facial recognition, modeled on the Food and Drug Administration.
The shockwaves from George Floyd’s death suggest such novel ideas might win more serious consideration than they would have previously. Civil rights groups also expect additional state and local ordinances restricting facial recognition, like those passed in San Francisco and Oakland, California, and Somerville, Massachusetts, which ban all government use of the technology.
The three big tech companies that asked for federal facial recognition rules this week are expected to take an active role in trying to shape them. IBM, Amazon, and Microsoft have all mentioned facial recognition on Senate lobbying filings this year.
Microsoft has also been an active player in state legislation on facial recognition, including in its home state of Washington. It supported a facial recognition bill approved in March that takes effect in July 2021 and requires police to obtain a warrant for any use of the technology. The ACLU and other privacy groups say the measure is too weak, criticism that contributed to a similar bill, also supported by Microsoft, failing in California’s legislature last month.
Updated, 6-12-20, 6:10pm ET: This article has been updated to more clearly reflect the stance of the ACLU.