No other U.S. city is as closely associated with technological advancement as San Francisco, which has, for the most part, embraced both tech companies and their services in civic life. But on Tuesday, San Francisco passed an ordinance banning the use of one very specific technology—facial recognition—by local police, making it the first major city in the country to do so.
The anti-surveillance ordinance cites concerns that the technology would deepen racial bias and expand the government’s ability to surveil citizens.
“The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits,” reads San Francisco’s “Stop Secret Surveillance” ordinance, which the Board of Supervisors passed on an 8-1 vote. “The technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.”
According to CNN, the facial recognition ban is part of a larger anti-surveillance ordinance the Board approved on Tuesday. All 53 of San Francisco’s city departments, including the police, would be forbidden from using facial recognition technology, and any use of surveillance technology by the city would require board approval.
“This is really about saying: ‘We can have security without being a security state. We can have good policing without being a police state,’” Supervisor Aaron Peskin, one of the authors of the new legislation, told the Associated Press. “And part of that is building trust with the community based on good community information, not on Big Brother technology.”
But there are carve-outs: federally controlled facilities like San Francisco International Airport and the Port of San Francisco can still use these surveillance technologies, as can businesses and residents. “It also doesn’t do anything to limit police from, say, using footage from a person’s Nest camera to assist in a criminal case,” writes CNN.
The legislation is groundbreaking given San Francisco’s size and influence, and the ordinance may serve as a model for other cities similarly concerned with protecting residents’ privacy in the face of ever-expanding surveillance tools. Neighboring Oakland and Santa Clara County—the heart of Silicon Valley—have passed surveillance technology laws of their own, and Oakland is also considering a ban on facial recognition.
But police use of facial recognition is especially problematic for black Americans, for whom the technology is significantly less accurate.
Just last year, the American Civil Liberties Union (ACLU) tested Amazon’s facial recognition software and found that it falsely matched 28 members of Congress—disproportionately members of color—with mugshots of people who had been arrested. (Amazon sells this technology to police departments across the country.)
It gets worse. From Vox:
Researchers at MIT found that, overall, the software returned worse results for women and darker-skinned individuals (in both cases, Amazon has disputed the findings). And in places like Maryland, police agencies have been accused of generally using facial recognition technology more heavily in black communities and to target activists — for example, police in Baltimore used it to identify and arrest protesters of Freddie Gray’s death at the hands of law enforcement.
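To make concrete what “worse results” means in an audit like this, here is a minimal sketch of how such tests are typically scored: a false match rate computed per demographic group. The records below are invented stand-ins, not data from the MIT or ACLU studies.

```python
from collections import defaultdict

# Invented evaluation records: each pairs photos of two *different*
# people, so any reported match is a false match.
# Format: (demographic_group, matcher_reported_match)
records = [
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True),  ("group_b", True),  ("group_b", False),
]

trials = defaultdict(int)
false_matches = defaultdict(int)
for group, matched in records:
    trials[group] += 1
    if matched:  # two different people reported as the same person
        false_matches[group] += 1

# A single headline accuracy number can hide very different error
# rates across groups -- the disparity the studies above describe.
for group in sorted(trials):
    print(f"{group}: false match rate = {false_matches[group] / trials[group]:.0%}")
```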
Police departments across the country use facial recognition software for forensic purposes: pulling pictures from driver’s licenses and mugshots and matching them against criminal databases. The San Francisco ordinance is more of a preemptive measure, however, since the San Francisco Police Department doesn’t currently use the technology. But as Vox notes, this kind of tech has been used elsewhere by police and private citizens to monitor crowds at protests, shopping malls, and concerts.
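For readers unfamiliar with how that matching works, the general technique is a one-to-many search over face “embeddings.” The sketch below is an illustration under stated assumptions—`embed` is a toy stand-in for a trained face-embedding model, and the 0.8 threshold is arbitrary—not a description of any deployed system.

```python
import numpy as np

def embed(photo: np.ndarray) -> np.ndarray:
    """Toy stand-in for a face-embedding model: real systems use a
    trained neural network to map a photo to a fixed-length vector."""
    vec = photo.astype(float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit-normalize

def search(probe: np.ndarray, gallery: list[np.ndarray], threshold: float = 0.8):
    """One-to-many search: compare a probe image (say, a surveillance
    still) against a gallery (say, mugshot or license photos).
    Assumes all photos share one shape. Returns (index, score) pairs."""
    p = embed(probe)
    scores = [float(p @ embed(g)) for g in gallery]
    # Everything above the threshold is reported as a possible match;
    # the lower the threshold, the more innocent people surface as hits.
    return [(i, s) for i, s in enumerate(scores) if s >= threshold]
```

The threshold is the design choice doing the civil-liberties work here: raise it and the system misses real matches; lower it and more people are wrongly flagged, with the burden falling hardest on the groups the model recognizes least accurately.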
While some data suggests black people are more likely to be mistakenly identified as criminals by this technology, the software’s accuracy isn’t civil liberties advocates’ only concern. Given how police surveillance has historically targeted—and still targets—black people at disproportionate rates, it’s nearly impossible to imagine a scenario in which black communities aren’t caught in a high-tech dragnet by local police departments.
Writer Zoé Samudzi expanded upon this in a piece published earlier this year in the Daily Beast, asking, “in a country where crime prevention already associates blackness with inherent criminality, why would we fight to make our faces more legible to a system designed to police us?”
“This is not simply a problem of ‘racist code’ that can be fixed with diversity initiatives to add more black coders and software engineers to correct technical deficiencies within an unchanged structure of technology,” Samudzi continued. “The problem is that black people are simply not human enough to be recognized by the racist technology created by racist people imbued with racist worldviews.”
It remains to be seen how many other cities and states will be willing to follow the Bay Area’s lead. According to CNN, there are presently no federal regulations dictating how AI should be used by the government.