Imagine a world where, as you drive into — or even walk through — New York City, your face is scanned and compared to a list of suspected terrorists or other serious criminals. Would this make you feel safe?
Now, imagine that the technology is error-prone, and may misidentify innocent people as suspects. What about now?
These are not rhetorical questions. New York is in the early stages of acquiring state-of-the-art face recognition technology to scan the faces of all drivers commuting between the boroughs of New York City.
The New York Police Department is already using face recognition to investigate crimes, and has done so since 2011. As of last year, its facial identification unit had conducted “more than 8,500 facial recognition investigations, with over 3,000 possible matches, and approximately 2,000 arrests,” according to a former NYPD official who helped establish the unit.
And though there are as yet no plans to put the technology aboard the body-worn cameras coming to all uniformed officers in the near future, don’t expect that to be very far behind. A 2016 study commissioned by the Department of Justice found 10 body-worn camera companies that advertise current or future face recognition integration with their systems.
Face recognition is a powerful tool. It allows police to identify people from a distance and in secret. The technology works by comparing a photo or video of an unknown face to a database of known faces, such as mugshots, driver’s license photos or a watchlist of wanted individuals, and providing a list of possible candidate matches. So long as it captures your face, a police surveillance video or a photo posted to a social media page can be used to identify you in a matter of seconds.
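To make the mechanism concrete, here is a minimal, hypothetical sketch of the matching step described above: the unknown face is reduced to a numeric feature vector (an "embedding"), compared against a gallery of enrolled faces, and the system returns a ranked list of candidates rather than a single definitive answer. All names and vectors below are invented for illustration; real systems use learned embeddings, not three-number toys.

```python
import math

def distance(a, b):
    # Euclidean distance between two embeddings (lower = more similar)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def candidate_matches(probe, gallery, top_k=3):
    # Rank every enrolled face by its distance to the probe embedding
    ranked = sorted(gallery.items(), key=lambda kv: distance(probe, kv[1]))
    return [name for name, _ in ranked[:top_k]]

# Toy gallery of enrolled faces (e.g., mugshots or license photos)
gallery = {
    "person_a": (0.1, 0.9, 0.3),
    "person_b": (0.8, 0.2, 0.5),
    "person_c": (0.4, 0.4, 0.9),
}

# The output is only a ranked candidate list; a human analyst still has
# to decide whether any candidate is actually a match
print(candidate_matches((0.79, 0.22, 0.48), gallery, top_k=2))
```

Note that nothing in this pipeline guarantees the true identity appears first, or at all, which is exactly why the error rates discussed below matter.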
Use of the technology will provide real public safety benefits. It may allow the police to catch terrorists or dangerous criminals. But without appropriate oversight, its use by police also poses real threats to the privacy, civil rights and civil liberties of millions of innocent New Yorkers.
And right now, no real oversight is in place.
For one thing, face recognition makes mistakes — far more than fingerprinting, iris scanning or other forms of biometric identification. A test of the FBI’s system when it was launched found that it included the correct suspect in a list of possible matches only 86% of the time. In 2015, a New York Supreme Court judge suggested that face recognition is “not sufficiently reliable” to be admissible in court.
Inaccuracies have real-world consequences. A spokesperson for the NYPD acknowledged in 2015 that the technology had “misidentified” five people. These are five people who were investigated, if not arrested and charged, because a computer algorithm flagged them as suspects.
What’s worse, it’s likely these errors will not affect everyone equally. Research suggests that face recognition more often misidentifies African Americans than people of other races — and African Americans are disproportionately likely to come into contact with law enforcement.
Despite these risks, the NYPD has implemented its face recognition program largely in secret.
The department has refused to provide information about its system in response to requests filed under New York’s Freedom of Information law. First, the agency argued that the records were exempt from disclosure. Then it claimed it was “unable to locate” the documents requested. Finally, it reverted to the claim that the documents are exempt from disclosure because they discuss trade secrets or confidential police procedures.
Face recognition is too powerful to be secret. It raises too many concerns to operate without transparency into how it is being deployed and what controls have been placed on its use. Communities have a right to know how they are being policed, particularly when the technology used is as invasive — and error-prone — as face recognition.
Yet New York, without proper oversight, is doubling down on the technology. Right now, the Metropolitan Transportation Authority is in the early stages of acquiring a face recognition system to scan the occupants of the 800,000 cars that pass each day through the nine bridges and tunnels connecting New York City’s boroughs.
For these commuters, the routine act of entering the city will soon come with a criminal biometric search.
Gov. Cuomo, who announced the MTA program last October, should not impose real-time face scanning on New Yorkers unless appropriate regulations are in place to safeguard privacy and civil liberties. New Yorkers also have a right to be informed about, and to weigh in on, how the technology will be used. After all, they’re the ones bearing the risks, and footing the bill.
http://www.nydailynews.com/opinion/smile-identified-face-recognition-article-1.3008512
Articles like this often seem to suggest that the inaccuracy of face recognition systems is a bad thing. I say it’s a silver lining. If these intrusive, tyrannical systems are going to be in use, then I want them to be as inaccurate and ineffective as possible.
Nevertheless, the usual advice applies. If your image is caught on a camera of sufficient resolution, you can possibly be identified, even WITHOUT the use of face recognition (e.g., the pigs could simply put your photo on the evening news or on the Web).
So wear a full-head mask if you really need to stay anonymous (e.g., at a political protest, if that’s your thing). Make sure the region around the eyes is especially well covered. The distance between the pupils of your eyes is one of the key measurements taken by these systems.
For routine travels where a mask wouldn’t be acceptable, dark or mirrored sunglasses and a brimmed hat could help, as some celebrities have long done when out in public and hoping not to be mobbed by fans seeking autographs. And as for me, I’ll never set foot in any place like NYC that’s being turned into a prison.