Transparency Advocates Win Release of NYPD “Predictive Policing” Documents

The NYPD is building a system to predict crimes — and trying to keep the public unaware of what it’s doing.


Late last month, a Manhattan judge ordered the New York City Police Department to release documentation about the department’s use of secretive and highly controversial “predictive policing” surveillance technology, scoring a win for advocates of transparency on police policy. The documents came to light as part of a lawsuit against the city filed by the Brennan Center for Justice, a New York-based policy institute.

Little is known about how the largest domestic police force in the United States uses crime-forecasting software, which analyzes historical crime data such as arrest records, incident reports, gang documentation, and “stop and frisk” encounters to generate predictions about individuals or geographic areas. In July 2015, then-NYPD Commissioner Bill Bratton branded predictive policing as “the wave of the future” and entered into a trial program with at least one predictive policing company, the Philadelphia-based Azavea.
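For illustration only, here is a minimal sketch of the general “hotspot” approach that geographic crime-forecasting tools are often described as taking: bin historical incidents into map grid cells and rank the cells by how many past incidents fall inside them. The data, field names, and grid size below are hypothetical; this is not the NYPD’s or any vendor’s actual model.

```python
# Illustrative sketch only: a toy "hotspot" forecaster that ranks map grid
# cells by historical incident counts. Field names, coordinates, and the
# grid size are assumptions made up for this example.
from collections import Counter

def grid_cell(lat, lon, cell_size=0.01):
    """Snap a coordinate to a coarse grid cell (roughly 1 km at NYC's latitude)."""
    return (round(lat / cell_size), round(lon / cell_size))

def forecast_hotspots(incidents, top_n=5):
    """Rank grid cells by how many historical incidents fall inside them."""
    counts = Counter(grid_cell(i["lat"], i["lon"]) for i in incidents)
    return counts.most_common(top_n)

# Hypothetical historical incident records (latitude/longitude pairs).
history = [
    {"lat": 40.7128, "lon": -74.0060},
    {"lat": 40.7130, "lon": -74.0055},
    {"lat": 40.7580, "lon": -73.9855},
]

for cell, count in forecast_hotspots(history):
    print(f"cell {cell}: {count} past incidents")
```

Real systems layer far more on top of this, but the basic input is the same: the historical records a department chooses to feed in.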


“The ‘Minority Report’ of 2002 is the reality of today,” said Bratton. “There are no secrets. There are none. If two people share a piece of information, it is no longer secret.”

While Azavea’s pilot program with the NYPD was publicly announced, documents obtained by the Brennan Center through its lawsuit and shared with The Intercept show two more companies were brought in to try out their crime-forecasting approaches — the Bronxville, New York-based KeyStat Inc. and PredPol, a Santa Cruz, California-based company. PredPol has positioned itself as an early market leader in predictive policing with a highly publicized — but dubiously effective — geographic prediction model derived from battlefield research in Iraq. All three companies were granted 45-day trials to show the NYPD what their predictive policing software could do.

The original request for documentation was filed by the Brennan Center in June 2016 under New York’s Freedom of Information Law. After the NYPD rejected the initial request and subsequent appeal, the Brennan Center filed suit on August 30, 2017. The NYPD is notorious for its intransigence on open records requests from the press and the public, particularly concerning documentation about the department’s extensive use of surveillance technology. In recent years, lawsuits have been filed to disclose information about the department’s network of surveillance cameras, its use of X-ray scanners in public, and the deployment of facial recognition technology.

In a December 22, 2017, order, Judge Barbara Jaffe rejected the NYPD’s arguments that documentation related to the forecasting trials was exempt from disclosure under New York laws that allow the government to shield investigative techniques and trade secrets. “That the vendors entered into nondisclosure agreements with NYPD does not prove that disclosure would cause substantial injury to any of the vendors’ competitive position,” Jaffe wrote in her order.

Jaffe also rejected the NYPD’s argument that disclosure of training and output data from its predictive policing trials would endanger the department’s computer security. She mandated the production of notes on the predictive policing system maintained by Evan Levine, the NYPD’s assistant commissioner of data analytics.

“Absent expert advice that the disclosure of the output data and Levine’s notes would jeopardize the NYPD’s capacity to guarantee the security of its information assets, respondents fail to sustain their burden as to the applicability of this exemption to such data and notes,” the order reads.

The Brennan Center will also file a new freedom of information request to obtain the source data used by the NYPD to train the department’s current, in-house predictive policing algorithm. In other cities, researchers have used such information to re-engineer crime-forecasting programs, revealing that they disproportionately impact communities of color.
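As a rough illustration of how such an audit can work, the sketch below compares how often a hypothetical forecasting model flags neighborhoods for extra patrols against the demographics of those neighborhoods. All figures and field names are invented for the example and do not reflect any actual dataset.

```python
# Illustrative sketch only: one way researchers examine forecasting output
# for disparate impact, by comparing flag rates across neighborhoods grouped
# by demographic composition. All data here is hypothetical.

# Hypothetical per-neighborhood records: share of days the model flagged the
# area for extra patrols, and the share of residents who are people of color.
areas = [
    {"name": "A", "flag_rate": 0.62, "nonwhite_share": 0.81},
    {"name": "B", "flag_rate": 0.18, "nonwhite_share": 0.24},
    {"name": "C", "flag_rate": 0.55, "nonwhite_share": 0.73},
    {"name": "D", "flag_rate": 0.12, "nonwhite_share": 0.30},
]

majority_nonwhite = [a["flag_rate"] for a in areas if a["nonwhite_share"] > 0.5]
majority_white = [a["flag_rate"] for a in areas if a["nonwhite_share"] <= 0.5]

avg_nonwhite = sum(majority_nonwhite) / len(majority_nonwhite)
avg_white = sum(majority_white) / len(majority_white)

print(f"avg flag rate, majority-nonwhite areas: {avg_nonwhite:.2f}")
print(f"avg flag rate, majority-white areas:    {avg_white:.2f}")
print(f"disparity ratio: {avg_nonwhite / avg_white:.1f}x")
```

Analyses of this kind require the underlying training and output data, which is why the source data itself is the subject of the new request.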


Rachel Levinson-Waldman, a staff attorney at the Brennan Center who filed the original freedom of information request, said in an interview that the court-ordered release of the documents will be the public’s first chance to learn how the NYPD applies crime forecasting in day-to-day police work. Even though the NYPD ended up building its own in-house predictive policing algorithm, the correspondence and training data from its trials with KeyStat, PredPol, and Azavea will still offer insight into the department’s objectives.

“One reason it’s important to get those materials, even though NYPD built its own predictive policing program, is it tells us a lot in terms of what NYPD wanted to accomplish,” Levinson-Waldman said.

Through the litigation process, the Brennan Center has also learned that no privacy or use policy exists that governs the NYPD’s use of predictive policing. In response to the request for policies and oversight regulations, the NYPD released a 9-year-old privacy policy governing the Domain Awareness System, a massive, citywide network of CCTV cameras, gunshot and explosive detection sensors, and license-plate readers.

“The Domain Awareness System is a network of surveillance technology that covers all of New York City, especially Manhattan,” said Levinson-Waldman. “It has nothing to do with predictive policing.”

Moreover, even if the Domain Awareness System’s guidelines are the governing regulations for the NYPD’s predictive policing program, the department has never conducted an audit of the software, as required by the policy. “We didn’t receive any audits in response to our record request, which seems incredibly inappropriate since this program has been in place for several years,” said Levinson-Waldman.

The NYPD did not respond to requests for comment on Jaffe’s decision or the department’s oversight of its use of crime-forecasting software.

The disclosure of information about the NYPD’s crime-forecasting initiatives comes at a moment of increased scrutiny for the surveillance programs of the country’s largest police department. Late last year, the city council passed legislation to create a task force charged with reviewing municipal programs that use algorithms or machine learning for decision-making. A separate bill would establish a transparency and public comment process for the NYPD’s acquisition of surveillance technology.

Top photo: Police and private security personnel monitor security cameras at the Lower Manhattan Security Initiative on April 23, 2013, in New York City. Photo: John Moore/Getty Images
