We’ve all read the news stories: study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for people of color, especially Black women, as well as trans and nonbinary people. Yet law enforcement widely uses this technology to identify suspects in criminal investigations. By refusing to disclose the specifics of that process, law enforcement has effectively prevented criminal defendants from challenging the reliability of the technology that ultimately led to their arrest.
This week, EFF, along with EPIC and NACDL, filed an amicus brief in State of New Jersey v. Francisco Arteaga, urging a New Jersey appellate court to allow robust discovery regarding law enforcement’s use of facial recognition technology. In this case, a facial recognition search conducted by the NYPD on behalf of New Jersey police was used to identify Francisco Arteaga as a “match” for the perpetrator of an armed robbery. Despite the centrality of the match to the case, nothing was disclosed to the defense about the algorithm that generated it, not even the name of the software used. Mr. Arteaga requested detailed information about the search process, with an expert testifying to the necessity of that material, but the court denied those requests.
Comprehensive discovery regarding law enforcement’s facial recognition searches is crucial because, far from being an infallible tool, the process entails numerous steps, each of which carries a substantial risk of error. These steps include selecting the “probe” photo of the person police are seeking, editing the probe photo, and choosing the photo databases against which the edited probe photo is compared.
[…]