facial recognition Archives | FedScoop
https://fedscoop.com/tag/facial-recognition/

Login.gov’s upcoming biometric pilot aims to focus on equity, usability
https://fedscoop.com/login-govs-upcoming-biometric-pilot-aims-to-focus-on-equity-usability/ | Mon, 20 May 2024
The General Services Administration is working with internal technology equity experts for the site’s facial recognition pilot.

Ahead of Login.gov’s biometric validation pilot this month, General Services Administration officials are working with internal tech equity experts as part of an effort to reduce algorithmic bias in light of concerns that advocacy groups have raised about the technology.

While facial recognition, a type of biometric validation, is commonly used by law enforcement agencies, GSA sees the Login.gov pilot as a way to further defend against sophisticated fraud and cyber threats. The work with tech equity experts will “incorporate learnings, as applicable” into the pilot, a GSA spokesperson said in an email to FedScoop, and comes after the agency conducted an equity study on remote identity proofing to “improve outreach practices, user testing and user experience for underserved communities in civic tech design.”

The goal of the upcoming pilot, which will run through the fall, is to evaluate overall user experience throughout the new workflow and to find where individuals become stuck or confused throughout the process so the “team can iteratively make improvements,” the agency spokesperson said.

“Login.gov is committed to leveraging best-in-class facial matching algorithms that, based on testing in controlled environments, have been shown to offer high levels of accuracy with reduced algorithmic bias,” they added.

As of April, the equity study on remote identity proofing included 4,000 participants, who were tasked with testing five different vendors of the technology. GSA plans to release the study’s results in a peer-reviewed publication this year.

GSA recently concluded a procurement process that expands the set of “identity vendors” that Login.gov has access to, the spokesperson said. The agency shared plans to evaluate how and when to integrate new solutions. 

“The general availability launch timing is not dependent on this integration process,” the spokesperson said. 

Candice Wright, director of the Government Accountability Office’s Science, Technology Assessment and Analytics team, said in an email to FedScoop that the GSA’s equity study on remote identity proofing can assist the agency in ensuring that the biometric validation technology is “more accurate for all demographic groups.”

“The accuracy of biometric identification technologies is improving overall, but there are still issues with technologies that can perform less accurately for certain subgroups, such as people with darker skin,” Wright said, pointing to a recent GAO report that found comprehensive evaluations of technology as a key consideration to assist in addressing differential performance.

The biometric validation tool, the GSA spokesperson said, uses a “privacy-preserving” approach that compares a selfie that a user takes against their photo identification. The spokesperson emphasized that the data provided by the user is “protected by ensuring it will never be used for any purpose unrelated to verifying your identity” by Login.gov or the vendors with whom it works. 
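
GSA hasn’t described the vendor’s algorithm, but a selfie-to-document check of this kind is typically one-to-one face verification: each image is reduced to an embedding vector, and the two vectors are compared against a tuned threshold. A minimal sketch in Python (the embedding model, vector size and 0.8 threshold are hypothetical stand-ins, not details from GSA):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(selfie_emb: np.ndarray, id_photo_emb: np.ndarray,
           threshold: float = 0.8) -> bool:
    """One-to-one check: does the selfie match the ID photo?

    Real systems tune the threshold against false-match and
    false-non-match rates of the kind NIST's evaluations measure.
    """
    return cosine_similarity(selfie_emb, id_photo_emb) >= threshold

# Stand-in embeddings; a real pipeline would produce these with a
# face-detection and embedding model, not random vectors.
rng = np.random.default_rng(0)
print(verify(rng.normal(size=128), rng.normal(size=128)))
```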

Login.gov’s biometric technology will be provided by a commercial vendor that, according to the spokesperson, employs an algorithm that is considered proprietary but is one of the leading options as measured by the National Institute of Standards and Technology’s Face Recognition Vendor Test (FRVT).

“Agencies could achieve more comprehensive testing by providing guidance to technology vendors so that they design their products in ways that support more standardized testing,” Wright said.

NIST’s test for vendors, which last year was split into the Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE), measures the performance of facial recognition tech as it is applied across a variety of applications, such as visa image verification, identification of child exploitation images and more. 

The GSA noted last month that the biometric validation technology is compliant with NIST’s digital identity guidelines for achieving “evidence-based remote identity verification” at the IAL2 level, or the standard that “introduces the need for either remote or physically-present identity proofing.”

How often do law enforcement agencies use high-risk AI? Presidential advisers want answers
https://fedscoop.com/summary-reports-facial-recognition-high-risk-ai/ | Tue, 30 Apr 2024
A national AI advisory group will recommend this week that law enforcement agencies be required to create and publish annual summary usage reports for facial recognition and other AI tools of that kind.

A visit with the Miami Police Department by a group of advisers to the president on artificial intelligence may ultimately inform how federal law enforcement agencies are required to report their use of facial recognition and other AI tools of that kind.

During a trip to South Florida earlier this year, Law Enforcement Subcommittee members on the National AI Advisory Committee asked MPD leaders how many times they used facial recognition software in a given year. The answer they got was “around 40.” 

“That just really changes the impression, right? It’s not like everyone’s being tracked everywhere,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “On the other hand, we can imagine that there could be a technology that seems relatively low-risk, but based on how often it’s used … the public understanding of it should change.” 

Based in part on that Miami fact-finding mission, Bambauer’s subcommittee on Thursday will recommend to the full NAIAC body that federal law enforcement agencies be required to create and publish yearly summary usage reports for safety- or rights-impacting AI. Those reports would be included in each agency’s AI use case inventory, in accordance with Office of Management and Budget guidance finalized in March.

Bambauer said NAIAC’s Law Enforcement Subcommittee, which also advises the White House’s National AI Initiative Office, came to the realization that simply listing certain types of AI tools in agency use case inventories “doesn’t tell us much about the scope” or the “quality of its use in a real-world way.” 

“If we knew an agency, for example, was using facial recognition, some observers would speculate that it’s a fundamental shift into a sort of surveillance state, where our movements will be tracked everywhere we go,” Bambauer said. “And others said, ‘Well, no, it’s not to be used that often, only when the circumstances are consistent … with the use limitations.’” 

The draft recommendation calls on federal law enforcement agencies to include in their annual usage reports a description of the technology and the number of times it has been used that year, as well as the purpose of the tool and how many people used it. The report would also include total annual costs for the tool and detail when it was used on behalf of other agencies. 

The subcommittee had previously tabled discussions of the public summary reporting requirement for the use of high-risk AI, but after some refinement brought it back into conversation during an April 5 public meeting of the group. 

Anthony Bak, head of AI at Palantir, said during that meeting that the goal of the recommendation was to make “the production” of those summary statistics a “very low lift for the agencies that are using AI.” Internal IT systems that track AI use cases within law enforcement agencies “should be able to produce these statistics very easily,” he added.

Beyond the recommendation’s topline takeaway on reporting the frequency of AI usage, Bak said the proposed rule would also provide law enforcement agencies with a “gut check for AI use case policy adherence.”

If an agency says they’re using an AI tool “only for certain kinds of crimes” and then they’re reporting for a “much broader category of crimes, you can check that very quickly and easily with these kinds of summary statistics,” Bak said.

Benji Hutchinson, a subcommittee member and chief revenue officer of Rank One Computing, said that from a commercial and technical perspective, it wouldn’t be an “overly complex task” to produce these summary reports. The challenges would come in coordination and standardization.

“Being able to make sure that we have a standard approach to how the systems are built and implemented is always the tough thing,” Hutchinson said. “Because there’s just so many layers to state and local and federal government and how they share their data. And there’s all sorts of different MOUs in place and challenges associated with that.”

The subcommittee seemingly aimed to address the standardization issue by noting in its draft that summary statistics “should include counts by type of case or investigation” according to definitions spelled out in the Uniform Crime Reporting Program’s National Incident-Based Reporting System. Data submitted to NIBRS — which includes victim details, known offenders, relationships between offenders and victims, arrestees, and property involved in crimes — would be paired with information on the source of the image and the person conducting the search.
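
That pairing suggests why subcommittee members expect the reports to be a low lift: if each search is logged with a few structured fields, the summary is a simple aggregation. A hedged sketch of what that could look like, with every field name and category invented for illustration rather than taken from the draft:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchEvent:
    """One logged use of an AI tool (all fields hypothetical)."""
    offense_type: str            # NIBRS-style offense category
    image_source: str            # e.g., "CCTV still", "social media"
    operator_id: str
    on_behalf_of: Optional[str]  # requesting agency, if any

def annual_summary(events: list[SearchEvent]) -> dict:
    """Roll raw usage logs up into the kind of counts the draft
    recommendation describes for a yearly public report."""
    return {
        "total_uses": len(events),
        "uses_by_offense_type": dict(Counter(e.offense_type for e in events)),
        "distinct_operators": len({e.operator_id for e in events}),
        "uses_for_other_agencies": sum(e.on_behalf_of is not None for e in events),
    }
```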

The Law Enforcement Subcommittee plans to deliver two other recommendations to NAIAC members Thursday: The first is the promotion of a checklist for law enforcement agencies “to test the performance of an AI tool before it is fully adopted and integrated into normal use,” per a draft document, and the second encourages the federal government to invest in the development of statewide repositories of body-worn camera footage that can be accessed and analyzed by academic researchers.

Those recommendations serve as a continuation of the work that the Law Enforcement Subcommittee has prioritized this year. During February’s NAIAC meeting, Bambauer delivered recommendations to amend Federal CIO Council guidance on sensitive use case and common commercial product exclusions from agency inventories. Annual summary usage reports for safety- and rights-impacting AI align with an overarching goal to create more comprehensive use case inventories. 

“We want to sort of prompt a public accounting of whether the use actually seems to be in line with expectations,” Bambauer said.

Peaceful protests, lawful assembly can’t be sole reason for DOJ facial recognition use under interim policy
https://fedscoop.com/doj-shares-interim-facial-recognition-policy-details/ | Wed, 27 Mar 2024
The Justice Department shared details of its interim facial recognition technology policy in testimony to the U.S. Commission on Civil Rights, which is looking into federal use of that capability.

Activities protected under the First Amendment, such as peaceful protests and lawful assembly, “may not be the sole basis for the use of” facial recognition technology under the Justice Department’s interim policy governing its deployment of the technology, the agency told a civil rights panel. 

In written testimony submitted to the U.S. Commission on Civil Rights last week, the DOJ shared details of its approach to using facial recognition technology, or FRT, including its interim policy, which it issued in December but hasn’t shared publicly. The testimony came a couple of weeks after the civil rights panel held a briefing on federal use of facial recognition technology at which the DOJ neither testified in person nor submitted advance testimony.

“Notably, the Interim FRT Policy mandates that activity protected by the First Amendment may not be the sole basis for the use of FRT,” the DOJ said in its testimony. “This would include peaceful protests and lawful assemblies, or the lawful exercise of other rights secured by the Constitution and laws of the United States.”

Additionally, the interim policy states that “FRT results alone may not be relied upon as the sole proof of identity,” the DOJ said. It also requires that facial recognition technology comply with the department’s AI policies and that employees never use the technology to “engage in or facilitate unlawful discriminatory conduct,” in addition to requiring risk assessments of the accuracy of facial recognition systems used by the department.

The interim policy could also lead to public disclosures of certain information about use of the technology at the department. Components using facial recognition systems are required to “develop a process to account for and track system use” under the interim policy and report on that use annually to the DOJ’s Emerging Technology Board, which was established to oversee the department’s use of AI and emerging technology, and its Data Governance Board. 

“Without compromising law-enforcement sensitive or national security information, each of these annual reports will be consolidated into a publicly released summary on the Department’s FRT use,” the testimony said.

The commission’s March 8 briefing explored federal use of facial recognition technology at DOJ, the Department of Homeland Security, and the Department of Housing and Urban Development as it prepares a report. Adoption of the technology in the federal government has prompted concerns about privacy and civil liberties, including from lawmakers and academics.

Neither the DOJ nor HUD participated in the hearing, and DOJ’s lack of participation, in particular, prompted two commissioners to indicate they were willing to use subpoena power to compel the production of information. At the time of the briefing, a DOJ spokesperson told FedScoop it was communicating with the commission about a response.

A Government Accountability Office review of facial recognition systems in the government found that agencies, including the DOJ, didn’t have policies specific to the use of the technology and initially didn’t require training. That report found that the DOJ had “taken steps to issue a department-wide policy” but “faced delays.” The GAO ultimately recommended, among other things, that the attorney general develop a plan for issuing a policy that addresses civil rights and civil liberties.

In testimony to the commission at its briefing, GAO’s Gretta Goodwin said the department informed the government watchdog that it had issued an interim policy but the GAO hadn’t yet seen that policy. Goodwin, who directs the watchdog’s Homeland Security and Justice team, said the GAO plans to review the interim policy as part of its follow-up process on the recommendation.

The description of the interim policy in the department’s testimony appears to address some of GAO’s findings. For example, the DOJ said that the policy mandates that employees using those systems receive training that includes information about privacy, civil rights and civil liberties laws relevant to the use of facial recognition technology. 

While the department acknowledged potential equity and fairness implications of the technology, it also underscored the potential benefits. According to the testimony, facial recognition technology was used by the FBI over the last year to combat crime, find missing children, and address threats on the border. The U.S. Marshals Service also uses the technology for investigations and protective security missions, DOJ said. 

“When employed correctly, FRT affirmatively strengthens our public safety system,” the DOJ said. 

The interim policy was created by a working group within the department that met throughout 2022 and 2023. That group included legal experts and subject matter experts throughout the DOJ. The interim policy will be updated after the department completes an interagency report on best practices required under President Joe Biden’s executive order on policing, the DOJ said.

Civil rights commissioner slams DOJ, HUD absence at facial recognition briefing
https://fedscoop.com/commissioner-slams-doj-hud-facial-recognition-briefing-absence/ | Fri, 08 Mar 2024
Mondaire Jones, a Democratic appointee on the U.S. Commission on Civil Rights, said he would have urged the use of subpoenas if the panel had been given “adequate notice” of the agencies’ refusal to cooperate.

The Department of Justice and the Department of Housing and Urban Development allegedly declined to testify at a U.S. Commission on Civil Rights briefing on government use of facial recognition technology Friday, drawing the ire of panel members.

Mondaire Jones, a commissioner on the panel and former Democratic member of the U.S. House of Representatives, skewered both departments for declining the commission’s invitation to appear and not providing written testimony at the Friday briefing. Jones called their absence “offensive” and alleged the departments are “embarrassed by their failures and are seeking to avoid public accountability.”

“I have not seen anything like this from this administration,” Jones said. “And had the commission been given adequate notice of the failure of these departments to cooperate, I would have urged this commission to exercise its statutory authority to issue subpoenas, which is something that we have rarely had to do in the course of this commission’s existence.”

Commissioner J. Christian Adams, a Republican appointee, said that he shared Jones’ concern about the DOJ’s absence and would support efforts to obtain information from them “even if it extends to exercising subpoena power.” 

In an emailed statement, a HUD spokesperson said the department “does not use any facial recognition technology and urges its program participants to find the right balance between addressing security concerns and respecting residents’ right to privacy.”

They added: “HUD is cooperating with the U.S. Commission on Civil Rights and provided answers to the Commission’s extensive interrogatories and document request in advance of the Commission’s briefing on facial recognition technology. HUD plans to submit written testimony and welcomes future opportunities to collaborate where appropriate.”

In an emailed statement, a spokesman for the DOJ told FedScoop the department is “in communication with the Commission about the Department’s response.” 

Lack of testimony from the departments was particularly notable as the hearing set out to specifically focus on the civil rights implications of facial recognition technology use by the DOJ, HUD and the Department of Homeland Security. 

Unlike the other departments, however, DHS did provide in-person testimony, which Jones praised as the department taking “its statutory obligations and the work of this commission seriously.”

Following Jones’ remarks, Adams underscored the impact of the DOJ’s absence, noting that DOJ’s Office of Legal Counsel and Civil Rights Division would be the “primary drivers of any federal policy related to facial recognition technology,” and DHS’s civil rights office is “effectively subservient” to whatever those offices say about this policy, he said.

“Not having them here takes away the central organizing component of the federal government to answer these questions, so I support you and your concern and whatever steps you think are appropriate going forward,” said Adams, who is the president and general counsel of the Public Interest Legal Foundation and a former DOJ Voting Section attorney.

The Friday hearing comes as use of facial recognition technology in the federal government has prompted concerns about privacy and civil liberties, including from lawmakers and academics. A Government Accountability Office report last year found that seven agencies using the technology had initially done so without requiring staff training, and that some agencies didn’t have policies addressing civil rights and civil liberties protections.

Just last month, the National AI Advisory Committee’s Law Enforcement Subcommittee sought to improve agency disclosures of such technologies in their AI use case inventories by approving proposed edits to clarify exclusions.

In opening remarks, Chair Rochelle Garza, a Democratic appointee, noted both the potential benefits of the technology and threats it poses to fundamental rights. She said the briefing marked the commission’s “first step towards investigating the breadth of the challenges that FRT may pose.”

The meeting featured testimony from representatives of the GAO, the White House, Clearview AI, subject matter experts, and federal and state law enforcement, and covered topics such as the capabilities and harms of facial recognition technology and guidance for federal oversight. In addition to the meeting, the commission is accepting public comments until April 8 as it prepares its report.

CBP leaning into biometrics on controversial app, raising concerns from immigrant rights advocates
https://fedscoop.com/cbp-one-app-biometrics-immigrants-rights/ | Thu, 07 Mar 2024
As Customs and Border Protection looks to expand the use of biometrics in its CBP One app, two different internal components of the Department of Homeland Security are investigating the platform.

U.S. Customs and Border Protection plans to expand the use of biometrics through its CBP One app, a platform the agency created to help process people who intend to come to the country and one that has raised concerns from immigrant rights groups. The expansion of biometrics, and in particular personal data about people’s faces, comes amid ongoing issues with the app’s technical capabilities.

The disclosure, published to the Federal Register last month, states that CBP is introducing a new biometric capability into the app that’s meant to accelerate the Department of Homeland Security’s effort to collect biometric information from nonimmigrants leaving the country, requiring a “selfie” photo with geolocation tracking to confirm that they’ve actually departed. The update is also intended to decrease travel document fraud and improve the agency’s “ability to identify criminals and known or suspected terrorists.”

The app’s update would allow nonimmigrants who are departing the U.S. to “voluntarily provide biographical data, facial images and geolocation,” a step that aligns with a Department of Homeland Security mandate to collect biometric information and supports CBP’s plan to fully automate I-94 collection. Nonimmigrant visitors to the U.S. are subject to Form I-94, which CBP issues at the time they “lawfully enter” the country and which serves as an arrival and departure record.

“Having proof of an exit via the CBP One app would provide nonimmigrants some information for CBP officers to consider in the event the officer is unsure whether a nonimmigrant complied with the I-94 requirements provided upon their previous entry,” the notice reads. 

Additionally, CBP stated its intention to update the Electronic System for Travel Authorization website to require applicants to provide a selfie along with the already required passport biographical page photo. 

The new disclosure also mentions a “liveness detection” feature that’s meant to confirm that the photo is recent and not an older photo. That information, according to the Federal Register posting, is supposed to be filed in the Arrival and Departure Information System, a travel history database that CBP is pushing to expand, according to a privacy impact assessment for the platform. 

The CBP One expansion comes after the agency in September announced plans to use the technology before someone arrives in the United States. That information, according to the disclosure, is supposed to be shared with U.S. Citizenship and Immigration Services and air carriers working with CBP’s document validation initiative. In this case, photos sent to the app are, for instance, scanned with a facial recognition algorithm and uploaded to a Traveler Verification System gallery and the Automated Targeting System, which is used to compare traveler information to other law enforcement data, according to a privacy impact assessment published at the beginning of last year.

“Noncitizens are able to use the CBP One mobile application to schedule an appointment at one of seven Southwest Border [ports of entry] and present themselves for inspection to a CBP officer,” Benjamine “Carry” Huffman, then-acting deputy commissioner at CBP, said during a border-focused House hearing last year. “The ability to use the app cuts out the smugglers, decreases migrant exploitation, and makes processing more efficient upon arrival at the [ports of entry].” 

After publication of this story, and more than two weeks after FedScoop reached out for comment, a CBP spokesperson said in an email that the app digitally serves travelers who need to interact with the agency, and permits them to do things such as provide advance notice of the import of biological materials, apply and pay for an I-94 document, and schedule perishable cargo exams.

Additionally, migrants located in Central or North Mexico who do not have sufficient admission documentation can make an appointment and remain in place until presenting for their appointment. This cuts down on migrant crowding in immediate border areas, according to the spokesperson. 

The app’s appointment scheduling functionality, according to the spokesperson, has increased CBP’s capacity to process migrants and cut down on bad actors who could endanger and take advantage of vulnerable migrants.

The purpose of this system, according to the agency’s September posting, is to confirm the identity of someone entering the U.S. and to run that information against potential criminal databases. That disclosure also noted new CBP One applications for certain nationals of countries like Haiti and Colombia, as well as a new program for Ukrainians.

Yet the CBP disclosure from late February is new in that it applies biometrics to those exiting the United States, rather than those entering. Though no time frame is specified, the posting puts the number of respondents expected to use CBP One in the hundreds of thousands. This expansion only adds to concerns held by immigrant rights groups.

“We are concerned about the ever-expanding surveillance capabilities and requirements that CBP is adding to CBP One. With little notice or oversight, CBP has expanded biometric and geolocation surveillance to individuals not even in the U.S.,” Julie Mao, the co-founder and deputy director of Just Futures Law, a legal organization that focuses on immigrant rights, said in an email to FedScoop. 

She continued: “What business and for that matter legal authority does CBP have to conduct such biometrics and geolocation capture outside the U.S.? This is part of DHS’s disturbing and unchecked externalization of U.S. immigration policy, and therefore surveillance, to other countries.” 

The app has previously come under fire for its technical capabilities. Amnesty International said that mandatory use of CBP One violates peoples’ right to pursue asylum, pointing to risks related to privacy, discrimination, and surveillance. The National Immigration Project and the immigrant rights group Together & Free have also documented the difficulties faced by those who are unable to book an appointment through the app. 

Reviews on two of the most prominent app stores, the Apple App Store and the Google Play Store, also show people pointing to technical issues with the CBP One app. For instance, one person complained in January that the app only works on modern Android phones, making it difficult for people in Cuba to use. Another review, posted in December, noted that the app’s software failed to recognize the reviewer’s face; the CBP spokesperson told FedScoop that this is related to issues in the USCIS system that someone might need to correct before they’re able to move forward in the app. Other complaints note that the app freezes or posts error messages.

There are other concerns about the app’s technical abilities, particularly surrounding AI. A letter from Human Rights Watch to several entities within DHS warned that the app’s liveness detection “does not always work for asylum seekers with darker skin tones.”  

“CBP’s use of photo recognition to access these features is of concern, particularly in light of the issues we saw when CBP One first launched,” said Raul Pinto, deputy legal director for transparency at the American Immigration Council, which does extensive work looking at the CBP One app. “At that point, there were numerous reports that racial minorities had trouble accessing certain functions of the app.”

In response to FedScoop’s questions, an agency spokesperson said on March 12 that the app no longer produces time-out errors and that bandwidth issues with a third-party provider were resolved within weeks. The agency also confirmed that devices using CBP One app appointments need to meet certain RAM and operating system requirements for Android and iOS.

Additionally, CBP said that biometric traveler verification service matching has a match rate of 99.4% on entry and 98.1% on exit, and that between 2017 and 2022, people using the system from African countries had a 99.5% match rate, while people coming from Central American countries had a 99.6% match rate.

Notably, the current status of the liveness detection feature appears to be inaccurately described in DHS’s AI inventory, which the agency has updated several times within the past year. In that inventory, CBP says the tool uses a selfie and artificial intelligence algorithms, specifically machine vision, to confirm that it is a live picture and “not a photo, mask, or other spoofing mechanism.” The tool is still listed in the development and acquisition phase, even though DHS already appears to be using it. CBP did not respond to a question about the discrepancy.

The app is facing several investigations within DHS, including from its Office of Inspector General, which is currently looking into the use of CBP One on the Southwestern border. When asked about the status of that investigation, the OIG office said that it was ongoing and that, upon completion, results would be published to its website.

Meanwhile, the DHS’s Office for Civil Rights and Civil Liberties is also conducting an investigation into difficulties faced by migrants using the app, though it declined to comment. And the Boston-based Lawyers for Civil Rights, which had sent concerns to the DHS civil rights office, said they were aware of changes to the app but had otherwise not heard an update from DHS.

The new updates to the app come amid ongoing discussions on the Hill about the Biden administration’s broader approach to immigration policy. Some Democrats have argued for increased funding for the app to boost its technology, while others have criticized it, including Sen. Cory Booker, D-N.J., who called the way it’s been deployed “inherently discriminatory.”

Congressional Republicans have also criticized CBP One — but for different reasons. Rep. Mark Green, R-Tenn., chair of the House Homeland Security Committee, said in a release last October that DHS Secretary Alejandro Mayorkas “has utterly abused the CBP One app in his quest for open borders.”  

Editor’s note: This piece was updated on March 14 to include comments and background from CBP, which responded after publication. 

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions
https://fedscoop.com/ai-advisory-law-enforcement-use-case-recommendations/ | Wed, 28 Feb 2024
The National AI Advisory Committee’s Law Enforcement Subcommittee voted unanimously to edit CIO Council recommendations on sensitive use case and common commercial product exclusions, moves intended to broaden law enforcement agency inventories.

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omissions of those technologies in the Department of Justice’s AI use case inventory late last year were a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a Law Enforcement Subcommittee member on the National AI Advisory Committee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” resulted in technologies such as automated license plate readers and voice spoofing often being left on the cutting-room floor.

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”

Justice Department discloses FBI project with Amazon Rekognition tool
https://fedscoop.com/doj-fbi-amazon-rekognition-technology-ai-use-case/ | Thu, 25 Jan 2024
The disclosure comes after Amazon said in 2020 that it would institute a moratorium on police use of Rekognition.

The Department of Justice has disclosed that the FBI is in the “initiation” phase of using Amazon Rekognition, image and video analysis software that has sparked controversy for its facial recognition capabilities, according to an update to the agency’s AI inventory.

In response to questions from FedScoop, neither Amazon nor the DOJ clarified whether the FBI had access to or is using facial recognition technology, specifically, through this work. But the disclosure is notable, given that Amazon had previously announced a moratorium on police use of Rekognition.

An AI inventory released on the DOJ website discloses that the FBI has a project named “Amazon Rekognition – AWS – Project Tyr.” The description does not mention the term “facial recognition” but states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.” 

“Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from lawfully acquired images and videos,” states a summary of the use case that echoes the Amazon website’s description of the product. In regard to developer information, the disclosure says the system was commercial and off-the-shelf, and that it was purchased pre-built from a third party.
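
The inventory doesn’t name the specific Rekognition APIs behind Project Tyr, but the service’s off-the-shelf content-moderation endpoint gives a sense of how “review and identify items containing nudity, weapons” could work in practice. A sketch using boto3, with the bucket and object names invented for illustration:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# DetectModerationLabels is Rekognition's pre-trained API for flagging
# content such as nudity or weapons; the S3 location is hypothetical.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "frame-001.jpg"}},
    MinConfidence=75,  # return only labels at or above this confidence
)

for label in response["ModerationLabels"]:
    print(label["Name"], label.get("ParentName"), round(label["Confidence"], 1))
```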

Other aspects of the project have not yet been finalized, according to the inventory. The disclosure says that in collaboration with Amazon Web Services, the agency will determine where the training data originates, whether the source code is made publicly available, and what specific AI techniques were used. The DOJ states that the agency is not able to conduct ongoing testing of the code but can perform audits. The justice agency also claims the use case is consistent with Executive Order 13960, a Trump-era order on artificial intelligence. 

“To ensure the Department remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies, the Deputy Attorney General recently established the Emerging Technologies Board to coordinate and govern AI and other emerging technology issues across the Department,” DOJ spokesperson Wyn Hornbuckle said in response to a series of questions from FedScoop about the use case.

He added: “The board will advance the use of AI and other emerging technologies in a manner that is lawful and respectful of our nation’s values, performance-driven, reliable and effective, safe and resilient, and that will promote information sharing and best practices, monitor taskings and progress on the department’s AI strategy, support interagency coordination, and provide regular updates to leadership.” 

The DOJ did not address several aspects of the work with Amazon, including questions about whether the FBI had put any limits on the use of its technology, the purpose of nudity detection, or the extent to which the law enforcement agency could access facial recognition through the work discussed in the disclosure. Through the DOJ, the FBI declined to comment. 

Amazon was given 24 hours to comment on a series of questions sent from FedScoop but did not respond by the time of publication. A day later, Amazon spokesperson Duncan Neasham emailed FedScoop the following statement:

“We imposed a moratorium on police departments’ use of Amazon Rekognition’s face comparison feature in connection with criminal investigations in June 2020, and to suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.”

The tool was not disclosed in an earlier version of the DOJ’s AI inventory. While it’s not clear when the inventory was updated, a consolidated list of federal AI uses posted to AI.gov in September didn’t include the disclosure. The source date of the DOJ page appears to be incorrect and tags the page to October 2013, though the executive order requiring inventories wasn’t signed by President Donald Trump until late 2020. 

A page on Amazon’s website featuring the Rekognition technology highlights the tool’s applications in “face liveness,” “face compare and search,” and “face detection and analysis,” as well as applications such as “content moderation,” “custom labels,” and “celebrity recognition.” 

Beyond the application examples listed in the inventory, the DOJ did not explain the extent to which the FBI could or would use facial recognition as part of this work. Amazon previously told other media outlets that its moratorium on providing facial recognition to police had been extended indefinitely, though it’s not clear how Amazon interprets that moratorium for federal law enforcement. Notably, Amazon’s website has guidance for public safety uses.

But others have raised concerns about the technology. In 2019, a group of researchers called on Amazon to stop selling Rekognition to law enforcement following the release of a study by AI experts Inioluwa Deborah Raji and Joy Buolamwini that found that an August 2018 version of the technology had “much higher error rates while classifying the gender of darker skinned women than lighter skinned men,” according to the letter. 

Amazon had previously pushed back on those findings and has defended its technology. The National Institute of Standards and Technology confirmed that Amazon has not voluntarily submitted its algorithms for study by the agency. 

“Often times companies like Amazon provide AI services that analyze faces in a number of ways offering features like labeling the gender or providing identification services,” Buolamwini wrote in an early 2019 blog post. “All of these systems regardless of what you call them need to be continuously checked for harmful bias.”

The company has argued in a corporate blog defending its technology that the “mere existence of false positives doesn’t mean facial recognition is flawed. Rather, it emphasizes the need to follow best practices, such as setting a reasonable similarity threshold that correlates with the given use case.” 
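
In Rekognition’s face-comparison API, that similarity threshold is an explicit parameter: candidate matches scoring below it are simply not returned. A brief sketch, again with hypothetical image locations:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# CompareFaces returns only matches at or above SimilarityThreshold,
# the "reasonable similarity threshold" Amazon's blog post refers to.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "scene.jpg"}},
    SimilarityThreshold=90,  # stricter values trade recall for fewer false positives
)

for match in response["FaceMatches"]:
    print(round(match["Similarity"], 1))
```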

The DOJ’s disclosure is also notable because, in the wake of George Floyd’s murder in 2020 — and following an extensive and pre-existing movement against the technology — Amazon said it would implement a one-year pause on providing Rekognition to police. In 2021, the company extended that moratorium indefinitely, according to multiple reports. Originally, Amazon said the moratorium was meant to give Congress time to pass regulation of the technology. 

“It would be a potential civil rights nightmare if the Department of Justice was indeed using Amazon’s facial recognition technology ‘Rekognition,’” Matt Cagle, a senior staff attorney at the American Civil Liberties Union of Northern California, said in a written statement to FedScoop, pointing to the racial bias issues with facial recognition. “After immense public pressure, Amazon committed to not providing a face recognition product to law enforcement, and so any provision of Rekognition to DOJ would raise serious questions about whether Amazon has broken that promise and engaged in deception.”

A 2018 test of Rekognition’s facial recognition capabilities by the ACLU incorrectly matched 28 members of Congress with mugshots. Those members were “disproportionately people of color,” according to the ACLU. 

The DOJ inventory update noting the use of the Amazon tool was “informative, but in some ways surprising,” said Caitlin Seeley George, the director of campaigns and operations at the digital rights group Fight for the Future, because “we haven’t seen specific examples of FBI using Amazon Rekognition in recent years and because Amazon has said and has continued to say that they will not sell their facial recognition technology to law enforcement.” 

“This is the problem with trusting a company like Amazon — or honestly any company — that’s selling this technology,” she added. “Not only could they change their mind at any point, but they can decide the barriers of what their word means and if and how they’re willing to make adjustments to what they have said that they would or wouldn’t do with their product and who they will or won’t sell it to.”

Ben Winters, senior counsel for the Electronic Privacy Information Center, said that “it feels like a weird time to be adopting this big, sensitive type system,” noting that once the technology is there, it’s “more entrenched.” He pointed to the recent executive order on AI and draft guidance for rights-impacting AI that’s due to be finalized by the Office of Management and Budget. 

A NextGov story from 2019 reported that the FBI was piloting Rekognition facial matching software for the purpose of mining through video surveillance footage. According to that story, the pilot started in 2018, though the DOJ did not address a FedScoop question about what happened to it or if it’s the same project discussed in the updated AI inventory.

A record available on the FBI’s Vault, the agency’s electronic Freedom of Information Act library, appears to show that the agency took issue with some reporting on that pilot at the time, but much of the document is redacted. 

Proactive approach from White House, NIST needed for facial recognition technology, report says
https://fedscoop.com/facial-recognition-technology-report-dhs-fbi-nist-white-house/ | Wed, 17 Jan 2024
A National Academies of Sciences, Engineering, and Medicine report sponsored by the FBI and DHS recommends executive action regarding facial recognition technology and making NIST the “logical home” for regulatory activities and standards.

Federal laws and regulations haven’t kept pace with advancements in facial recognition technology, a fact that merits executive action and more responsibility for agencies including the National Institute of Standards and Technology, a new report sponsored by the Department of Homeland Security and Federal Bureau of Investigation recommends.

The National Academies of Sciences, Engineering, and Medicine report, released Wednesday, found that the nation is lacking in “authoritative guidance, regulations, or laws to adequately address issues related to facial recognition,” a technology that has only grown in use in recent years with the rapid adoption of artificial intelligence models that fuel the tech. 

The report’s authors said it’s incumbent upon the U.S. government and lawmakers to be more proactive on legal and regulatory questions. 

“It is crucial that governments make tackling these issues a priority,” Jennifer Mnookin, University of Wisconsin-Madison chancellor and co-chair of the committee that wrote the report, said in a statement. “Failing or choosing not to adopt policies and regulations on the development and use of facial recognition technology would effectively cede decisionmaking and rulemaking on these important questions of great public concern entirely to the private sector and the marketplace.”

The report’s authors — who conducted the study independently of their DHS and FBI sponsors but were guided by questions from the agencies and NASEM staff and board members — noted the race and equity shortcomings inherent in facial recognition technologies, which are disproportionately reliant on data from white people. 

With that in mind, the authors urge the president to issue an executive order that develops guidelines for federal agencies on “the appropriate use of facial recognition technology” that takes into account “both equity concerns and the protection of privacy and civil liberties.” 

Meanwhile, the report said that Congress should consider a handful of legislative efforts on facial recognition, including storage limits on facial images and templates; mandated training and certification for system operators and decision-makers, such as those working in law enforcement; passage of a federal privacy law surrounding facial recognition technology or the adoption of federal privacy legislation targeting commercial practices that undermine privacy; and tackling specific concerns regarding the technology, such as surveillance and the potential for harassment and blackmail.

“The number of uses will continue to expand as the technology becomes more widespread and inexpensive,” Edward Felten, a committee co-chair and founding director of the Center for Information Technology Policy at Princeton University, said in a statement. “For example, it is likely only a matter of time before stores routinely scan customers’ faces upon entry to personalize shopping experiences and marketing, and perhaps more troubling, private individuals could potentially use it to target others.”

From a federal government perspective, the report’s authors recommended that NIST take on a greater role, calling on the agency to “sustain a vigorous program of facial recognition technology testing and evaluation to drive continued improvements in accuracy and reduction in demographic biases.” NIST’s Face Recognition Technology Evaluation verification process was cited as “a valuable tool,” making the agency the “logical home” for facial recognition regulatory activities within the government.

The authors also recommended that the federal government develop a risk management framework for organizations that takes into account the “performance, equity, privacy, civil liberties, and effective governance” implications of facial recognition technology. NIST’s Cybersecurity Framework and AI Risk Management Framework were singled out as positive examples of this approach, making the agency a natural fit for developing something similar for facial recognition technology.

DHS and the Department of Justice, meanwhile, are charged by the report’s authors with developing “a multi-disciplinary and multi-stakeholder working group on facial recognition technology to develop and periodically review standards for reasonable and equitable use, as well as other needed guidelines and requirements for the responsible use” of the technology by federal, state and local law enforcement authorities.

“As governments and other institutions take affirmative steps through both law and policy to ensure the responsible use of [facial recognition technology], they will need to take into account the views of government oversight bodies, civil society organizations, and affected communities to develop appropriate safeguards,” the report stated.

Bipartisan Senate bill to ban TSA use of facial recognition technology gains support of civil rights groups
https://fedscoop.com/bipartisan-senate-bill-to-ban-tsa-use-of-facial-recognition-technology-gains-support-of-civil-rights-groups/ | Sat, 02 Dec 2023
The bill aims to tackle TSA’s proposed plan to implement facial recognition scans at over 430 U.S. airports within the next several years.

A bipartisan group of senators introduced legislation this week that would ban the Transportation Security Administration from using facial recognition technology and collecting facial biometric data in U.S. airports.

The Traveler Privacy Protection Act aims to tackle TSA’s proposed plan to implement facial recognition scans at over 430 U.S. airports within the next several years. The bill was sponsored by Sens. Jeff Merkley, D-Ore., John Kennedy, R-La., Edward Markey, D-Mass., Roger Marshall, R-Kan., Bernie Sanders, I-Vt., and Elizabeth Warren, D-Mass.

“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” said Kennedy in a statement. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”

Civil and digital rights groups, including the ACLU and the Electronic Privacy Information Center, have come out strongly in favor of the legislation, which they say would address facial recognition technology’s infringement on people’s privacy and its discriminatory impact on people of color and women in particular.

“This bill will most help marginalized communities like Muslim Americans, Black, Indigenous, People of Color and others systematically targeted by law enforcement and TSA,” said Albert Cahn, the executive director and founder of the Surveillance Technology Oversight Project (S.T.O.P.).

“No one should have this invasive and harmful tech used against them when the mistakes of this tech are so great. We’ve seen so many people wrongly convicted of crimes they didn’t commit, and TSA’s mass adoption of facial recognition could allow arrests based on faulty algorithmic analysis to go through the roof,” Cahn told FedScoop during an interview.

In particular, Cahn said that TSA “has a really dubious track record with tech procurement,” because it spends millions of tax dollars on bag scanners and other technology “that their own analysis shows misses weapons and aren’t effective.”

“Many of us are not willing to criticize TSA because we want peace of mind and security when we travel. But the agency’s track record doesn’t inspire much confidence at all, so we shouldn’t accept facial recognition as a false safety blanket,” said Cahn.

Some leaders in the Senate said attempts to stop TSA’s facial recognition technology from scaling have not succeeded and new legislation is needed.

“Passengers should not have to choose between safety and privacy when they travel. Despite our repeated calls for TSA to halt its unacceptable use of facial recognition technologies, the agency has continued to expand its use across the country,” Sen. Markey said in a statement.

Homeland Security adds facial comparison, machine learning uses to AI inventory https://fedscoop.com/homeland-security-adds-facial-comparison-machine-learning-uses-to-ai-inventory/ Thu, 09 Nov 2023 16:27:49 +0000 Inventory now includes a facial comparison tool being used by the Transportation Security Administration and Customs and Border Protection.

The Department of Homeland Security recently updated its artificial intelligence use case inventory to reflect several uses of the technology that have already been made public elsewhere, including facial comparison and machine learning tools used within the department.

The additions include U.S. Customs and Border Protection’s use of its Traveler Verification Service, a tool that deploys facial comparison technology to verify a traveler’s identity, in addition to the Transportation Security Administration’s deployment of that same tool for the PreCheck process.

The department also added the Federal Emergency Management Agency’s geospatial damage assessments, which use machine learning and machine vision to assess damage caused by a disaster, and CBP’s use of AI to inform port of entry risk assessment decisions.

While the four additions were picked up by a website tracker used by FedScoop on Oct. 31, all appear to have already been public elsewhere (three for at least a year), underscoring existing concerns that agency inventories don’t reflect the full scope of publicly known AI uses.

When asked why the uses were added now, a DHS spokesperson pointed to its process for evaluating public disclosure.

“Due to DHS’s sensitive law enforcement and national security missions, we have a rigorous internal process for evaluating whether certain sensitive AI Use Cases are safe to share externally. These use cases have recently been cleared for sharing externally,” the spokesperson said in an emailed statement.

Aside from the Department of Defense, the intelligence community and independent regulatory agencies, federal agencies are required to publicly post their uses of AI in an annual inventory under a Trump-era executive order. But agencies have so far been inconsistent in the categories they include, their formats and their timing. Among the concerns researchers and advocates have pointed to is the apparent exclusion of publicly known uses from the inventories.

Use of the Traveler Verification Service facial comparison technology has been referenced elsewhere on TSA’s website since at least early 2021 and on CBP’s website since at least 2019, according to pages archived by the Wayback Machine. And according to a Government Accountability Office report, the Traveler Verification Service was developed and implemented in 2017. The use of AI for geospatial damage assessments has also appeared on FEMA’s website since August 2022, according to the Wayback Machine’s archive.

The spokesperson also noted that DHS Chief Information Officer and Chief AI Officer Eric Hysen testified on CBP’s port of entry risk assessment use case at a September hearing before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation.

Ben Winters, a senior counsel at the Electronic Privacy Information Center who also leads its AI and Human Rights Project, called the lack of promptness and completeness in the disclosure “concerning.”

“AI use case inventories are only as valuable as compliance with them is. It illustrates why the government does not have the adequate oversight, transparency, and accountability mechanisms in place to continue using or purchasing sensitive AI tools at this time,” Winters said in an emailed statement to FedScoop.

He added that he hopes the Office of Management and Budget guidance “does not broadly exempt these types of ‘national security’ tools and DHS chooses to prioritize transparency and accountability moving forward.”

Currently, there isn’t a clear process for agencies to add or remove use cases from their inventories. In the past, OMB has said that agencies “are responsible for maintaining the accuracy of their inventories.”

DHS previously added and removed several uses of AI in August. At that time, it added Immigration and Customs Enforcement’s use of facial recognition technology, as well as CBP’s use of technology to identify “proof of life” and prevent fraud on an agency app. It also removed a reference to a TSA system that it described as an algorithm to address COVID-19 risks at airports.

The agency also expects to soon release more information about its work with generative AI, according to the spokesperson.

“DHS is actively exploring pilots of generative AI technology across our mission areas and expects to have more to share in the coming weeks,” the spokesperson said.
