NAIAC law enforcement subcommittee Archives | FedScoop
https://fedscoop.com/tag/naiac-law-enforcement-subcommittee/

How often do law enforcement agencies use high-risk AI? Presidential advisers want answers
https://fedscoop.com/summary-reports-facial-recognition-high-risk-ai/
Tue, 30 Apr 2024

A national AI advisory group will recommend this week that law enforcement agencies be required to create and publish annual summary usage reports for facial recognition and other AI tools of that kind.

A visit with the Miami Police Department by a group of advisers to the president on artificial intelligence may ultimately inform how federal law enforcement agencies are required to report their use of facial recognition and other AI tools of that kind.

During a trip to South Florida earlier this year, Law Enforcement Subcommittee members on the National AI Advisory Committee asked MPD leaders how many times they used facial recognition software in a given year. The answer they got was “around 40.” 

“That just really changes the impression, right? It’s not like everyone’s being tracked everywhere,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “On the other hand, we can imagine that there could be a technology that seems relatively low-risk, but based on how often it’s used … the public understanding of it should change.” 

Based in part on that Miami fact-finding mission, Bambauer’s subcommittee on Thursday will recommend to the full NAIAC body that federal law enforcement agencies be required to create and publish yearly summary usage reports for safety- or rights-impacting AI. Those reports would be included in each agency’s AI use case inventory, in accordance with Office of Management and Budget guidance finalized in March.

Bambauer said NAIAC’s Law Enforcement Subcommittee, which also advises the White House’s National AI Initiative Office, came to the realization that simply listing certain types of AI tools in agency use case inventories “doesn’t tell us much about the scope” or the “quality of its use in a real-world way.” 

“If we knew an agency, for example, was using facial recognition, some observers would speculate that it’s a fundamental shift into a sort of surveillance state, where our movements will be tracked everywhere we go,” Bambauer said. “And others said, ‘Well, no, it’s not to be used that often, only when the circumstances are consistent … with the use limitations.’” 

The draft recommendation calls on federal law enforcement agencies to include in their annual usage reports a description of the technology and the number of times it has been used that year, as well as the purpose of the tool and how many people used it. The report would also include total annual costs for the tool and detail when it was used on behalf of other agencies. 
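As a rough illustration only, the per-tool annual report described above could be modeled as a simple record; the field names below are hypothetical and are not taken from the draft recommendation itself:

```python
from dataclasses import dataclass

# Hypothetical sketch of the per-tool annual summary report described in the
# draft recommendation; field names and sample values are illustrative.
@dataclass
class AnnualUsageReport:
    tool_description: str          # what the technology is and does
    year: int
    times_used: int                # number of uses that year
    purpose: str                   # stated purpose of the tool
    operator_count: int            # how many people used it
    total_annual_cost_usd: float   # total annual cost of the tool
    uses_for_other_agencies: int   # uses on behalf of other agencies

report = AnnualUsageReport(
    tool_description="Facial recognition search against booking photos",
    year=2023,
    times_used=40,
    purpose="Generate investigative leads",
    operator_count=6,
    total_annual_cost_usd=85_000.0,
    uses_for_other_agencies=3,
)
print(report.times_used)  # → 40
```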

The subcommittee had previously tabled discussions of the public summary reporting requirement for the use of high-risk AI, but after some refinement brought it back into conversation during an April 5 public meeting of the group. 

Anthony Bak, head of AI at Palantir, said during that meeting that the goal of the recommendation was to make “the production” of those summary statistics a “very low lift for the agencies that are using AI.” Internal IT systems that track AI use cases within law enforcement agencies “should be able to produce these statistics very easily,” he added.

Beyond the recommendation’s topline takeaway on reporting the frequency of AI usage, Bak said the proposed rule would also provide law enforcement agencies with a “gut check for AI use case policy adherence.”

If an agency says they’re using an AI tool “only for certain kinds of crimes” and then they’re reporting for a “much broader category of crimes, you can check that very quickly and easily with these kinds of summary statistics,” Bak said.
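The "gut check" Bak describes amounts to comparing the offense categories that appear in a tool's usage log against the categories the agency's use policy authorizes. A minimal sketch, with entirely invented data:

```python
# Hypothetical sketch of a policy-adherence check: flag any logged use whose
# offense category falls outside the agency's stated use policy.
authorized_offenses = {"homicide", "robbery", "kidnapping"}

usage_log = [
    {"case_id": "24-0011", "offense": "robbery"},
    {"case_id": "24-0107", "offense": "homicide"},
    {"case_id": "24-0342", "offense": "shoplifting"},  # outside policy
]

out_of_policy = [u for u in usage_log if u["offense"] not in authorized_offenses]
if out_of_policy:
    print(f"{len(out_of_policy)} use(s) fall outside the stated policy")
```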

Benji Hutchinson, a subcommittee member and chief revenue officer of Rank One Computing, said that from a commercial and technical perspective, it wouldn’t be an “overly complex task” to produce these summary reports. The challenges would come in coordination and standardization.

“Being able to make sure that we have a standard approach to how the systems are built and implemented is always the tough thing,” Hutchinson said. “Because there’s just so many layers to state and local and federal government and how they share their data. And there’s all sorts of different MOUs in place and challenges associated with that.”

The subcommittee seemingly aimed to address the standardization issue by noting in its draft that summary statistics “should include counts by type of case or investigation” according to definitions spelled out in the Uniform Crime Reporting Program’s National Incident-Based Reporting System. Data submitted to NIBRS — which includes victim details, known offenders, relationships between offenders and victims, arrestees, and property involved in crimes — would be paired with information on the source of the image and the person conducting the search.
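Producing counts by type of case, as the draft suggests, is a straightforward aggregation once each use is logged with an offense category. A sketch under that assumption (log entries and category labels are invented, not actual NIBRS data):

```python
from collections import Counter

# Hypothetical sketch: aggregate a tool's usage log into counts by offense
# category, alongside the image source and searcher recorded for each use.
usage_log = [
    {"offense": "Robbery", "image_source": "CCTV still", "searcher": "Det. A"},
    {"offense": "Robbery", "image_source": "Social media", "searcher": "Det. B"},
    {"offense": "Motor Vehicle Theft", "image_source": "Patrol photo", "searcher": "Det. A"},
]

counts_by_offense = Counter(entry["offense"] for entry in usage_log)
print(counts_by_offense)  # Counter({'Robbery': 2, 'Motor Vehicle Theft': 1})
```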

The Law Enforcement Subcommittee plans to deliver two other recommendations to NAIAC members Thursday: The first is the promotion of a checklist for law enforcement agencies “to test the performance of an AI tool before it is fully adopted and integrated into normal use,” per a draft document, and the second encourages the federal government to invest in the development of statewide repositories of body-worn camera footage that can be accessed and analyzed by academic researchers.

Those recommendations serve as a continuation of the work that the Law Enforcement Subcommittee has prioritized this year. During February’s NAIAC meeting, Bambauer delivered recommendations to amend Federal CIO Council guidance on sensitive use case and common commercial product exclusions from agency inventories. Annual summary usage reports for safety- and rights-impacting AI align with an overarching goal to create more comprehensive use case inventories. 

“We want to sort of prompt a public accounting of whether the use actually seems to be in line with expectations,” Bambauer said.

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions
https://fedscoop.com/ai-advisory-law-enforcement-use-case-recommendations/
Wed, 28 Feb 2024

The National AI Advisory Committee’s Law Enforcement Subcommittee voted unanimously to edit CIO Council recommendations on sensitive use case and common commercial product exclusions, moves intended to broaden law enforcement agency inventories.

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omission of those technologies from the Department of Justice’s AI use case inventory late last year came as a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a Law Enforcement Subcommittee member on the National AI Advisory Committee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” resulted in technologies such as automated license plate readers and voice spoofing often being left on the cutting-room floor. 

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 
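The inclusion rule the subcommittee’s edits point toward can be stated compactly: start from a presumption of disclosure, and do not let the common-commercial-product exemption cover rights- or safety-impacting uses. A sketch of that decision logic (names and structure are illustrative, not drawn from the CIO Council text):

```python
# Hypothetical sketch of the recommended inventory-inclusion rule: a use case
# is inventoried whenever it impacts rights or safety, regardless of whether
# it ships inside a common commercial product.
def should_inventory(rights_or_safety_impacting: bool,
                     common_commercial_product: bool) -> bool:
    if rights_or_safety_impacting:
        return True                       # ALPRs, facial recognition, etc.
    return not common_commercial_product  # plain autocomplete stays exempt

print(should_inventory(True, common_commercial_product=True))   # → True
print(should_inventory(False, common_commercial_product=True))  # → False
```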

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”
