AI use case inventory Archives | FedScoop
https://fedscoop.com/tag/ai-use-case-inventory/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

White House hopeful ‘more maturity’ of data collection will improve AI inventories
https://fedscoop.com/white-house-hopes-data-collection-maturity-improves-ai-inventories/
Mon, 22 Apr 2024 20:24:55 +0000

Communication and skills for collecting and sorting the information in artificial intelligence inventories have gotten better, Deputy Federal CIO Drew Myklegard told FedScoop.

The post White House hopeful ‘more maturity’ of data collection will improve AI inventories appeared first on FedScoop.
An expansion of the process for agencies’ AI use case inventories outlined in the Office of Management and Budget’s recent memo will benefit from “clearer directions and more maturity of collecting data,” Deputy Federal Chief Information Officer Drew Myklegard said.

Federal CIO Clare Martorana has “imbued” the idea of “iterative policy” within administration officials, Myklegard said in an interview Thursday with FedScoop at Scoop News Group’s AI Talks. “We’re not going to get it right the first time.” 

As the inventories, which were established under a Trump-era executive order, enter their third year of collection, Myklegard said agencies have a better idea of what they’re buying, and communication — as well as the skills for collecting and sorting the data — is improving. 

On the same day OMB released its recent memo outlining a governance strategy for artificial intelligence in the federal government, it also released new, expansive draft guidance for agencies’ 2024 AI use case inventories. 

Those inventories have, in the past, suffered from inconsistencies and even errors. While they’re required to be published publicly and annually by certain agencies, the disclosures have varied widely in terms of things like the type of information contained, format, and collection method.

Now, the Biden administration is seeking to change that. Under the draft, information about each use case would now be collected via a form, and agencies would be required to post a “machine-readable” comma-separated values (CSV) inventory of their public uses to their websites, in addition to other changes. The White House is currently soliciting feedback on that draft guidance, though a deadline for those comments isn’t clear.
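A machine-readable CSV inventory is one that a script, not just a reader, can consume. As a minimal sketch of what that enables, the snippet below parses a hypothetical inventory and filters it; the column names are illustrative assumptions, not OMB's actual schema.

```python
import csv
import io

# Hypothetical rows in the kind of machine-readable CSV inventory the draft
# guidance describes. These column names are assumptions for illustration.
INVENTORY_CSV = """use_case_name,bureau,stage,rights_or_safety_impacting
Document triage model,Office of X,in production,no
Biometric identity matching,Office of Y,in production,yes
Chatbot pilot,Office of Z,pilot,no
"""

def load_inventory(text):
    """Parse a machine-readable use case inventory into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def rights_or_safety_impacting(rows):
    """Filter to the uses flagged as rights- or safety-impacting."""
    return [r for r in rows if r["rights_or_safety_impacting"] == "yes"]

rows = load_inventory(INVENTORY_CSV)
flagged = rights_or_safety_impacting(rows)
print([r["use_case_name"] for r in flagged])  # → ['Biometric identity matching']
```

The same query against a PDF or free-form webpage disclosure would require manual reading, which is the consistency problem the draft guidance targets.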

In the meantime, agencies are getting to work on a host of other requirements OMB outlined in the new AI governance memo. According to Myklegard, the volume of comments was the highest the administration had seen on an OMB memo.

“We were really surprised. It’s the most comments we’ve received from any memo that we’ve put out,” Myklegard said during remarks on stage at AI Talks. He added that “between those we really feel like we were able to hear you.”

The memo received 196 public comments, according to Regulations.gov. By comparison, OMB’s previous guidance on the Federal Risk and Authorization Management Program (FedRAMP) process drew 161.

Among the changes in the final version of that memo were several public disclosure requirements, including requiring civilian agencies and the Defense Department to report aggregate metrics about AI uses not published in an inventory, and requiring agencies to report information about the new determinations and waivers they can issue for uses that are assumed to be rights- and safety-impacting under the memo. 

Myklegard told FedScoop those changes are an example of the iterative process that OMB is trying to take. When OMB seeks public input on memos, which Myklegard said hasn’t happened often in the past, “we realize areas in our memos that we either missed and need to address, or need to clarify more, and that was just this case.”

Another addition to the memo was encouragement for agencies to name an “AI Talent Lead.” That individual will serve “for at least the duration of the AI Talent Task Force” and be responsible for tracking AI hiring in their agency, providing data to the Office of Personnel Management and OMB, and reporting to agency leadership, according to the memo.

In response to a question about how that role came about, Myklegard pointed to the White House chief of staff’s desire to look for talent internally and the U.S. Digital Service’s leadership on that effort.

“It just got to a point that we felt we needed to formalize and … give agencies the ability to put that position out,” Myklegard said. The administration hopes “there’s downstream effects” of things like shared position descriptions (PDs), he added.

He specifically pointed to the Department of Homeland Security’s hiring efforts as an example of what the administration would like to see governmentwide. DHS CIO Eric Hysen has already hired multiple people with “good AI-specific skillsets” from the commercial sector, which is typically “unheard of” in government, Myklegard said.

In February, DHS launched a unique effort to hire 50 AI and machine learning experts and establish an AI Corps. The Biden administration has since said it plans to hire 100 AI professionals across the government by this summer. 

“We’re hoping that every agency can look to what Eric and his team did around hiring and adopt those same skills and best practices, because frankly, it’s really hard,” Myklegard said. 

State Department trims several uses from public AI inventory
https://fedscoop.com/state-department-removes-several-ai-uses/
Tue, 09 Apr 2024 20:01:25 +0000

Deletions include a Facebook ad system used for collecting media clips and behavioral analytics for online surveys.

The post State Department trims several uses from public AI inventory appeared first on FedScoop.
The Department of State recently removed several items from its public artificial intelligence use case inventory, including a behavioral analytics system and tools to collect and analyze media clips.

In total, the department removed nine items from its website — several of which appeared to be identical use cases listed under two different bureaus — and changed the bureau for a handful of the remaining items. The State Department didn’t respond to FedScoop’s requests for comment on why those uses were removed or changed.

The deletions came roughly a week after the Office of Management and Budget released draft guidance for 2024 inventories that says, among many other requirements, that agencies “must not remove retired or decommissioned use cases that were included in prior inventories, but instead mark them as no longer in use.” OMB has previously stated that agencies “are responsible for maintaining the accuracy of their inventories.”

AI use case inventories — which are public, annual disclosures first required by a Trump-era executive order — have so far lacked consistency. Other agencies have also made changes to their inventories outside the annual schedule, including the Department of Transportation and the Department of Homeland Security. OMB’s recent draft guidance and memo on AI governance seek to enhance and expand what is reported in those disclosures.

OMB declined to comment on the removals or whether it’s given agencies guidance on deleting items in their current inventories.

Notably, the department removed a use case titled “forecasting,” which was a pilot using statistical models to forecast outcomes that the agency told FedScoop last year it had shuttered. The description for the use case stated that it had been “applied to COVID cases as well as violent events in relation to tweets.” 

Several of the other deleted State Department uses were related to media and digital content. 

For example, the agency removed the disclosure of a “Facebook Ad Test Optimization System” that it said was used to collect media clips from around the world, a “Global Audience Segmentation Framework” it reported using to analyze “media clips reports” from embassy public affairs sections, and a “Machine-Learning Assisted Measurement and Evaluation of Public Outreach” that it said was used for “collecting, analyzing, and summarizing the global digital content footprint of the Department.” 

State also removed its disclosure of “Behavioral Analytics for Online Surveys Test (Makor Analytics),” which the agency said was a pilot that “aims to provide additional information beyond self-reported data that reflects sentiment analysis in the country of interest.” That use case had been listed under the Bureau of Information Resource Management and the Under Secretary for Public Diplomacy and Public Affairs. Both references were removed.

Two of the removed items had been listed under two bureaus but had only one disclosure removed: an AI tool for “identifying similar terms and phrases based off a root word” and a use for “optical character recognition and natural language processing on Department cables.”

Another removed use was for a “Verified Imagery Pilot Project” by the Bureau of Conflict and Stabilization Operations. That pilot tested “how the use of a technology service, Sealr, could verify the delivery of foreign assistance to conflict-affected areas where neither” the department nor its “implementing partner could go.”

While the use case inventory was trimmed down, the department also appears to be adding uses of AI to its operations. State Chief Information Officer Kelly Fletcher recently announced that the department was launching an internal AI chatbot to help with things like translation after staff requested such a tool. 

Rebecca Heilweil and Caroline Nihill contributed to this report.

AI talent role, releasing code, deadline extension among additions in OMB memo
https://fedscoop.com/ai-talent-role-releasing-code-deadline-extension-among-additions-in-omb-memo/
Fri, 29 Mar 2024 16:40:52 +0000

Requiring the release of custom AI code, designating an “AI Talent Lead,” and extending deadlines were among the changes made to the final version of a White House memo on AI governance.

The post AI talent role, releasing code, deadline extension among additions in OMB memo appeared first on FedScoop.
Additions and edits to the Office of Management and Budget’s final memo on AI governance create additional public disclosure requirements, provide more compliance time to federal agencies, and establish a new role for talent.

The policy, released Thursday, corresponds with President Joe Biden’s October executive order on AI and establishes a framework for federal agency use and management of the technology. Among the requirements, agencies must now vet their AI uses for risks, expand what they share in their annual AI use case inventories, and select a chief AI officer.

While the final version largely tracks with the draft version that OMB published for public comment in November, there were some notable changes. Here are six of the most interesting alterations and additions to the policy: 

1. Added compliance time: The new policy changes the deadline for agencies to be in compliance with risk management practices from Aug. 1 to Dec. 1, giving agencies four more months than the draft version. The requirement states that agencies must implement risk management practices or stop using safety- or rights-impacting AI tools until the agency is in compliance. 

In a document published Thursday responding to comments on the draft policy, OMB said it received feedback that the August deadline was “too aggressive” and that timeline didn’t account for action OMB is expected to take later this year on AI acquisition. 

2. Sharing code, data: The final memo adds an entirely new section requiring agencies to share custom-developed AI code model information on an ongoing basis. Agencies must “release and maintain that code as open source software on a public repository” under the memo, unless sharing it would pose certain risks or it’s restricted by law, regulation, or contract.

Additionally, the memo states that agencies must share and release data used to test AI if it’s considered a “data asset” under the Open, Public, Electronic and Necessary (OPEN) Government Data Act, a federal law that requires such information to be published in a machine-readable format.

Agencies are required to share whatever information they can, even if a portion of it can’t be released publicly. The policy further states that agencies should, where they’re able, share resources that can’t be released without restrictions through federally operated means that allow controlled access, such as the National AI Research Resource (NAIRR).

3. AI Talent Lead: The policy also states agencies should designate an “AI Talent Lead,” which didn’t appear in the draft. That official, “for at least the duration of the AI Talent Task Force, will be accountable for reporting to agency leadership, tracking AI hiring across the agency, and providing data to [the Office of Personnel Management] and OMB on hiring needs and progress,” the memo says. 

The task force, which was established under Biden’s AI executive order, will provide that official with “engagement opportunities to enhance their AI hiring practices and to drive impact through collaboration across agencies.” The memo also stipulates that agencies must follow hiring practices in OPM’s forthcoming AI and Tech Hiring Playbook.

Biden’s order placed an emphasis on AI hiring in the federal government, and so far OPM has authorized direct-hire authority for AI roles and outlined incentives for attracting and retaining AI talent. 

4. Aggregate metrics: Agencies and the Department of Defense will both have to “report and release aggregate metrics” for AI uses that aren’t included in their public inventory of use cases under the new memo. The draft version included only the DOD in that requirement, but the version released Thursday added federal agencies.

Those disclosures, which will be annual, will provide information about how many of the uses are rights- and safety-impacting and their compliance with the standards for those kinds of uses outlined in the memo. 

The use case inventories, which were established by a Trump-era executive order and later enshrined in federal statute, have so far lacked consistency across agencies. The memo and corresponding draft guidance for the 2024 inventories seek to enhance and expand those reporting requirements.

5. Safety, rights determinations: The memo also added a new requirement that agencies have to validate the determinations and waivers that CAIOs make on safety- and rights-impacting use cases, and publish a summary of those decisions on an annual basis. 

Under the policy, CAIOs can determine that an AI application presumed to be safety- or rights-impacting — a category that includes a wide array of uses, such as election security and biometric identification — doesn’t match the memo’s definitions for what should be considered safety- or rights-impacting. CAIOs may also waive certain requirements for those uses.

While the draft stipulated that agencies should report lists of rights- and safety-impacting uses to OMB, the final memo instead requires the annual validation of those determinations and waivers and public summaries.

In its response to comments, OMB said it made the update to address concerns from some commenters that CAIOs “would hold too much discretion to waive the applicability of risk management requirements to particular AI use cases.” 

6. Procurement considerations: Three procurement recommendations related to test data, biometric identification, and sustainability were also added to the final memo. 

On testing data, OMB recommends agencies ensure developers and vendors aren’t using test data that an agency might employ to evaluate an AI system to train that system. For biometrics, the memo also encourages agencies to assess risks and request documentation on accuracy when procuring AI systems that use identifiers such as faces and fingerprints. 
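The test-data concern is a standard evaluation pitfall: if a vendor trains a system on the same records an agency will later use to evaluate it, the evaluation scores are contaminated. The sketch below shows one way an agency might screen for that overlap; it is an illustration of the underlying problem, not a procedure prescribed by the memo.

```python
# Toy illustration of train/test contamination: find evaluation records
# that also appear verbatim in a vendor's training data. Real screening
# would be more involved (near-duplicates, paraphrases), but the principle
# is the same: overlapping records inflate measured accuracy.
def contamination(train_records, test_records):
    """Return the test records that also appear in the training set."""
    train_set = {tuple(sorted(r.items())) for r in train_records}
    return [r for r in test_records if tuple(sorted(r.items())) in train_set]

train = [{"text": "claim approved", "label": 1},
         {"text": "claim denied", "label": 0}]
test = [{"text": "claim denied", "label": 0},
        {"text": "appeal filed", "label": 1}]
print(contamination(train, test))  # → [{'text': 'claim denied', 'label': 0}]
```

A nonempty result means the agency's evaluation set can't be trusted to measure how the system behaves on genuinely unseen cases.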

And finally on sustainability, the memo includes a recommendation that agencies consider the environmental impact of “computationally intensive” AI systems. “This should include considering the carbon emissions and resource consumption from supporting data centers,” the memo said. That addition was a response to commenters who wanted the memo to expand risk assessment requirements to include environmental considerations, according to OMB.

White House unveils AI governance policy focused on risks, transparency
https://fedscoop.com/white-house-unveils-ai-governance-policy/
Thu, 28 Mar 2024 09:00:00 +0000

The Office of Management and Budget memo released Thursday finalizes draft guidance issued after Biden’s artificial intelligence executive order.

The post White House unveils AI governance policy focused on risks, transparency appeared first on FedScoop.
The White House released its much-anticipated artificial intelligence governance policy Thursday, establishing a roadmap for federal agencies’ management and usage of the budding technology.

The 34-page memo from Office of Management and Budget Director Shalanda D. Young corresponds with President Joe Biden’s October AI executive order, providing more detailed guardrails and next steps for agencies. It finalizes a draft of the policy that was released for public comment in November. 

“This policy is a major milestone for President Biden’s landmark AI executive order, and it demonstrates that the federal government is leading by example in its own use of AI,” Young said in a call with reporters before the release of the memo. 

Among other things, the memo mandates that agencies establish guardrails for AI uses that could impact Americans’ rights or safety, expands what agencies share in their AI use case inventories, and establishes a requirement for agencies to designate chief AI officers to oversee their use of the technology. 

Vice President Kamala Harris highlighted those three areas on the call with the press, noting those “new requirements have been shaped in consultation with leaders from across the public and private sectors, from computer scientists to civil rights leaders, to legal scholars and business leaders.”

“President Biden and I intend that these domestic policies will serve as a model for global action,” Harris said.

In addition to the memo, Young announced that the National AI Talent Surge established under the order will hire “at least 100 AI professionals into government by this summer.” She also said OMB will take action later this year on federal procurement of AI and is releasing a request for information on that work.

Under the policy, agencies are required to evaluate and monitor how AI could impact the public and mitigate the risk of discrimination. That includes things like allowing people at the airport to opt out of the Transportation Security Administration’s use of facial recognition “without any delay or losing their place in line,” or requiring a human to oversee the use of AI in health care diagnostics, according to a fact sheet provided by OMB.

Additionally, the policy expands existing disclosures that agencies must share publicly and annually that inventory their AI uses. Those inventories must now identify whether a use is rights- or safety-impacting. The Thursday memo also requires agencies to submit aggregate metrics about use cases that aren’t required to be included in the inventory. In the draft, the requirement for aggregate metrics applied only to the Department of Defense.

The policy also requires agencies to designate a CAIO to oversee and manage AI uses within 60 days of the memo’s publication. Many agencies have already started naming people for those roles, which have tended to go to chief information, data and technology officials. 

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said of the CAIO role.

Biden administration working on ‘enhancing’ AI use case reporting, Martorana says
https://fedscoop.com/biden-administration-working-on-enhancing-ai-use-case-reporting-martorana-says/
Tue, 05 Mar 2024 23:39:31 +0000

Improving agency AI use case reporting includes efforts to make the disclosures more searchable, the Federal CIO said Tuesday.

The post Biden administration working on ‘enhancing’ AI use case reporting, Martorana says appeared first on FedScoop.
The Biden administration’s efforts to improve reporting of artificial intelligence use case inventories include efforts to make them more searchable, the government’s top IT official said Tuesday.

“We’re working really hard to make sure that we’re enhancing those use cases … with metadata so that we can search them and really interrogate them, rather than just collect them and broadcast them — really to get key learnings from those,” Federal CIO Clare Martorana told reporters at a Federal CIO Council symposium Tuesday. 
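The difference Martorana describes — interrogating use cases rather than just broadcasting them — comes down to attaching structured metadata that queries can run against. The sketch below shows the idea with made-up fields; the metadata schema is an assumption for illustration, not the administration's actual design.

```python
# Illustrative only: use cases tagged with structured metadata so they can
# be queried, not just listed. Field names and values are hypothetical.
USE_CASES = [
    {"name": "Cable summarization", "agency": "State",
     "metadata": {"technique": "nlp", "stage": "pilot"}},
    {"name": "Facial recognition at checkpoints", "agency": "DHS",
     "metadata": {"technique": "computer-vision", "stage": "production"}},
    {"name": "Threat scoring", "agency": "DHS",
     "metadata": {"technique": "predictive-model", "stage": "production"}},
]

def search(use_cases, **criteria):
    """Return use cases whose metadata matches every given key=value pair."""
    return [u for u in use_cases
            if all(u["metadata"].get(k) == v for k, v in criteria.items())]

# Which uses are in production? Which use NLP?
print([u["name"] for u in search(USE_CASES, stage="production")])
print([u["name"] for u in search(USE_CASES, technique="nlp")])
```

Without the metadata layer, answering even these simple questions would mean reading every inventory entry by hand, which is the "collect and broadcast" mode the administration says it wants to move past.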

The White House has previously indicated the inventories will be more central to understanding how agencies are using the technology going forward. In fact, draft Office of Management and Budget guidance that corresponded to President Joe Biden’s AI executive order proposed expanding the inventories with information about safety- and rights-impacting AI, the risks of uses, and how those risks are being managed.

“Federal agencies have special responsibility to get AI governance right, and we believe this policy will continue our global leadership,” Martorana said of that guidance in a keynote address earlier in the day.

The draft guidance was released in November shortly after Biden’s AI order and would establish a framework for agencies to carry out the administration’s policies for the budding technology. It included, among other things, requirements for agencies to designate chief AI officers — which agencies have already been starting on — and expanding existing reporting on agency AI uses.

Martorana, while talking to reporters, said public comment was “critical” to the development process for the guidance, and noted equity and transparency as common themes in the comments received from interested parties.

With respect to transparency, Martorana pointed to the administration’s desire to improve agencies’ AI use case inventories, which were required initially under a Trump-era executive order and later enshrined into statute.

As of September, the federal government reported over 700 public uses of AI, demonstrating broad interest in and potential for the technology across agencies. Those inventories, which are required annually, have also so far been inconsistent in terms of format and the information included. 

MITRE researched air traffic language AI tool for FAA, documents show
https://fedscoop.com/mitre-air-traffic-conversation-ai-tool-faa-dot/
Wed, 14 Feb 2024 22:34:02 +0000

The Department of Transportation has been relatively mum about its work on AI.

The post MITRE researched air traffic language AI tool for FAA, documents show appeared first on FedScoop.
MITRE, a public interest research nonprofit that receives federal funding, proposed a system for transcribing and studying conversations between pilots and air traffic controllers, according to documents obtained by FedScoop through a public records request. 

A presentation dated August 2023 and titled “Advanced Capabilities for Capturing Controller-Pilot dialogue” shows that MITRE engaged in a serious effort to study how natural language processing could be used to help the Federal Aviation Administration, and, in particular, to help with “understanding the safety-related and routine operations of the National Airspace System.”

MITRE, which supported the project through its Center for Advanced Aviation System Development, told FedScoop that the prototype is currently being transitioned to the FAA for “potential operational implementation.” Otherwise, the current status of the tool isn’t clear, as the agency’s artificial intelligence use case inventory was last updated in July 2023, according to a DOT page. The FAA did not answer a request for comment and instead directed FedScoop to the Department of Transportation. 

“Communications between pilots and air traffic controllers are a crucial source of information and context for operations across the national airspace,” Greg Tennille, MITRE’s managing director for transportation safety, said in a statement to FedScoop in response to questions about the documents. “Collecting voice data accurately, efficiently and effectively can provide important insights into the national airspace and unearth trends and potential hazards.” 

The August 2023 presentation describes several ways that natural language processing, a type of AI focused on interpreting and understanding speech and text, could be fine-tuned to understand conversations between air traffic controllers and pilots. The project reported on the performance of different strategies and models in terms of accuracy and provided recommendations. At the end, it also describes a brief exploration of how ChatGPT might be able to help with comprehension of Air Traffic Control sub-dialogues, noting that “the results were surprisingly good.”
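Controller-pilot dialogue follows highly standardized phraseology, which is part of what makes it tractable for NLP. As a toy illustration of that structure (not MITRE's system, whose details aren't public), the snippet below pulls altitude instructions out of already-transcribed exchanges with a simple pattern match; the transcript lines and regex are invented for the example.

```python
import re

# Toy pass over invented, already-transcribed controller-pilot exchanges.
# Real ATC NLP must first handle speech-to-text over noisy radio audio;
# this only shows how regular the phraseology itself is.
TRANSCRIPT = [
    "Delta four two one climb and maintain flight level three five zero",
    "United nine eight contact departure one two four point seven",
]

# Standard phraseology makes instructions pattern-matchable.
ALTITUDE = re.compile(r"climb and maintain (flight level [\w ]+)")

def instructions(lines):
    """Extract altitude instructions from transcribed exchanges."""
    out = []
    for line in lines:
        m = ALTITUDE.search(line)
        if m:
            out.append(m.group(1))
    return out

print(instructions(TRANSCRIPT))  # → ['flight level three five zero']
```

A production system would use trained language models rather than hand-written patterns, but the regularity of the phraseology is what both approaches exploit.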

The presentation reveals how the often-overwhelmed aviation agency might try to take advantage of artificial intelligence and comes as the Biden administration continues to push federal agencies to look for ways to deploy the technology. 

At the same time, it also outlines potential interest in ChatGPT. While the Department of Transportation said it doesn’t have a relationship with OpenAI, other documents show that officials within the agency are interested in generative AI.  

The Department of Transportation and ChatGPT

The reference to ChatGPT in the project, though it appears to be provisional and not a core part of the research, is more evidence of how the Department of Transportation might use generative AI tools in the future. FedScoop previously reported, for example, that the DOT’s Pipeline and Hazardous Materials Safety Administration had disclosed a use case — described as “planned” and “not in production” — involving “ChatGPT to support the rulemaking processes.”

PHMSA, which said it’s continuing to study the short- and long-term risks and benefits associated with generative AI, has said it does not plan on using the technology for rulemaking. The agency also said that it has an agreement with Incentive Technology Group, worth several hundred thousand dollars, to explore generative AI pilots. 

PHMSA said that the project did not involve ChatGPT, but instead involved “Azure OpenAI Generative LLM version 3.5.” (OpenAI explains on its website that GPT-3.5 models can be used to understand and generate natural language or code, but PHMSA did not explain whether the reference to ChatGPT in the AI use case disclosure was a mistake or a distinct project from its work with Incentive Technology Group.)

Notably, while other agencies are beginning to develop policies for generative AI, the Department of Transportation has not responded to questions from FedScoop about what policies or guidance it might have surrounding the technology. 

Emails obtained by FedScoop through public records requests show that Chief Data Officer Dan Morgan had on hand a “generative AI” guidance document attributed to the government of New Zealand. An email last summer to the Department of Transportation’s AI Task Force from Matt Cuddy, operations research analyst at the DOT’s Volpe National Transportation Systems Center, shows that the agency had made large language models a topic of interest.

A publicly available document from 2019 said that through the task force, the DOT had made transportation-related AI “an agency research & development priority.” 

Last year, FedScoop reported that the Department of Transportation had disclosed the use of ChatGPT for code-writing assistance in its inventory, but then removed the entry and said it was made in error. The department has not responded to questions about how that error actually occurred. Emails obtained by FedScoop show that the incident attracted attention from Conrad Stosz, the artificial intelligence director in the Office of the Federal Chief Information Officer. 

In regard to this story, the Department of Transportation told FedScoop again that the FAA ChatGPT entry was made in error and that the “FAA does not use Chat GPT in any of its systems, including air traffic systems.” It also said that the use case was unrelated to the MITRE FAA project. 

DHS’s initial AI inventory included a cybersecurity use case that wasn’t AI, GAO says
https://fedscoop.com/dhs-ai-use-case-inventory-cybersecurity-gao-report/
Wed, 07 Feb 2024 22:31:58 +0000

A new watchdog report finds that the Department of Homeland Security wasn’t verifying whether use case submissions were actual examples of AI, raising “questions about the overall reliability” of the inventory.

The post DHS’s initial AI inventory included a cybersecurity use case that wasn’t AI, GAO says appeared first on FedScoop.
The Department of Homeland Security didn’t properly verify whether the artificial intelligence use cases for cybersecurity listed in its AI inventory were actual examples of the technology, according to a new Government Accountability Office report that calls into question the reliability of the agency’s full catalog.

DHS’s AI inventory, launched in 2022 to meet requirements called out in the Trump administration’s 2020 executive order on AI in the federal government, included 21 use cases across agency components, with two focused specifically on cybersecurity.

DHS officials told GAO that one of the two cyber use cases — Automated Scoring and Feedback, a predictive model intended to share cyber threat information — “was incorrectly characterized as AI.” The inclusion of AS&F “raises questions about the overall reliability of DHS’s AI Use Case Inventory,” the GAO stated.

“Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI,” the report noted. “Until it expands its process to include such determinations, DHS will be unable to ensure accurate use case reporting.”

The GAO faulted DHS for its failure to fully implement the watchdog’s 2021 AI Accountability Framework, noting that the agency only “incorporated selected practices” to “manage and oversee its use of AI for cybersecurity.”

That AI framework features 11 key practices for DHS’s management, operations and oversight of AI in cybersecurity, covering everything from governance and data to performance and monitoring. The agency’s Chief Technology Officer Directorate reviewed all 21 use cases listed in the launch of DHS’s use case inventory, but additional steps to determine whether a use case “was characteristic of AI” did not occur, the report said.

“CTOD officials said they did not independently verify systems because they rely on components and existing IT governance and oversight efforts to ensure accuracy,” the GAO said. “According to experts who participated in the Comptroller General’s Forum on Artificial Intelligence, existing frameworks and standards may not provide sufficient detail on assessing social and ethical issues which may arise from the use of AI systems.”

The GAO offered eight recommendations to DHS, including expanding the agency’s AI review process, adding steps to ensure the accuracy of inventory submissions, and fully implementing the watchdog’s AI framework practices. DHS agreed with all eight recommendations, the report noted.

“Ensuring responsible and accountable use of AI will be critical as DHS builds its capabilities to use AI for its operations,” the GAO stated. “By fully implementing accountability practices, DHS can promote public trust and confidence that AI can be a highly effective tool for helping attain strategic outcomes.”

The DHS report follows earlier GAO findings of “incomplete and inaccurate data” in agencies’ AI use case inventories. A December 2023 report from the watchdog characterized most inventories as “not fully comprehensive and accurate,” a conclusion that matched previous FedScoop reporting.

Amazon says DOJ disclosure doesn’t indicate violation of facial recognition moratorium https://fedscoop.com/amazon-response-doj-fbi-use-rekognition-software/ Sat, 27 Jan 2024 02:43:20 +0000 https://fedscoop.com/?p=75755 The statement came after FedScoop reporting noting that, according to the DOJ, the FBI is in the “initiation” phase of using Rekognition.

A Department of Justice disclosure that the FBI is in the “initiation” phase of using Amazon’s Rekognition tool for a project doesn’t run afoul of the company’s moratorium on police use of the software, an Amazon spokesperson said in response to FedScoop questions Friday.

The statement comes after FedScoop reported Thursday that the DOJ disclosed in its public inventory of AI use cases that the FBI was initiating use of Rekognition as part of something called “Project Tyr.” The disclosure is significant because Amazon had previously extended a moratorium on police use of Rekognition, though the company did not originally clarify how that moratorium might apply to federal law enforcement. 

In an emailed response to FedScoop, Amazon spokesperson Duncan Neasham said: “We imposed a moratorium on police departments’ use of Amazon Rekognition’s face comparison feature in connection with criminal investigations in June 2020, and to suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.”

According to Amazon’s terms of service, the company placed a moratorium on the “use of Amazon Rekognition’s face comparison feature by police departments in connection with criminal investigations. This moratorium does not apply to use of Amazon Rekognition’s face comparison feature to help identify or locate missing persons.”

The company’s public statement about its one-year moratorium in 2020, which was reportedly extended indefinitely, stated that it applied to “police use of Rekognition.” That statement did not specifically call out the “face comparison feature” or use of the tool related to criminal investigations.

Neasham further stated on Friday that Amazon believes “governments should put in place regulations to govern the ethical use of facial recognition technology, and we are ready to help them design appropriate rules, if requested.”

The description of the use case in DOJ’s AI inventory doesn’t mention the term “facial recognition,” but it states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.” Neither Amazon nor the DOJ has answered FedScoop’s questions about whether the FBI had access to facial recognition technology through this work.

Civil liberties advocates told FedScoop that the use case surprised them, given Amazon’s previous statements on facial recognition, Rekognition, and police.

“After immense public pressure, Amazon committed to not providing a face recognition product to law enforcement, and so any provision of Rekognition to DOJ would raise serious questions about whether Amazon has broken that promise and engaged in deception,” American Civil Liberties Union of Northern California attorney Matt Cagle said in a Thursday statement to FedScoop.  

DOJ spokesperson Wyn Hornbuckle did not address several aspects of the project but provided a statement pointing to the agency’s creation of an Emerging Technologies Board to “coordinate and govern AI and other emerging technology issues across the Department.” The FBI declined to comment through the DOJ.

Labor Department names deputy CIO Louis Charlier as chief AI officer https://fedscoop.com/louis-charlier-named-dol-chief-ai-officer/ Thu, 04 Jan 2024 18:29:46 +0000 https://fedscoop.com/?p=75473 Charlier was already handling the role of responsible AI official for the department.

The Labor Department has named Deputy Chief Information Officer Louis Charlier as its chief AI officer, a department spokesperson confirmed to FedScoop on Thursday. 

Charlier’s new role comes as agencies across the federal government have been designating their CAIOs following President Joe Biden’s October AI executive order. That order requires agencies to designate such an official after the corresponding Office of Management and Budget guidance is finalized, but many agencies are getting a head start. 

So far, 14 of the 24 Chief Financial Officer Act agencies have named a CAIO.

As CAIO, Charlier will be responsible for coordinating the department’s use of AI, promoting AI innovations, and managing risks associated with the technology. DOL’s public inventory for AI, which is required of agencies, includes 18 use cases for the technology, including chatbots, document processing, and audio transcription. 

Charlier was already handling the role of responsible AI official, which was required by a Trump administration executive order on AI.

According to his DOL biography, Charlier “has more than 30 years of leadership and transformational experience in the military, private, and public sectors initiating and implementing enterprise-wide, IT capabilities and strategies.” Charlier has been at DOL for more than 17 years, according to his LinkedIn profile.

Rebecca Heilweil contributed to this article.

DOJ AI tool has been in pilot stage for over three years https://fedscoop.com/doj-ai-tool-has-been-in-pilot-stage-for-over-three-years/ Wed, 03 Jan 2024 20:27:41 +0000 https://fedscoop.com/?p=75449 A 2020 AI strategy document for the agency listed AI adoption as a priority.

An artificial intelligence tool designed to help the Department of Justice consolidate records is still in the pilot stage, despite being in operation for more than three years. 

The system, which is called the Intelligent Records Consolidation Tool, was built with the help of an IT consulting group called the Savan Group and is maintained by the agency’s Justice Management Division. 

The software is supposed to assist in measuring the similarity of record schedules in order to reduce the time spent by the records manager, according to a description in the DOJ’s artificial intelligence inventory. It’s one of just four AI use cases mentioned in the inventory, which is required by a 2020 Trump administration executive order. 

While documents obtained by FedScoop show that the tool had an “informal kick-off” in late June 2020, the system is still in a pilot stage, a DOJ spokesperson confirmed. The AI inventory describes the tool as having been in production for “more than 1 year” and also states that the tool has been used in “multiple information management domains for the past three years.” 

Meanwhile, it’s not clear whether there has been any recent discussion about the tool within the agency. Documents produced in response to a FedScoop public records request related to the tool were all dated 2020, and a DOJ spokesman did not answer FedScoop questions about why the tool had not progressed from the pilot stage.

The slow expansion is notable, particularly as the Biden administration pushes federal agencies to adopt AI to improve internal operations. A DOJ artificial intelligence strategy, dated December 2020 and listed on the Justice Management Division’s page, also states that the agency hopes to “promote successful use cases, pilots, proof-of-concepts, and knowledge sharing to accelerate the deployment and appropriate use of AI.”

The Savan Group did not respond to a request for comment.

“The Intelligent Records Consolidation Tool as [sic] an administrative tool being used in a pilot stage to explore potential efficiencies and quality improvements for records consolidation and categorization,” Wyn Hornbuckle, a spokesperson for the agency, said in an email to FedScoop. “The use of technology such as artificial intelligence (AI) to increase automation in record processing is an emerging and promising area, which the department continues to explore while ensuring that there is sufficient human monitoring and appropriate safeguards are established.”

Madison Alder contributed to this article. 
