EO 14110 Archives | FedScoop
https://fedscoop.com/tag/eo-14110/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community’s platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

HHS names acting chief AI officer as it searches for permanent official
https://fedscoop.com/hhs-names-acting-chief-ai-officer/ | Wed, 29 May 2024
Micky Tripathi will serve as acting CAIO in addition to his role as national coordinator for health IT, a spokesperson said.

The Department of Health and Human Services has designated Micky Tripathi, its national coordinator for health IT, as acting chief artificial intelligence officer while it searches for a permanent replacement, a department spokesperson confirmed to FedScoop.

“Micky has been a leading expert in our AI work and will provide tremendous expertise and relationships across HHS and externally to guide our efforts in the coming months,” the spokesperson said. “Micky already serves as co-chair of the HHS AI Task Force. He will continue in his role as National Coordinator for Health IT during the search for a permanent Chief AI Officer.”  

Greg Singleton, the previous CAIO, is still part of the agency’s IT workforce, the spokesperson confirmed. But they also noted that the Office of Management and Budget required agencies to designate CAIOs at the executive level in an effort to improve accountability for AI issues. 

HHS didn’t say when the department named Tripathi as acting CAIO, but the change appears to have been made recently on the agency’s website. Singleton was still listed as CAIO as of at least May 14, per a copy of HHS’s Office of the CAIO webpage archived in the Wayback Machine. As of this story’s publication, the webpage stated its content was last reviewed on May 24.

Under President Joe Biden’s AI executive order, a CAIO serves as the official in charge of promoting the use of the technology within an agency and managing its risks. The requirement to have such an official took effect 60 days after OMB’s memo on AI governance, which would have been May 27.

Many agencies moved quickly to designate CAIOs after the order, tapping officials such as chief information, data and technology officers to carry out the role. Other agencies already had a CAIO, including HHS and the Department of Homeland Security. In fact, the position at HHS has been around since 2021 when the agency named Oki Mek as its first CAIO. Singleton replaced Mek as the department’s top AI official in March 2022.

How the Biden administration is tackling diversity in federal AI hiring
https://fedscoop.com/how-the-biden-administration-is-tackling-diversity-in-federal-ai-hiring/ | Thu, 16 May 2024
The pool of potential AI workers could pose a challenge to the administration’s efforts to build a diverse workforce to responsibly manage artificial intelligence.

The Biden administration’s plan to bolster the federal civilian workforce with more than 500 artificial intelligence professionals by the end of fiscal year 2025 could face a challenge when it comes to another one of its priorities: promoting a workforce that looks like America.

While data is limited, the broader AI workforce and pipeline appear to have the same demographic underrepresentation issues that STEM careers experience, lacking diversity in terms of gender, race and ethnicity. And just like the private sector, the government has historically struggled with diversity in STEM roles.

Aware of that landscape, the Biden administration says it’s making efforts to promote diversity in AI hiring, including encouraging agencies to target their outreach for open positions, underscoring the need for “AI-enabling” jobs, and engaging with groups aimed at diversifying technologists. Ultimately, what hangs in the balance of those actions is having a workforce that will bring a variety of experiences and perspectives to the table when managing the application of the booming technology — something the administration, experts, and advocates have stressed.

“If we don’t have a diverse group of people building something that needs to serve a larger group of people, we’re going to do ourselves a disservice, and there’s going to be a lot of unhappy people that can’t benefit from something that should be able to be accessible to all,” said Lisa Mae Brunson, founder of Wonder Women Tech, an organization aimed at helping advance women, people of color, and other underrepresented communities in tech and science fields. 

Already, the federal government is hiring for artificial intelligence positions and seeing interest in open roles. Since President Joe Biden’s October executive order on AI, more than 150 people have been hired in AI and AI-enabling roles, according to a report to the White House by the AI Tech and Talent Task Force. As of March, applications for AI and AI-enabling roles in 2024 have doubled when compared to similar periods in the previous two years, the report said.

That report also underscored the need for diversity, noting that the task force has “prioritized recruiting from a diverse pool of qualified candidates,” consistent with previous Biden executive orders that established the White House Gender Policy Council and outlined actions to promote diversity in federal government hiring.

According to the task force, those efforts recognize “the need for technical experts who can work to mitigate bias in AI systems and the overall underrepresentation of women, people of color, first-generation professionals and immigrants, individuals with disabilities, and LGBTQI+ individuals in the STEM field as a whole.”

Active recruiting 

As AI hiring efforts move forward, officials are stressing the importance of recruitment. 

Kyleigh Russ, a senior adviser to the Office of Personnel Management’s deputy director, told FedScoop the administration is trying to get away from a passive “post and pray” method of hiring — meaning the job gets posted and agencies hope the right person applies. Instead, agencies are encouraged to shift to “active recruiting.” 

Often the volume of applications isn’t the problem for federal government positions, Russ said, but there is a desire to make sure the right people and a diverse group of people are applying.

Active recruiting could mean reaching out to someone on LinkedIn, recruiting directly from minority-serving institutions, or engaging in events like the administration’s recent virtual job fair. 

Russ described the push for active recruiting as a “change in practice” and said OPM is working on a training program that will address active recruiting. That program, which it’s collaborating on with the U.S. Digital Service and the Office of Performance and Personnel Management, will be aimed at teaching human resources how to recruit and hire technologists, as it’s a specialized field with “unique challenges,” Russ said. 

During a panel about women in AI last month at Scoop News Group’s AI Talks, USDS Administrator Mina Hsiang pointed to the concept of telling stories about use cases and problems they’re trying to solve as a tool for hiring. 

“Different people want to solve different problems that they see in their communities or in their lives,” Hsiang said. “And so the more that we can connect this to problems that people care about, and show how these are relevant pieces of that, the more people will be motivated to sort of move into those fields.”

Hiring a diverse federal workforce across the government has been an important issue for the Biden administration since its early days. In June 2021, the White House issued an executive order to advance diversity, equity and inclusion in the federal workforce. That order directed agencies to assess the state of diversity, equity and inclusion in their workforces, and took steps to advance things like pay equity. 

AI-enabling

The administration is also highlighting the difference between AI and AI-enabling jobs; the latter category includes less technical roles and broadens the pool of candidates. 

Roles that fall into the enabling category include things like data scientists, data analysts, and technical recruiters, Russ said. She noted that the administration has been stressing that it’s looking for both categories of roles in its recruiting campaign and specifically with the recent Tech to Gov job fair. 

That April 18 virtual fair, which is similar to others Tech to Gov has held before, yielded registrations from over 1,300 people representing all 50 states, according to numbers provided by the nonpartisan and nonprofit Tech Talent Project that coordinates the Tech to Gov coalition. The event focused on senior-level technologist roles at the roughly 15 federal agencies and four state agencies that participated. 

Jennifer Anastasoff, executive director of the Tech Talent Project, similarly underscored that both AI and AI-enabling roles are needed. For government hires, Anastasoff said it isn’t required “that every one of the folks who’s inside is someone who has deep expertise in the most technical of technical AI.”

“What we need are folks who can really help make sure that all of our systems — technically, data and otherwise — are really focused on the people who are supposed to be receiving those services,” said Anastasoff, who was a founding member of USDS. 

Anastasoff said the administration’s work with Tech to Gov shows a “level of commitment” to diversity in the technology workforce, as the coalition’s members are interested in that issue. Tech to Gov’s members include organizations like the U.S. Digital Response, Coding it Forward, and AnitaB.org.

There’s also more work planned with groups trying to diversify tech. Deputy Federal Chief Information Officer Drew Myklegard told FedScoop the administration is planning a hiring push at this year’s Grace Hopper Celebration, a conference for women and non-binary people in technology that’s organized by AnitaB.org.

“It’s 30,000 individuals that come together who are excited, young, extremely diverse,” Myklegard said, “and we think we have a very compelling pitch why they should come and work for the government in AI.”

Additionally, there’s action being taken to support a diverse pipeline of AI professionals outside government. The National Science Foundation, for example, has a program targeted at diversifying the AI research community, including funding research and education at minority-serving institutions. Biden’s AI executive order directed NSF to continue its support of AI-related education and workforce development in an effort to “foster a diverse AI-ready workforce.” 

“We know that the existing research institutions, and some of the other institutions, are building curriculum, but this curriculum has to be everywhere because talent and ideas are anxious to engage, and that’s a deep commitment from NSF,” Sethuraman Panchanathan, the agency’s director, told FedScoop.

Diversity data

The growth of the AI workforce comes as STEM careers more broadly have historically struggled with diversity — both in the private sector and the federal government.

The U.S. Equal Employment Opportunity Commission, for example, found that women made up less than 30% of federal STEM jobs in fiscal year 2019. A November 2023 report by the Union of Concerned Scientists found that while the shares of scientists in the federal government grew more racially and ethnically diverse between 2017 and 2022, there were decreases in certain groups and inequities were still present in the STEM workforce at specific agencies. 

When it comes to the federal AI workforce specifically, there aren’t yet comparable figures. The government, through OPM, is still in the process of getting a head count of federal AI and AI-enabling employees. A recent publication from OPM that describes and defines those AI roles will aid efforts to get a sense of that workforce within the government, and Russ said the count will likely include demographic data.

Data on the AI workforce is a challenge outside of government as well. Nestor Maslej, a research manager at Stanford’s Institute for Human-Centered Artificial Intelligence who manages the AI Index, said there isn’t as much data on diversity in AI compared to economic or technical performance data, and emphasized the need to address that.

“Although things are getting better, we really would want to kind of create a world in which there is more data — there is much more reporting,” Maslej said. “Because I think data is the first step in actually understanding what’s going on, what the landscape is like, and what kind of changes are required.” 

Stanford’s most recent AI Index report, for example, uses data on computer science graduates to paint a picture of the AI workforce pipeline. That data shows that men represent roughly 3 in 4 bachelor’s, master’s, and PhD computer science graduates in North America. Those students are predominantly white, though Asian graduates also make up a substantial portion of each degree type.

If people are able to see that the government has a diverse and representative AI workforce, Maslej said it could generate more confidence from the public in its regulation of that technology.

Looking forward

While the hiring push is still in its early stages, there are some suggestions on how to improve efforts.

Wonder Women Tech’s Brunson said she’d like to see the administration be more vocal about a commitment to diversity with its AI hires, especially as the tech industry has seen a rollback of some diversity, equity and inclusion initiatives.

Brunson said she now doesn’t have the resources to be able to tell people looking for jobs where to go, and many people who are interested are trying to teach themselves about AI. “Where is there an opportunity … to train up these diverse candidates so that the future of AI talent looks different than what it looks like today?” Brunson said. 

But there is also optimism that diverse hiring is achievable. Seth Dobrin, founder and CEO of Qantm AI and the author of a forthcoming book on AI strategy, talent and culture, said that while the talent pool of people building AI models isn’t particularly diverse, the pool that the Biden administration will likely hire from is separate from that. He said that in his experience “it’s not as bleak as some of these studies show.” 

Dobrin, who was IBM’s first global chief AI officer, emphasized the importance of intentionally crafting job postings and descriptions so they are more inclusive of diverse candidates. 

“It’s not hiring for a lowest common denominator,” he said. “It’s making sure that you craft your job descriptions appropriately, that you don’t interview until you have a diverse pool of candidates, and then you hire the best person from that pool.”

FedScoop reporter Caroline Nihill contributed to this story.

NSF, Energy announce first 35 projects to access National AI Research Resource pilot
https://fedscoop.com/nsf-energy-announce-first-projects-for-nairr-pilot/ | Mon, 06 May 2024
The projects will get computational time through the NAIRR pilot program, which is meant to provide students and researchers with access to AI resources needed for their work.

The National Science Foundation and the Department of Energy on Monday announced the first 35 projects to access the pilot for the National AI Research Resource, allowing computational time for a variety of investigations and studies.

The projects range from research into language model safety and synthetic data generation for privacy, to developing a model for aquatic sciences and using AI for identifying agricultural pests, according to a release from the NSF. Of those projects, 27 will be supported on NSF-funded advanced computing systems and eight projects will have access to those supported by DOE, including the Summit supercomputer at Oak Ridge National Laboratory.

“You will see among these 35 projects [an] unbelievable span in terms of geography, in terms of ideas, core ideas, as well as application interests,” NSF Director Sethuraman Panchanathan said at a White House event. 

The NAIRR, which launched earlier this year in pilot form as part of President Joe Biden’s executive order on AI, is aimed at providing researchers with the resources needed to carry out their work on AI by providing access to advanced computing, data, software, and AI models.

The pilot is composed of contributions from multiple federal agencies and private sector partners, including Microsoft, Amazon Web Services, NVIDIA, Intel, and IBM. Those contributions include access to supercomputers; datasets from NASA and the National Oceanic and Atmospheric Administration; and access to models from OpenAI, Anthropic, and Meta.

In addition to the project awards, NSF also announced the NAIRR pilot has opened the next opportunity to apply for access to research resources, including cloud computing platforms and access to foundation models, according to the release. That includes resources from nongovernmental partners and NSF-supported platforms.

Panchanathan described the appetite for the resource as “pretty strong,” noting that 50 projects have received positive reviews. But he said there aren’t yet enough resources to support all 50. “There is so much need, and so we need more resources to be brought to the table,” Panchanathan said.

While the pilot continues, there are also bipartisan efforts in Congress to codify and fully fund a full-scale NAIRR. Panchanathan and Office of Science and Technology Policy Director Arati Prabhakar underscored the need for that legislation Monday.

“Fully establishing NAIRR is going to take significant funding, and we’re happy to see that Congress has initiated action,” Prabhakar said, adding that the White House is hopeful “that full funding will be achieved.”

OPM issues generative AI guidance, competency model for AI roles required by Biden order
https://fedscoop.com/opm-issued-generative-ai-guidance-ai-competency-model/ | Mon, 29 Apr 2024
The guidance was among several actions required of the federal workforce agency within 180 days of President Joe Biden’s executive order on the technology.

Guidance on generative AI and a competency model for AI roles are among the latest actions that the Office of Personnel Management has completed under President Joe Biden’s executive order on the technology, an agency spokesperson said.

In a statement provided to FedScoop ahead of the Monday announcement, OPM disclosed it would issue guidance on the use of generative AI tools for the federal workforce; a competency model and skills-based hiring guidance for AI positions to help agencies find people with the skills needed for those roles; and an AI competency model specifically for civil engineering.

All of those actions were required of the agency by the 180-day mark of the October executive order, a deadline that fell over the weekend. The spokesperson also noted that the agency established an interagency working group for AI, as required by the order. 

OPM was given multiple actions under the sweeping order, most of which were aimed at helping agencies attract and retain a federal workforce prepared to address AI. That role is important as the government is working to rapidly hire for 100 AI positions by this summer. The latest actions from OPM give federal agencies a better roadmap for hiring workers in those positions.

They also add to OPM’s existing work under the order, which has included authorizing direct hire authority for AI-related positions and outlining incentives for attracting and retaining AI workers in the federal government. 

Notably, OPM’s action on the responsible use of generative AI comes as agencies across the government have been developing their own unique approaches to those tools for their workforces. Those policies have ranged from banning the use of certain third-party tools to allowing use across the workforce with guidelines. 

The OPM guidance, which was posted publicly Monday, outlines risks and benefits of the technology along with best practices for implementing it in work. 

Though it ultimately directs employees to consult their agency’s policy, the guidance provides examples of uses and specific considerations for those uses, such as summarizing notes and transcripts, drafting content, and using generative tools for software and code development. 

“GenAI has the potential to improve the way the federal workforce delivers results for the public,” the guidance says. “Federal employees can leverage GenAI to enhance creativity, efficiency, and productivity. Federal agencies and employees are encouraged to consider how best to use these tools to fulfill their missions.”

Under the order, OPM was required to create that guidance in consultation with the Office of Management and Budget. 

In addition to the competency models and guidance, the OPM spokesperson also disclosed that the agency issued an AI classification policy and talent acquisition guidance. While those actions support the rest of OPM’s work, they weren’t required by Biden’s executive order but rather the 2020 AI in Government Act. The spokesperson described those actions as addressing “position classification, job evaluation, qualifications, and assessments for AI positions.”

OPM is seeking feedback on that policy and guidance in a 30-day comment period ending May 29. 

This story was updated April 29, 2024, with additional information and links from OPM released Monday.

ACLU seeks AI records from NSA, Defense Department in new lawsuit
https://fedscoop.com/aclu-seeks-ai-records-from-nsa-defense-department/ | Fri, 26 Apr 2024
The complaint, filed under the Freedom of Information Act, aims to compel the release of documents related to NSA’s use of artificial intelligence.

The American Civil Liberties Union is seeking the disclosure of records related to the National Security Agency’s use of artificial intelligence, as the Biden administration emphasizes transparency surrounding use of the technology in the government.

In a Thursday complaint, the ACLU asked the U.S. District Court for the Southern District of New York to compel the release of documents detailing the agency’s integration of the technology and plans for the future. Despite the agency’s public comments about its AI efforts and past pledges to be transparent, those documents haven’t yet been released, the ACLU argued.

“Immediate disclosure of these records is critical to allowing members of the public to participate in the development and adoption of appropriate safeguards for these society-altering systems,” the ACLU said in its filing, which was first reported by Bloomberg Law.

In addition to the NSA, the complaint also names the Department of Defense and the Office of the Director of National Intelligence — which oversee the spy agency — as defendants.

The Freedom of Information Act lawsuit comes as the Biden administration has underscored the need for transparency in the use of AI by the government. In an Office of Management and Budget memo released last month, the administration expanded what civilian agencies are required to report in their annual, public AI use case inventories, adding requirements for safety- and rights-impacting uses. Certain intelligence community agencies and DOD, however, continue to be exempt from that process.

“Transparency is one of the core values animating White House efforts to create rules and guidelines for the federal government’s use of AI, but exemptions for national security threaten to obscure some of the most high-risk uses of AI,” Patrick Toomey, deputy director of the ACLU’s National Security Project who is representing the civil rights organization, told FedScoop.

Toomey said the NSA has described itself as a leader among the intelligence agencies in the development and deployment of AI, and officials have noted that it’s using the technology to gather information on foreign governments, assist with language processing, and monitor networks for cybersecurity threats. 

“But unfortunately, that’s about all we know,” Toomey said. “And as the NSA integrates AI into some of its most profound decisions, it’s left the public in the dark about how it uses AI and what safeguards, if any, are in place to protect everyday Americans and others around the world whose privacy hangs in the balance.”

The complaint pointed to several actions the NSA has taken on AI, including a joint evaluation of the agency’s integration of AI, conducted by its inspector general and DOD, and studies and roadmaps NSA has completed about its use of the technology.

The specific documents being requested include an October 2022 report from DOD and NSA titled “Joint Evaluation of the National Security Agency’s Integration of Artificial Intelligence,” several roadmap documents created by NSA starting in January 2023, and documents related to the agency’s proposed uses of AI and machine learning created on or after January 2022.

The NSA didn’t immediately respond to FedScoop’s request for comment on the lawsuit. 

While the intelligence community is exempt from the inventory process that other civilian agencies must complete, President Joe Biden’s October 2023 executive order on AI required the development of a memo on the governance of AI that’s used for national security, military or intelligence. That memo is required to be produced 270 days after the issuance of the order. 

Toomey said the ACLU is hopeful that memo “will incorporate some of the very important transparency principles that the Biden administration and even the intelligence agencies have publicly committed themselves to.”

DOJ seeks public input on AI use in criminal justice system
https://fedscoop.com/doj-seeks-input-on-criminal-justice-ai/ | Wed, 24 Apr 2024
The department’s research, development and evaluation arm will use the information as it puts together a report on AI in the criminal justice system due later this year.

The Justice Department’s National Institute of Justice is looking for public input on the use of artificial intelligence in the criminal justice system.

In a document posted for public inspection on the Federal Register Wednesday, the research, development and evaluation arm of the department said it’s seeking feedback to “inform a report that addresses the use of artificial intelligence (AI) in the criminal justice system.” Those comments are due 30 days after the document is published.

That report is among the actions intended to strengthen AI and civil rights that President Joe Biden included in his October 2023 executive order on the technology. According to the order, its aim is to “promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all.”

Ultimately, the report is required to address the use of the technology throughout the criminal justice system — from sentencing and parole to police surveillance and crime forecasting — as well as identify areas where AI could benefit law enforcement, outline recommended best practices, and make recommendations to the White House on additional actions. 

The DOJ must also work with the Homeland Security secretary and the director of the Office of Science and Technology Policy on that report, and it’s due 365 days after the order was issued.

White House hopeful ‘more maturity’ of data collection will improve AI inventories
https://fedscoop.com/white-house-hopes-data-collection-maturity-improves-ai-inventories/ | Mon, 22 Apr 2024
Communication and skills for collecting and sorting the information in artificial intelligence inventories have gotten better, Deputy Federal CIO Drew Myklegard told FedScoop.

An expansion of the process for agencies’ AI use case inventories outlined in the Office of Management and Budget’s recent memo will benefit from “clearer directions and more maturity of collecting data,” Deputy Federal Chief Information Officer Drew Myklegard said.

Federal CIO Clare Martorana has “imbued” the idea of “iterative policy” within administration officials, Myklegard said in an interview Thursday with FedScoop at Scoop News Group’s AITalks. “We’re not going to get it right the first time.” 

As the inventories, which were established under a Trump-era executive order, enter their third year of collection, Myklegard said agencies have a better idea of what they’re buying, and communication and the skills for collecting and sorting the data are improving. 

On the same day OMB released its recent memo outlining a governance strategy for artificial intelligence in the federal government, it also released new, expansive draft guidance for agencies’ 2024 AI use case inventories. 

Those inventories have, in the past, suffered from inconsistencies and even errors. While they’re required to be published publicly and annually by certain agencies, the disclosures have varied widely in terms of things like the type of information contained, format, and collection method.

Now, the Biden administration is seeking to change that. Under the draft, information about each use case would now be collected via a form, and agencies would be required to post a “machine-readable” comma-separated value (CSV) inventory of their public uses to their websites, among other changes. The White House is currently soliciting feedback on that draft guidance, though a deadline for those comments isn’t clear.
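
For illustration only, here is a minimal sketch of what producing such a machine-readable CSV inventory might look like; the file name, column names and example row are hypothetical assumptions for this sketch, not the schema that OMB’s draft guidance actually prescribes.

```python
import csv

# Hypothetical columns -- the real schema is defined by OMB's draft inventory guidance.
FIELDS = ["use_case_name", "agency", "purpose", "stage", "rights_impacting", "safety_impacting"]

rows = [
    {
        "use_case_name": "Chatbot for routine benefits questions",
        "agency": "Example Agency",
        "purpose": "Answer common public inquiries",
        "stage": "In production",
        "rights_impacting": "No",
        "safety_impacting": "No",
    },
]

# Write the inventory as plain CSV so any downstream tool can parse it.
with open("ai_use_case_inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```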

In the meantime, agencies are getting to work on a host of other requirements OMB outlined in the new AI governance memo. According to Myklegard, the volume of comments was the highest the administration had seen on an OMB memo.

“We were really surprised. It’s the most comments we’ve received from any memo that we’ve put out,” Myklegard said during remarks on stage at AI Talks. He added that “between those we really feel like we were able to hear you.”

The memo received roughly 196 public comments, according to Regulations.gov. By comparison, OMB’s previous guidance on the Federal Risk and Authorization Management Program (FedRAMP) process drew 161.

Among the changes in the final version of that memo were several public disclosure requirements, including requiring civilian agencies and the Defense Department to report aggregate metrics about AI uses not published in an inventory, and requiring agencies to report information about the new determinations and waivers they can issue for uses that are assumed to be rights- and safety-impacting under the memo. 

Myklegard told FedScoop those changes are an example of the iterative process that OMB is trying to take. When OMB seeks public input on memos, which Myklegard said hasn’t happened often in the past, “we realize areas in our memos that we either missed and need to address, or need to clarify more, and that was just this case.”

Another addition to the memo was encouragement for agencies to name an “AI Talent Lead.” That individual will serve “for at least the duration of the AI Talent Task Force” and be responsible for tracking AI hiring in their agency, providing data to the Office of Personnel Management and OMB, and reporting to agency leadership, according to the memo.

In response to a question about how that role came about, Myklegard pointed to the White House chief of staff’s desire to look for talent internally and the U.S. Digital Service’s leadership on that effort.

“It just got to a point that we felt we needed to formalize and … give agencies the ability to put that position out,” Myklegard said. The administration hopes “there’s downstream effects” of things like shared position descriptions (PDs), he added.

He specifically pointed to the Department of Homeland Security’s hiring efforts as an example of what the administration would like to see governmentwide. CIO Eric Hysen has already hired multiple people with “good AI-specific skillsets” from the commercial sector, which is typically “unheard of” in government, he said.

In February, DHS launched a unique effort to hire 50 AI and machine learning experts and establish an AI Corps. The Biden administration has since said it plans to hire 100 AI professionals across the government by this summer. 

“We’re hoping that every agency can look to what Eric and his team did around hiring and adopt those same skills and best practices, because frankly, it’s really hard,” Myklegard said. 

Department of Commerce announces US, UK AI safety partnership
https://fedscoop.com/us-uk-announce-ai-safety-partnership/ | Tue, 02 Apr 2024
AI safety bodies in the U.S. and the U.K. will work together on AI safety research, evaluations and guidance under the partnership.

The U.S. and U.K. on Monday signed an agreement to have their AI safety institutes work together on research, evaluations and guidance, furthering the Biden administration’s commitment to work with other countries on regulating the technology.

Under a memorandum of understanding signed by Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, both countries will work “to align their scientific approaches” and “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents,” according to a release from the Department of Commerce. The agreement is effective immediately.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them,” Raimondo said in a statement. “Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

The announcement comes as the Biden administration has emphasized its desire to work with other countries on AI. The administration’s October executive order on the technology, for example, directed the Department of Commerce to establish international AI frameworks.  

AI safety institutes from both countries have plans to create “a common approach to AI safety testing.” They also plan to conduct “at least one joint testing exercise on a publicly accessible model” and “tap into a collective pool of expertise by exploring personnel exchanges between the Institutes,” according to the release. 

The Department of Commerce’s National Institute of Standards and Technology houses the AI Safety Institute in the U.S. That body named its leadership and launched a consortium with participation from over 200 stakeholders in February. 

Partnering with the U.K. likely isn’t the end of the collaboration. According to Commerce’s announcement, the two countries “have also committed to develop similar partnerships with other countries to promote AI safety across the globe.”

“We have always been clear that ensuring the safe development of AI is a shared global issue,” the U.K.’s Donelan said. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

AI talent role, releasing code, deadline extension among additions in OMB memo
https://fedscoop.com/ai-talent-role-releasing-code-deadline-extension-among-additions-in-omb-memo/ | Fri, 29 Mar 2024
Requiring the release of custom AI code, designating an “AI Talent Lead,” and extending deadlines were among the changes made to the final version of a White House memo on AI governance.

Additions and edits to the Office of Management and Budget’s final memo on AI governance create additional public disclosure requirements, provide more compliance time to federal agencies, and establish a new role for talent.

The policy, released Thursday, corresponds with President Joe Biden’s October executive order on AI and establishes a framework for federal agency use and management of the technology. Among the requirements, agencies must now vet their AI uses for risks, expand what they share in their annual AI use case inventories, and select a chief AI officer.

While the final version largely tracks with the draft version that OMB published for public comment in November, there were some notable changes. Here are six of the most interesting alterations and additions to the policy: 

1. Added compliance time: The new policy changes the deadline for agencies to be in compliance with risk management practices from Aug. 1 to Dec. 1, giving agencies four more months than the draft version. The requirement states that agencies must implement risk management practices or stop using safety- or rights-impacting AI tools until the agency is in compliance. 

In a document published Thursday responding to comments on the draft policy, OMB said it received feedback that the August deadline was “too aggressive” and that timeline didn’t account for action OMB is expected to take later this year on AI acquisition. 

2. Sharing code, data: The final memo adds an entirely new section requiring agencies to share custom-developed AI code and model information on an ongoing basis. Agencies must “release and maintain that code as open source software on a public repository” under the memo, unless sharing it would pose certain risks or it’s restricted by law, regulation, or contract.

Additionally, the memo states that agencies must share and release data used to test AI if it’s considered a “data asset” under the Open, Public, Electronic and Necessary (OPEN) Government Data Act, a federal law that requires such information to be published in a machine-readable format.

Agencies are required to share whatever information possible, even if a portion of the information can’t be released publicly. The policy further states that agencies should, where they’re able, share resources that can’t be released without restrictions through federally operated means that allow controlled access, like the National AI Research Resource (NAIRR).

3. AI Talent Lead: The policy also states agencies should designate an “AI Talent Lead,” which didn’t appear in the draft. That official, “for at least the duration of the AI Talent Task Force, will be accountable for reporting to agency leadership, tracking AI hiring across the agency, and providing data to [the Office of Personnel Management] and OMB on hiring needs and progress,” the memo says. 

The task force, which was established under Biden’s AI executive order, will provide that official with “engagement opportunities to enhance their AI hiring practices and to drive impact through collaboration across agencies.” The memo also stipulates that agencies must follow hiring practices in OPM’s forthcoming AI and Tech Hiring Playbook.

Biden’s order placed an emphasis on AI hiring in the federal government, and so far OPM has authorized direct-hire authority for AI roles and outlined incentives for attracting and retaining AI talent. 

4. Aggregate metrics: Agencies and the Department of Defense will both have to “report and release aggregate metrics” for AI uses that aren’t included in their public inventory of use cases under the new memo. The draft version included only the DOD in that requirement, but the version released Thursday added federal agencies.

Those disclosures, which will be annual, will provide information about how many of the uses are rights- and safety-impacting and their compliance with the standards for those kinds of uses outlined in the memo. 

The use case inventories, which were established by a Trump-era executive order and later enshrined into federal statute, have so far lacked consistency across agencies. The memo and corresponding draft guidance for the 2024 inventories seek to enhance and expand those reporting requirements.

5. Safety, rights determinations: The memo also added a new requirement that agencies have to validate the determinations and waivers that CAIOs make on safety- and rights-impacting use cases, and publish a summary of those decisions on an annual basis. 

Under the policy, CAIOs can determine that an AI application presumed to be safety- or rights-impacting — which includes a wide array of uses such as election security and conducting biometric identification — doesn’t match the memo’s definitions for what should be considered safety- or rights-impacting. CAIOs may also waive certain requirements for those uses.

While the draft stipulated that agencies should report lists of rights- and safety-impacting uses to OMB, the final memo instead requires the annual validation of those determinations and waivers and public summaries.

In its response to comments, OMB said it made the update to address concerns from some commenters that CAIOs “would hold too much discretion to waive the applicability of risk management requirements to particular AI use cases.” 

6. Procurement considerations: Three procurement recommendations related to test data, biometric identification, and sustainability were also added to the final memo. 

On test data, OMB recommends that agencies ensure developers and vendors aren’t training an AI system on the same test data the agency might use to evaluate it. For biometrics, the memo also encourages agencies to assess risks and request documentation on accuracy when procuring AI systems that use identifiers such as faces and fingerprints. 

And finally on sustainability, the memo includes a recommendation that agencies consider the environmental impact of “computationally intensive” AI systems. “This should include considering the carbon emissions and resource consumption from supporting data centers,” the memo said. That addition was a response to commenters who wanted the memo to expand risk assessment requirements to include environmental considerations, according to OMB.

White House unveils AI governance policy focused on risks, transparency
https://fedscoop.com/white-house-unveils-ai-governance-policy/ | Thu, 28 Mar 2024
The Office of Management and Budget memo released Thursday finalizes draft guidance issued after Biden’s artificial intelligence executive order.

The White House released its much-anticipated artificial intelligence governance policy Thursday, establishing a roadmap for federal agencies’ management and usage of the budding technology.

The 34-page memo from Office of Management and Budget Director Shalanda D. Young corresponds with President Joe Biden’s October AI executive order, providing more detailed guardrails and next steps for agencies. It finalizes a draft of the policy that was released for public comment in November. 

“This policy is a major milestone for President Biden’s landmark AI executive order, and it demonstrates that the federal government is leading by example in its own use of AI,” Young said in a call with reporters before the release of the memo. 

Among other things, the memo mandates that agencies establish guardrails for AI uses that could impact Americans’ rights or safety, expands what agencies share in their AI use case inventories, and establishes a requirement for agencies to designate chief AI officers to oversee their use of the technology. 

Vice President Kamala Harris highlighted those three areas on the call with the press, noting those “new requirements have been shaped in consultation with leaders from across the public and private sectors, from computer scientists to civil rights leaders, to legal scholars and business leaders.”

“President Biden and I intend that these domestic policies will serve as a model for global action,” Harris said.

In addition to the memo, Young announced that the National AI Talent Surge established under the order will hire “at least 100 AI professionals into government by this summer.” She also said OMB will take action later this year on federal procurement of AI and is releasing a request for information on that work.

Under the policy, agencies are required to evaluate and monitor how AI could impact the public and mitigate the risk of discrimination. That includes things like allowing people at the airport to opt out of the Transportation Security Administration’s use of facial recognition “without any delay or losing their place in line,” or requiring a human to oversee the use of AI in health care diagnostics, according to a fact sheet provided by OMB.

Additionally, the policy expands the existing public, annual disclosures in which agencies inventory their AI uses. Those inventories must now identify whether a use is rights- or safety-impacting. The Thursday memo also requires agencies to submit aggregate metrics about use cases that aren’t required to be included in the inventory. In the draft, the requirement for aggregate metrics applied only to the Department of Defense.
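
As a rough, hedged sketch of what reporting aggregate metrics could look like mechanically, the snippet below counts rights- and safety-impacting entries in a hypothetical inventory file like the one sketched earlier; the memo itself, not this example, defines what agencies actually have to report.

```python
import csv

# Hypothetical file and column names; the real schema comes from OMB's guidance.
counts = {"total": 0, "rights_impacting": 0, "safety_impacting": 0}

with open("ai_use_case_inventory.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts["total"] += 1
        if row.get("rights_impacting", "").strip().lower() == "yes":
            counts["rights_impacting"] += 1
        if row.get("safety_impacting", "").strip().lower() == "yes":
            counts["safety_impacting"] += 1

print(counts)  # e.g. {'total': 1, 'rights_impacting': 0, 'safety_impacting': 0}
```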

The policy also establishes the requirement for agencies to designate within 60 days of the memo’s publication a CAIO to oversee and manage AI uses. Many agencies have already started naming people for those roles, which have tended to be chief information, data and technology officials. 

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said of the CAIO role.
