chief AI officer Archives | FedScoop
https://fedscoop.com/tag/chief-ai-officer/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

Labor Department has ‘a leg up’ on artificial intelligence, new CAIO says
https://fedscoop.com/dol-caio-leg-up-ai-modernization/
Fri, 07 Jun 2024 20:34:29 +0000

Though the agency isn’t pursuing a “big-bang approach” when it comes to AI, Mangala Kuppa says DOL is poised to scale those systems quickly.

The post Labor Department has ‘a leg up’ on artificial intelligence, new CAIO says appeared first on FedScoop.

A shout-out from the White House doesn’t happen to federal agencies every day, but the Department of Labor got a turn in March when it was lauded in a fact sheet for “leading by example” with its work on principles to mitigate artificial intelligence’s potential harms to employees. 

Mangala Kuppa, who took over as DOL’s chief AI officer this week after previously serving as its deputy CAIO, believes the agency has even more to be confident about when it comes to its work on the technology, possessing a “leg up” on scaling AI quickly.

In an interview with FedScoop, Kuppa pointed to DOL’s previous efforts to modernize internal operations and customer-facing services as part of the department’s journey to implement emerging technologies like AI. Having foundational building blocks and existing infrastructure, along with existing AI applications, has made it “easier” for the agency to scale up, she said. 

“It’s not a ‘big bang’ approach,” said Kuppa, who also serves as DOL’s chief technology officer. “Another aspect that we take very seriously in modernizing is [to] take this opportunity to not just update the technology, but also take this opportunity to re-engineer the business process to help the public.” 

Kuppa pointed to an internal shared services initiative that designated the agency’s Office of the Chief Information Officer to be a “shared services provider for all Departmental IT services.”  That process, Kuppa said, has allowed the department to keep an inventory of all systems and technologies and understand where the legacy systems or opportunities for improvement might exist.

“Using that methodology, we’ve been looking at all high-risk systems, because maybe the technology is very legacy and outdated,” Kuppa said. “We’ve been using that methodology to start those modernization initiatives.”

By considering the age of the technology, the operations burden, security vulnerabilities, regulation compliance and other parameters, DOL came up with a methodology that scores each mission system to determine if it is a candidate for modernization. The agency then looks at the scores on a consistent basis and revises based on new information that becomes available.

These systems can be major: one example scored for modernization belongs to DOL’s Employment and Training Administration, which provides labor certifications when a company files to hire an immigrant workforce.

“Being an immigrant, I wasn’t aware DOL had a hand in my immigration journey there,” Kuppa said. 

The Technology Modernization Fund has played an “instrumental” role in the department “finding the resources to modernize,” Kuppa said.

She gave the example of using TMF funds to expedite temporary visa applications, which is expected to save 45 days of cycle time for processing labor certification applications.

According to a case study on the TMF site, that project contributed to $1.9 million in annual cost savings, and a key part of the innovation allowed the application forms to auto-populate with the previous year’s information.

“Usually all immigrants eventually start filing for permanent visa applications,” Kuppa said. “Again, you have to repeat the process of labor certification, and so we had two different systems not communicating with each other.”

For Kuppa, modernization is ultimately an exercise in reimagining where new technologies can ultimately be most helpful.

“We have great partnership, we work very closely with our programs and then we have these dialogues every day, in terms of the system’s development lifecycle,” she said. “And that’s how we approach modernization.”

VA’s technical infrastructure is ‘on pretty good footing,’ CAIO and CTO says
https://fedscoop.com/vas-technical-infrastructure-is-on-pretty-good-footing-caio-and-cto-says/
Tue, 04 Jun 2024 20:39:56 +0000

In an interview with FedScoop, Charles Worthington discusses the agency’s AI and modernization efforts amid scrutiny from lawmakers and the threat of budget cuts.

The post VA’s technical infrastructure is ‘on pretty good footing,’ CAIO and CTO says appeared first on FedScoop.

Working under the threat of technology-related budget cuts that have elicited concern from both sides of the aisle, the Department of Veterans Affairs has managed to make progress on several tech priorities, the agency’s artificial intelligence chief said last week.

In an interview with FedScoop, Charles Worthington, the VA’s CAIO and CTO, said the agency is engaged in targeted hiring for AI experts while also sustaining its existing modernization efforts. “I wish we could do more,” he said.

While Worthington wrestles with the proposed fiscal year 2025 funding reductions, the VA’s Office of Information and Technology also finds itself in the legislative crosshairs over system modernization upgrades, a supposed lack of AI disclosures, inadequate tech contractor sanctions, and ongoing scrutiny of its electronic health record modernization initiative with Oracle Cerner.

Worthington spoke to FedScoop about the VA’s embrace of AI, the status of its modernization push, how it is handling budget uncertainty and more.

Editor’s note: The transcript has been edited for clarity and length. 

FedScoop: I know that you’ve started your role as the chief AI officer at the Department of Veterans Affairs. And I wanted to circle back on some stuff that we’ve seen the VA engaged with this past year. The Office of Information and Technology has appeared before Congress, where legislators have voiced their concerns about AI disclosures, inadequate contractor sanctions, budgetary pitfalls in the fiscal year 2025 budget for VA OIT and the supply chain system upgrade. What is your response to them?

Charles Worthington: I think AI represents a really big opportunity for the VA and for every agency, because it really changes what our computing systems are going to be capable of. So I think we’re all going to have to work through what that means for our existing systems over the coming years, but I think really there’s hardly any part of VA’s software infrastructure that’s going to be untouched by this change in how computer systems work and what they’re capable of. So I think it’s obviously gonna be a big focus for us and for Congress over the next couple of years. 

FS: I want to take a step back and focus on the foundational infrastructure challenges that the VA has been facing. Do you attribute that to the emerging technologies’ need for more advanced computing power? What does that look like?

CW: I think overall, VA’s technical infrastructure is actually on a pretty good footing. We’ve spent a lot of time in the past 10 years with the migration to the cloud and with really leaning into using a lot of leading commercial products in the software-as-a-service model where that makes sense. So, by and large, I think we’ve done a good job of bringing our systems up to standard. I think it’s always a challenge in the VA and in government to balance the priorities of modernization and taking advantage of new capabilities with the priorities of running everything that you already have.

One of the unique challenges of this moment in time is that almost every aspect of the VA’s operations depends on technology in some way. There’s just a lot of stuff to maintain; I think we have nearly a thousand systems in operation. And then obviously, with something like AI, there are a lot of new ideas about how we could use technology in even more ways to further our mission.

FS: In light of these voiced concerns from legislators, as you progress into your role of chief AI officer, how do you anticipate the agency will be able to use emerging technologies like AI to its fullest extent?

CW: I think there’s really two priorities that we have with AI right now. One is, this represents an enormous opportunity to deliver services more effectively and provide great technology services to the VA staff, because these systems are so powerful and can do so many new things. One priority is to take advantage of these technologies, really to make sure that our operations are running as effectively as possible. 

On the other hand, I think this is such a new technology category that a lot of the existing processes we have around technology governance in government don’t apply in exactly the same ways to artificial intelligence. So in a lot of ways, there are novel concerns that AI brings. … With an AI system that is, instead, taking those inputs and then generating a best guess or generating some piece of content, the way that we need to make sure that those systems are working effectively, those are still being developed. At the same time, as we’re trying to take advantage of these new capabilities, we’re also trying to build a framework that will allow us to safely use and deploy these solutions to make sure that we’re upholding the trust that veterans put in us to manage their data securely. 

FS: In what ways is the agency prioritizing AI requirements, especially from the artificial intelligence executive order that we saw last October, and maintaining a competitive edge with the knowledge that the fiscal year 2025 budget has seen a significant clawback of funds?

CW: We are investing a lot in standing up, I would say, the AI operations and governance. We have four main priorities that we’re focused on right now. One is setting up that policy framework and the governance framework for how we’re going to manage these. We have already convened our first AI governance council meeting — we’ve actually had two of them — where we’re starting to discuss how the agency is going to approach managing our inventory of AI use cases and the policies that we’ll use. 

The second priority is really focused on our workforce. We need to make sure that our VA staff have the knowledge and the skills they need to be able to use these solutions effectively and understand what they’re capable of and also their limitations. We need to be able to bring in the right sort of talent to be able to buy and build these sorts of solutions. 

Third, we’re working on our infrastructure [to] make sure that we have the technical infrastructure in place for VA to actually either build or, in some cases, just buy and run AI solutions. 

Then, finally, we have a set of high-priority use cases that we’re really leaning into. This was one of the things that was specifically called out to the VA in the executive order, which was basically to run a couple of pilots — we call them tech sprints — on AI.

FS: I would definitely love to hear some insights from you personally about some challenges you’re anticipating with artificial intelligence, especially as you’ve referenced that the VA has already been using AI.

CW: I think one of the challenges right now is that most of the AI use cases are built in a very separate way from the rest of our computing systems. So if you take a predictive model, it maybe takes a set of inputs and then generates a prediction, which is typically a number. But how to actually integrate that prediction into a system that somebody’s already using is a challenge that we see, I think, with most of these systems.

In my opinion, integrating AI with more traditional types of software is going to be one of the biggest challenges of the next 10 years. VA has got over a thousand systems and to really leverage these tools effectively, you’d ideally like to see these capabilities integrated tightly with those systems so that it’s all kind of one workflow, and it appears naturally as a way that can assist the person with the task they’re trying to achieve, as opposed to something that’s in a different window that they’ve got to flip back and forth between. 

I feel like right now, we’re in that awkward stage where most of these tools are a different window … where there’s a lot of flipping back and forth between tools and figuring out how best to integrate those AI tools with the more traditional systems. I think that’s just kind of a relatively unfigured-out problem. Especially, if you think of a place like VA, where we have a lot of legacy systems, things that have been built over the past number of decades, oftentimes updating those is not the easiest thing. So I think it really speaks to the importance of modernizing our software systems to make them easier to change, more flexible, so that we can add things like AI or just other enhancements.

NASA has a new chief AI officer
https://fedscoop.com/nasa-has-a-new-chief-ai-officer/
Mon, 13 May 2024 18:54:25 +0000

Several CDOs have now taken on the role.

The post NASA has a new chief AI officer appeared first on FedScoop.

NASA has named David Salvagnini, the space agency’s chief data officer, as its chief artificial intelligence officer, fulfilling a requirement laid out in recent White House guidance and President Joe Biden’s executive order on AI.

In a press release, NASA said that Salvagnini will help lead the agency’s work on developing AI technology, as well as its collaborations with academic institutions and other experts. Salvagnini will replace Kate Calvin, the agency’s chief scientist and former responsible AI official, in leading NASA’s efforts on the technology. 

“Artificial intelligence has been safely used at NASA for decades, and as this technology expands, it can accelerate the pace of discovery,” NASA Administrator Bill Nelson said in a statement. “It’s important that we remain at the forefront of advancement and responsible use. In this new role, David will lead NASA’s efforts to guide our agency’s responsible use of AI in the cosmos and on Earth to benefit all humanity.”  

NASA makes use of myriad forms of artificial intelligence, according to the agency’s AI inventory.

NASA’s announcement comes after several agencies have already appointed individuals to the chief AI officer role, including the National Science Foundation, the General Services Administration, and the Department of Veterans Affairs. Several others have also opted to name their chief data officers as their chief AI officers.

NSF is piloting an AI chatbot to connect people with grants
https://fedscoop.com/nsf-is-piloting-an-ai-chatbot-to-connect-people-with-grants/
Fri, 10 May 2024 19:02:00 +0000

The tool, which is the first artificial intelligence pilot of a commercial platform by NSF, is also serving as a way for the agency to pursue rapid implementation of an AI capability.

The post NSF is piloting an AI chatbot to connect people with grants appeared first on FedScoop.

The National Science Foundation is piloting a public-facing AI chatbot for grant opportunities, while simultaneously using that process to shape future implementations of the technology, the agency’s top AI official said.

The chatbot is aimed at making the process of looking for NSF grants easier, Dorothy Aronson, the agency’s chief AI officer, told FedScoop. It will provide information about grants based on inputs from users about who they are and their research, and it can answer questions about the process, as it was trained using NSF’s proposal guide, Aronson said.

Aside from testing the chatbot itself, the process has also been a test of sorts for the agency, according to Aronson. 

“The most important thing about this exercise that we’re running is that it’s not only to create the chatbot; I think that’s a nice side effect,” Aronson said. “From my perspective, it’s to experience what it’s like to do a rapid implementation … of an AI capability.”

Aronson said they’re hoping to engage the NSF community in a conversation about responsible AI, how they can do that well at the agency, and get people thinking about the future.

The pilot comes as agencies across the government are experimenting with AI. Already, the government has disclosed at least 700 use cases, and chatbots appear to be a popular use of the technology, with agencies like the Department of State and the Centers for Disease Control and Prevention recently noting they’re using such tools.

Although the chatbot is NSF’s first pilot of a commercial platform and first for a public-facing tool, it’s not the first AI use for the agency. NSF lists several use cases on its public inventory, and Aronson said the agency has developed smaller AI solutions before, such as a tool that suggests reviewers for people who work with NSF on research. 

The first three months of the pilot are wrapping up, marking the end of the development phase, according to an NSF spokesperson. Now, the agency is “beginning to widely demonstrate the pilot, gather feedback, and further train and hone the model.” NSF is working with Spatial Front, Inc., a small business contractor, on the chatbot. 

The chatbot will be particularly useful to people outside larger universities, which typically have offices dedicated to things like NSF grants, according to Aronson, who is also the chief data officer and has served as the CIO of the agency.

“This is most important to smaller universities or underrepresented communities who do not have access to large offices within their university that can help facilitate that work,” she said. 

Creating the tool has also been different from the norm for IT solutions, which typically start with what the end result will look like, Aronson said. With AI, the component is trained to give the desired answers and the interface comes after. “It’s a completely different way of working,” she said.

Aronson said the agency has put the first skin — or appearance — on the chatbot and shopped it out to customers to get feedback. Now, NSF is thinking about two directions: how to improve the chatbot further and what the next AI pilot will be, she said.

Going forward, Aronson said the agency plans to do a few pilots to find additional capabilities of the technology. “In the next one, we know we want to do something more complicated, ultimately, and we’ve broken that more complicated longer-term objective into smaller bits,” Aronson said. 

She also noted that while funding is tight this year, NSF is being “scrappy” about ways to move forward, and using the pilots to help figure out what to ask for in fiscal year 2026 so it has a “legitimate funding bucket for AI.” 

Additionally, Aronson said she’s working with the Federal CIO and CIOs at other agencies to explore the idea of “a journey map of data and AI and IT initiatives that would allow all of the federal agencies better insight into what other people are doing.” 

That journey map would allow agencies to get a picture of what others in the federal government are working on and learn about other solutions they could leverage, she said. An agency, for example, could use the map to see if another agency is developing a testbed for AI, identify extra compute power available elsewhere, or review an existing generative AI policy. 

If agencies could see “where the expertise was across the federal government, we could leverage each other’s expertise instead of each of us evolving to have that level of knowledge on our own,” Aronson said.

The CAIO’s role in driving AI success across the federal government
https://fedscoop.com/the-caios-role-in-driving-ai-success-across-the-federal-government/
Tue, 07 May 2024 18:55:12 +0000

In this commentary, former federal AI leaders Lt. Gen. Jack Shanahan and Joel Meyer share five actions newly appointed chief AI officers should take to set the stage for the successful adoption of AI.

The post The CAIO’s role in driving AI success across the federal government appeared first on FedScoop.

Artificial intelligence isn’t just a buzzword — it’s a revolution that is transforming societies and becoming the backbone of both private and public sector innovation.

While federal agencies have lagged commercial industry in recognizing AI’s potential impacts and adapting accordingly, the U.S. government is now rushing to catch up. On March 28, the White House Office of Management and Budget released its new AI governance memo as a follow-up to the October 2023 White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and federal agencies have completed all required actions to date under the executive order on schedule.

As required by the executive order, all federal agencies must now designate a Chief AI Officer (CAIO) to coordinate their agency’s use of AI, promote AI innovation in their agency, and manage risks from their agency’s use of AI. As a consequence, the government is looking for 400 CAIOs, and many federal departments and agencies have already named one.

The creation of CAIO positions is a significant step toward an AI-enabled federal government. However, it presents challenges akin to those faced in the private sector. To navigate these challenges successfully, CAIOs should take five immediate actions to set the stage for success:

Lead the Mission: CAIOs must articulate a clear vision for AI adoption within their agencies, ensuring alignment and serving as the focal point for implementing AI priorities. The Chief AI Officer should report directly to the department or agency head to demonstrate that they have their full-throated support.

Balance Innovation and Risk: Many government functions are considered no-fail missions—protecting the nation, providing uninterrupted financial and medical benefits, securing domestic and international travel, building weapon systems, and serving as the nation’s eyes and ears through intelligence collection and analysis. Even seemingly small error rates may be intolerable. Yet with AI, risk aversion offers a path to stagnation and obsolescence. CAIOs should fight to strike a balance between each agency’s legitimate concerns about risks, and the imperative to accelerate AI adoption and integration.

Quick Wins and Strategy: CAIOs should identify low-hanging fruit that, with focused senior-level attention and a burst of resources, can deliver demonstrable outcomes that are clearly AI-driven. This creates a virtuous cycle of success that opens the aperture for the more difficult and ambitious work to come. AI pilots can be chosen thoughtfully to demonstrate hypotheses that can then be affirmed in each department’s AI strategy. These quick wins can build momentum for broader AI strategy implementation.

Budgeting and Procurement: The budgets that CAIOs are working with now were likely built in early 2022 before large language models or generative AI were widely available. CAIOs should work with agency chief financial officers and department comptrollers to identify current-year funds for reprogramming. At the same time, they need to shape future year budgets in ways that reflect the required infusion of resources in support of the entire AI lifecycle.

Yet even when funds are identified, procurement processes often move slower than the pace of technology — a product on the cutting edge today may be on the path to obsolescence tomorrow. CAIOs should work with acquisition and contracting officials to take full advantage of extant authorities while seeking new and more flexible authorities to accelerate AI procurement.

Talent Acquisition: The scarcity of AI talent necessitates creative approaches to recruitment and retention within the public sector. CAIOs should push to hire AI experts directly, but to move faster they should also hire outside AI experts for temporary assignments through pathways such as fellowships from corporations, think tanks, and academia, or in excepted service or special government employee roles. CAIOs can pursue a strategy of establishing a centralized AI talent hub that the rest of the department or agency can access, or of placing talent in key directorates and offices that are leveraging AI. A blend of different human capital solutions will help accelerate AI adoption across the government.

These strategies are not only aimed at integrating AI into federal operations but also at leveraging its potential to enhance public service delivery. The CAIO’s role is pivotal in this process, requiring a blend of visionary leadership, strategic planning, and operational acumen.

The experiences of the Defense Department’s Joint AI Center and Chief Digital and AI Office and the Department of Homeland Security’s AI Task Force exemplify the multifaceted opportunities and challenges AI presents. These initiatives highlighted the necessity for a centralized strategy to provide direction, coupled with the flexibility to foster innovation and experimentation within a decentralized framework. Absent the proper balance between centralization and decentralization, one of two things will happen: AI will never scale beyond pilot projects — overly decentralized — or the end users’ needs will be marginalized to the point of failure — overly centralized. The balancing act between rapid technological adoption and the careful management of associated risks underscores the complex landscape that CAIOs navigate.

The decision to institutionalize the role of CAIOs demonstrates a clear acknowledgment of AI’s strategic significance. This action signifies a deeper commitment to keeping the United States at the forefront of technological innovation, emphasizing the use of AI to improve public service delivery, enhance operational efficiency, and safeguard national interests. As we navigate this still-uncharted territory, leadership, innovation, and responsible governance will be essential in realizing the full promise of AI within the federal realm. CAIOs will play an indispensable role in shaping the government’s AI-enhanced future.

Joel Meyer served as the Deputy Assistant Secretary of Homeland Security for Strategic Initiatives in the Biden Administration, where he drove the creation of DHS’s Artificial Intelligence Task Force and the Third Quadrennial Homeland Security Review. He has led public sector businesses at three artificial intelligence technology startups, including currently serving as President of Public Sector at Domino Data Lab, provider of the leading enterprise AI platform trusted by over 20% of the Fortune 100 and major government agencies.

Lieutenant General John (Jack) N.T. Shanahan, United States Air Force, Retired, retired in 2020 after a 36-year military career. Jack served in a variety of operational and staff positions in various fields including flying, intelligence, policy, and command and control. As the first Director of the Algorithmic Warfare Cross-Functional Team (Project Maven), Jack established and led DoD’s pathfinder AI fielding program charged with bringing AI capabilities to intelligence collection and analysis. In his final assignment, he served as the inaugural Director of the U.S. Department of Defense Joint Artificial Intelligence Center.

Both authors serve as Commissioners on the Atlantic Council’s Commission on Software-Defined Warfare.

AI talent role, releasing code, deadline extension among additions in OMB memo
https://fedscoop.com/ai-talent-role-releasing-code-deadline-extension-among-additions-in-omb-memo/
Fri, 29 Mar 2024 16:40:52 +0000

Requiring the release of custom AI code, designating an “AI Talent Lead,” and extending deadlines were among the changes made to the final version of a White House memo on AI governance.

The post AI talent role, releasing code, deadline extension among additions in OMB memo appeared first on FedScoop.

Additions and edits to the Office of Management and Budget’s final memo on AI governance create additional public disclosure requirements, provide more compliance time to federal agencies, and establish a new role for talent.

The policy, released Thursday, corresponds with President Joe Biden’s October executive order on AI and establishes a framework for federal agency use and management of the technology. Among the requirements, agencies must now vet their AI uses for risks, expand what they share in their annual AI use case inventories, and select a chief AI officer.

While the final version largely tracks with the draft version that OMB published for public comment in November, there were some notable changes. Here are six of the most interesting alterations and additions to the policy: 

1. Added compliance time: The new policy changes the deadline for agencies to be in compliance with risk management practices from Aug. 1 to Dec. 1, giving agencies four more months than the draft version. The requirement states that agencies must implement risk management practices or stop using safety- or rights-impacting AI tools until the agency is in compliance. 

In a document published Thursday responding to comments on the draft policy, OMB said it received feedback that the August deadline was “too aggressive” and that timeline didn’t account for action OMB is expected to take later this year on AI acquisition. 

2. Sharing code, data: The final memo adds an entirely new section requiring agencies to share custom-developed AI code and model information on an ongoing basis. Agencies must “release and maintain that code as open source software on a public repository” under the memo, unless sharing it would pose certain risks or it’s restricted by law, regulation, or contract.

Additionally, the memo states that agencies must share and release data used to test AI if it’s considered a “data asset” under the Open, Public, Electronic and Necessary (OPEN) Government Data Act, a federal law that requires such information to be published in a machine-readable format.

Agencies are required to share whatever information they can, even if a portion of the information can’t be released publicly. The policy further states that agencies should, where they’re able, share resources that can’t be released without restrictions through federally operated means that allow controlled access, like the National AI Research Resource (NAIRR).

3. AI Talent Lead: The policy also states agencies should designate an “AI Talent Lead,” which didn’t appear in the draft. That official, “for at least the duration of the AI Talent Task Force, will be accountable for reporting to agency leadership, tracking AI hiring across the agency, and providing data to [the Office of Personnel Management] and OMB on hiring needs and progress,” the memo says. 

The task force, which was established under Biden’s AI executive order, will provide that official with “engagement opportunities to enhance their AI hiring practices and to drive impact through collaboration across agencies.” The memo also stipulates that agencies must follow hiring practices in OPM’s forthcoming AI and Tech Hiring Playbook.

Biden’s order placed an emphasis on AI hiring in the federal government, and so far OPM has authorized direct-hire authority for AI roles and outlined incentives for attracting and retaining AI talent. 

4. Aggregate metrics: Agencies and the Department of Defense will both have to “report and release aggregate metrics” for AI uses that aren’t included in their public inventory of use cases under the new memo. The draft version applied that requirement only to the DOD, but the version released Thursday extended it to federal agencies as well.

Those annual disclosures will provide information about how many of the uses are rights- and safety-impacting and about their compliance with the standards the memo outlines for those kinds of uses. 

The use case inventories, which were established by a Trump-era executive order and later enshrined in federal statute, have so far lacked consistency across agencies. The memo and the corresponding draft guidance for the 2024 inventories seek to enhance and expand those reporting requirements.

5. Safety, rights determinations: The memo also adds a new requirement that agencies validate the determinations and waivers their CAIOs make on safety- and rights-impacting use cases, and publish a summary of those decisions annually. 

Under the policy, CAIOs can determine that an AI application presumed to be safety- or rights-impacting — a category that includes a wide array of uses, such as election security and biometric identification — doesn’t match the memo’s definitions for what should be considered safety- or rights-impacting. CAIOs may also waive certain requirements for those uses.

While the draft stipulated that agencies should report lists of rights- and safety-impacting uses to OMB, the final memo instead requires the annual validation of those determinations and waivers and public summaries.

In its response to comments, OMB said it made the update to address concerns from some commenters that CAIOs “would hold too much discretion to waive the applicability of risk management requirements to particular AI use cases.” 

6. Procurement considerations: Three procurement recommendations related to test data, biometric identification, and sustainability were also added to the final memo. 

On test data, OMB recommends that agencies ensure developers and vendors don’t train an AI system on the same test data the agency might use to evaluate it. On biometrics, the memo encourages agencies to assess risks and request documentation on accuracy when procuring AI systems that use identifiers such as faces and fingerprints. 

And finally on sustainability, the memo includes a recommendation that agencies consider the environmental impact of “computationally intensive” AI systems. “This should include considering the carbon emissions and resource consumption from supporting data centers,” the memo said. That addition was a response to commenters who wanted the memo to expand risk assessment requirements to include environmental considerations, according to OMB.

The post AI talent role, releasing code, deadline extension among additions in OMB memo appeared first on FedScoop.

AI transparency creates ‘big cultural challenge’ for parts of DHS, AI chief says https://fedscoop.com/ai-transparency-creates-big-cultural-challenge-for-parts-of-dhs-ai-chief-says/ Wed, 20 Mar 2024 16:25:46 +0000 https://fedscoop.com/?p=76678 Transparency around AI may result in issues for DHS elements that are more discreet in their operations and the information they share publicly, CIO Eric Hysen said.

As the Department of Homeland Security ventures deeper into adopting artificial intelligence in the transparent, responsible manner laid out by Biden administration policies, friction is likely for some of the department’s elements that don’t typically operate so openly, according to DHS’s top AI official.

Eric Hysen, CIO and chief AI officer for DHS, said Tuesday at the CrowdStrike Gov Threat Summit that “transparency and responsible use [of AI] is critical to get right,” especially for applications in law enforcement and national security settings where the “permission structure in the public eye, in the public mind” faces a much higher bar.

But that also creates a conundrum for those DHS elements that are more discreet in their operations and the information they share publicly, Hysen acknowledged.

“What’s required to build and maintain trust with the public in our use of AI, in many cases, runs counter to how law enforcement and security agencies generally tend to operate,” he said. “And so I think we have a big cultural challenge in reorienting how we think about privacy, civil rights, transparency as not something that we do but that we tack on” to technology as an afterthought, but instead “something that has to be upfront and throughout every stage of our workplace.”

While President Joe Biden’s AI executive order gave DHS many roles in leading the development of safety and security in the nation’s use of AI applications, internally, Hysen said, the department is focused on “everything from using AI for cybersecurity to keeping fentanyl and other drugs out of the country or assisting our law enforcement officers and investigators in investigating crimes and making sure that we’re doing all of that responsibly, safely and securely.”

Hysen’s comments came a day after DHS published its first AI roadmap on Monday, spelling out the agency’s current uses of the technology and its plans for the future. Responsible use of AI is a key part of the roadmap, which points to policies DHS issued in 2023 promoting transparency and responsibility in the department’s AI adoption and adds that “[a]s new laws and government-wide policies are developed and there are new advances in the field, we will continue to update our internal policies and procedures.”

“There are real risks to using AI in mission spaces that we are involved in. And it’s incumbent on us to take those concerns incredibly seriously and not put out or use new technologies unless we are confident that we are doing everything we can, even more than what would be required by law or regulation, to ensure that it is responsible,” Hysen said, adding that his office worked with DHS’s Privacy Office, the Office for Civil Rights and Civil Liberties and the Office of the General Counsel to develop those 2023 policies.

To support the responsible development and adoption of AI, Hysen said DHS is in the midst of hiring 50 AI technologists to stand up its new AI Corps, which the department announced last month.

“We are still hiring if anyone is interested,” Hysen said, “and we are moving aggressively to expand our skill sets there.”

DOJ picks Princeton computer scientist as its chief AI officer https://fedscoop.com/doj-chief-ai-officer-jonathan-mayer/ Thu, 22 Feb 2024 21:57:27 +0000 https://fedscoop.com/?p=76160 Jonathan Mayer, a former FCC technologist and policy adviser to Kamala Harris, will lead the Justice Department’s artificial intelligence work, including its Emerging Technology Board.

The Department of Justice has tapped Princeton University professor Jonathan Mayer as its first chief artificial intelligence officer and chief science and technology adviser, the agency announced Thursday.

The DOJ’s appointment of Mayer — who teaches in Princeton’s computer science department and in its school of public and international affairs — satisfies the White House AI executive order requirement that each of the Chief Financial Officers Act agencies designate a permanent CAIO. FedScoop has tracked the appointments of those AI officials across agencies.

In a statement announcing Mayer’s selection, Attorney General Merrick Garland said that the DOJ’s mission depends on its ability to “keep pace with rapidly evolving scientific and technological developments.”

“Jonathan’s expertise will be invaluable in ensuring that the entire Justice Department — including our law enforcement components, litigating components, grantmaking entities, and U.S. Attorneys’ Offices — is prepared for both the challenges and opportunities that new technologies present,” Garland added.

As the DOJ’s CAIO, Mayer will oversee the department’s Emerging Technology Board, which is tasked with coordinating and governing AI and other types of emerging tech throughout the agency. More broadly, Mayer — who holds a Ph.D. in computer science and a law degree from Stanford — will lead cross-agency and intra-department efforts on AI and related issues.

The DOJ currently has 15 AI use cases listed in its inventory, including a disclosure covered by FedScoop last month that the FBI is in the “initiation” phase of using Amazon Rekognition, an image and video analysis software. 

Neither the DOJ nor Amazon, which previously issued a moratorium on police use of Rekognition, would confirm to FedScoop at the time if Rekognition’s facial recognition capabilities were accessible to or in use by the FBI.

Other AI use cases revealed by the DOJ in its inventory include a machine translation service for the FBI, a voice-to-text transcription system for the agency’s Office of the Inspector General, and gunshot detection and identification software for the Bureau of Alcohol, Tobacco, Firearms and Explosives, among others.

Mayer’s move from Princeton — where his tech-, policy- and law-focused research has centered on criminal procedure, national security, and consumer protection — to the DOJ represents a return to public service. From November 2015 to March 2017, he served as chief technologist in the Federal Communications Commission’s Enforcement Bureau. After that, Mayer spent a year in then-Sen. Kamala Harris’s office as a technology law and policy adviser to the California Democrat.   

Nuclear Regulatory Commission taps its CIO and chief data officer to lead agency AI operations https://fedscoop.com/nrc-chief-ai-officer-david-nelson/ Fri, 15 Dec 2023 20:04:10 +0000 https://fedscoop.com/?p=75270 David Nelson will take on chief AI officer duties after the NRC told FedScoop last month that it was analyzing and assessing how the White House executive order applies to it as an independent agency.

The Nuclear Regulatory Commission has selected David Nelson, the agency’s chief data officer and chief information officer, as its chief artificial intelligence officer, a spokesperson for the agency told FedScoop.

President Joe Biden’s recent executive order on AI requires many federal agencies to name officials to the CAIO position. Several agencies, including the National Science Foundation and the Education Department, have already announced their picks.

The NRC said in a statement to FedScoop last month that it is “analyzing the executive order and assessing whether and how it applies” given its status as an independent regulatory agency.

“We’re aware of the growth of AI and the need to prepare for its use in the nuclear field,” the NRC statement said. 

“For those reasons, we issued an AI Strategic Plan that outlines our need to cultivate an AI-proficient workforce, keep pace with AI technological innovations, and ensure the safe and secure use of AI in NRC-regulated activities.”

Housing and Urban Development names Vinay Singh as chief AI officer https://fedscoop.com/hud-names-vinay-singh-chief-ai-officer/ Wed, 22 Nov 2023 18:37:36 +0000 https://fedscoop.com/?p=74930 Vinay Singh is currently the department’s chief financial officer and will work closely with the agency’s senior IT and policy officials in the new role.

The Department of Housing and Urban Development has selected its top financial official, Vinay Singh, to serve as the department’s chief artificial intelligence officer following a Biden executive order requiring such a position at federal agencies.

Singh will work closely with Beth Niblock, the department’s chief information officer, and senior official for policy development Solomon Greene “to advance responsible AI innovation, increase transparency, protect HUD employees and the public they serve, and manage risks from sensitive government uses of AI,” a spokesperson told FedScoop in an emailed statement. 

Under President Joe Biden’s recent AI executive order (EO 14110), certain government agencies will be required to name a chief AI officer within 60 days of the Office of Management and Budget’s corresponding guidance, which is currently in draft form and being finalized. According to the order, the new CAIOs are responsible for coordinating an agency’s uses of AI, promoting AI innovation and managing risks, among other things.

While some agencies, such as the Department of Health and Human Services and the Department of Homeland Security, already had chief AI officers before the Biden order, others are only beginning to publicly name their officials. 

In response to FedScoop inquiries, for example, the National Science Foundation and the General Services Administration both disclosed that their chief data officers will serve as each agency’s chief AI officer. The Department of Education also said it tapped its chief technology officer for the role.

Among the responsibilities outlined for the chief AI officers in OMB’s draft guidance is vice chairing their agency’s AI governance board. Those boards, which will coordinate AI adoption and manage risk, are required within 60 days of OMB’s guidance and will be chaired by each agency’s deputy secretary. 

Prior to the Biden administration order and draft guidance, agencies were already required to have a responsible AI official under a Trump administration order (EO 13960). But according to OMB’s draft guidance, the new chief AI officers will also carry out those responsibilities. For HUD, a decision about the existing role is forthcoming. 

“The AI Governance Board will determine the appropriate role and integration of the Responsible AI Official into the important work ahead,” the HUD spokesperson said.

Outside of naming a CAIO, other agencies told FedScoop they’re making progress on AI-related work in response to inquiries.

A NASA spokesperson, for example, said the agency “is developing recommendations on leveraging emerging Artificial Intelligence technology to best serve our goals and missions, from sifting through Earth science imagery to identifying areas of interest, to searching for exoplanet data from NASA’s James Webb Space Telescope, scheduling communications from the Perseverance Mars rover through the Deep Space Network, and more.”

Similarly, a Department of Transportation spokesperson said the agency is working on a “strategy to align with the EO.”

Rebecca Heilweil contributed to this story.
