AI Archives | FedScoop https://fedscoop.com/category/ai/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

Bipartisan Senate bill would establish federal AI acquisition guardrails https://fedscoop.com/bipartisan-bill-would-establish-ai-acquisition-guardrails/ Wed, 12 Jun 2024 22:13:50 +0000 A new bill from Sens. Gary Peters, D-Mich., and Thom Tillis, R-N.C., would require agencies to assess the risks of AI before acquiring it.

Federal agencies would have to assess the risks of artificial intelligence technologies before purchasing them and using them under a new bipartisan Senate bill. 

The legislation, among other things, would establish pilot programs to try out “more flexible, competitive purchasing practices” and would require government contracts for AI “to include safety and security terms for data ownership, civil rights, civil liberties and privacy, adverse incident reporting and other key areas,” according to a release.

“Artificial intelligence has the power to reshape how the federal government provides services to the American people for the better, but if left unchecked, it can pose serious risks,” Sen. Gary Peters, D-Mich., who sponsors the bill with Sen. Thom Tillis, R-N.C., said in a statement. “These guardrails will help guide federal agencies’ responsible adoption and use of AI tools, and ensure that systems paid for by taxpayers are being used safely and securely.”

According to the release, the Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act builds on a law passed in 2022 that required agencies to protect privacy and civil rights when purchasing AI. That legislation was also sponsored by Peters. President Joe Biden cited that law in a section of his executive order on AI that directed the Office of Management and Budget to take action on addressing federal AI acquisition. 

The OMB in March asked for input on AI procurement, including how the administration can promote competition and protect the government’s rights to access its data in those contracts. The administration has said it plans to take action on AI procurement later this year.

“As the role of artificial intelligence in the public and private sectors continues to grow, it is crucial federal agencies have a robust framework for procuring and implementing AI safely and effectively,” Tillis said in the release. 

A Senate Homeland Security and Governmental Affairs Committee aide told FedScoop that Peters, who chairs the panel, plans a markup for the bill this summer. Once it’s passed by the panel, the aide said Peters “will keep all options on the table and pursue any path forward, whether that’s advancing the bill as a standalone or as part of a larger vehicle.” 

The bill has the support of the Center for Democracy and Technology, the Transparency Coalition, the AI Procurement Lab, and the Institute of Electrical and Electronics Engineers (IEEE), according to the release.

Bipartisan Senate bill calls on Commerce to lead AI push with small businesses https://fedscoop.com/bipartisan-senate-bills-calls-on-commerce-to-lead-ai-push-with-small-businesses/ Wed, 12 Jun 2024 19:05:29 +0000 https://fedscoop.com/?p=78778 Legislation from Sens. Cantwell and Moran tasks Commerce and SBA with the creation of AI training resources for small businesses in underserved communities.

A new bill from a bipartisan pair of senators aims to accelerate small business use of artificial intelligence, assigning new responsibilities to both the Commerce Department and the Small Business Administration to provide training in the technology. 

The legislation from Sens. Maria Cantwell, D-Wash., and Jerry Moran, R-Kan., titled the Small Business Artificial Intelligence Training and Toolkit Act, would have the Commerce secretary work with the administrator of the SBA on creating AI training resources for small businesses located in rural areas, Tribal communities, or other underserved regions. The training resources would be centered on artificial intelligence and emerging technologies, including quantum technologies, among other topics.

Those trainings would be provided via grants distributed by the SBA, as well as through gifting from the private sector. The Commerce Department would also submit reports to Congress about the state of the program. The legislation requires Commerce to update these trainings, too. 

“Small businesses are the foundation of the U.S. economy, making up 99 percent of all businesses,” Cantwell said in a statement. “They drive economic growth and innovation. It is essential that all American entrepreneurs — especially our small businesses — have access to AI training and reskilling in the 21st-century marketplace. This bill gives small businesses a boost with new tools to thrive as we step into this innovative era.”

The SBA has already taken some steps to encourage businesses to deploy the technology, though the agency’s ability to inventory its AI use cases has also attracted some scrutiny from Congress.

OpenAI official meets with the USAID administrator https://fedscoop.com/openai-official-meets-with-the-usaid-administrator/ Tue, 11 Jun 2024 18:13:52 +0000 https://fedscoop.com/?p=78760 Samantha Power’s meeting with OpenAI’s Anna Makanju comes amid continued investments and interest from the international development agency in the technology.

USAID Administrator Samantha Power met this week with OpenAI’s head of global affairs, according to an agency press release, a move that comes as the international development organization continues to invest in artificial intelligence while also raising concerns about the technology’s privacy, security, bias, and risks.

The Monday meeting with OpenAI’s Anna Makanju focused on artificial intelligence’s impact on global development, the release stated. Topics included “advancing progress in key sectors like global health and food security, preventing the misuse of AI, and strengthening information integrity and resilience in USAID partner countries.” 

The announcement comes as several federal agencies, including NASA and the Department of Homeland Security, experiment with OpenAI’s technology. USAID is also prioritizing the exploration of artificial intelligence use cases and is in the midst of developing a playbook for AI in global development.

“Administrator Power and Vice President Makanju also discussed USAID’s commitment to localization, and the potential for generative AI and other AI tools to support burden reduction for USAID implementing partners – in particular, burdens that disproportionately impact local organizations,” the agency said.

Meanwhile, OpenAI appears to be continuing to look for ways to work with U.S. federal agencies. Makanju, for her part, has previously said that government use of OpenAI tools is a goal for the company. At a conference hosted by Semafor in April, she said she was “bullish” on government use of the technology because of its role in providing services to people.

IRS dinged by GAO for subpar documentation of AI audit models https://fedscoop.com/irs-ai-audit-models-gao-report/ Fri, 07 Jun 2024 21:17:27 +0000 https://fedscoop.com/?p=78723 The tax agency has taken steps to address the watchdog’s concerns over how AI is used to select audit cases.

An IRS pilot program that uses artificial intelligence to select audit cases and identify noncompliance didn’t properly document elements of the technology’s sample selection models, a new watchdog report found.

Because the tax agency had “not completed its documentation of several elements” of the models used for its National Research Program audits, the IRS could struggle to “retain organizational knowledge, ensure the models are implemented consistently, and make the process more transparent to future users,” according to the Government Accountability Office.

The IRS first piloted AI techniques for sampling tax returns in NRP audits during the 2019 filing season. The tax agency selected 4,000 returns for audit through that new AI-powered methodology, while an equal number was chosen through its traditional selection process. The following year, the NRP sample was approximately 1,500, all selected with the AI-informed process, and in 2021, 4,000 returns were picked based on two different AI samples.

The GAO noted that the implementation of redesigned sample selection processes “can be a complex undertaking,” especially when an emerging technology like AI is added to the mix. With that in mind, the watchdog pointed to the usefulness of its AI accountability framework.

“The AI Framework emphasizes the importance of documentation to help ensure that the AI system’s objectives are met,” the GAO wrote. “It further emphasizes that documentation can offer a way for agencies to provide transparency, such as (1) what the system is for, (2) what it is not for, (3) how it was designed, and (4) what its limitations are.”

The GAO’s audit found that the IRS had fallen short in two framework areas: clearly defining and documenting roles and responsibilities for each step of the AI sample selection process, and documenting the variables used to develop and run those selection models.

As the IRS reviewed the GAO report in April and responded with comments, it made two changes to address the watchdog’s concerns: writing a draft memo that listed the people responsible for steps in the AI development and sample selection process, and updating a technical document with specifics on variables and the code behind the AI models. 

“These actions will increase IRS’s ability to effectively implement and ensure operational effectiveness of the AI models,” the GAO said.

Labor Department has ‘a leg up’ on artificial intelligence, new CAIO says https://fedscoop.com/dol-caio-leg-up-ai-modernization/ Fri, 07 Jun 2024 20:34:29 +0000 https://fedscoop.com/?p=78718 Though the agency isn’t pursuing a “big-bang approach” when it comes to AI, Mangala Kuppa says DOL is poised to scale those systems quickly.

A shout-out from the White House doesn’t happen to federal agencies every day, but the Department of Labor got a turn in March when it was lauded in a fact sheet for “leading by example” with its work on principles to mitigate artificial intelligence’s potential harms to employees. 

Mangala Kuppa, who took over as DOL’s chief AI officer this week after previously serving as its deputy CAIO, believes the agency has even more to be confident about when it comes to its work on the technology, possessing a “leg up” on scaling AI quickly.

In an interview with FedScoop, Kuppa pointed to DOL’s previous efforts to modernize internal operations and customer-facing services as part of the department’s journey to implement emerging technologies like AI. Having foundational building blocks and existing infrastructure, along with existing AI applications, has made it “easier” for the agency to scale up, she said. 

“It’s not a ‘big bang’ approach,” said Kuppa, who also serves as DOL’s chief technology officer. “Another aspect that we take very seriously in modernizing is [to] take this opportunity to not just update the technology, but also take this opportunity to re-engineer the business process to help the public.” 

Kuppa pointed to an internal shared services initiative that designated the agency’s Office of the Chief Information Officer to be a “shared services provider for all Departmental IT services.”  That process, Kuppa said, has allowed the department to keep an inventory of all systems and technologies and understand where the legacy systems or opportunities for improvement might exist.

“Using that methodology, we’ve been looking at all high-risk systems, because maybe the technology is very legacy and outdated,” Kuppa said. “We’ve been using that methodology to start those modernization initiatives.”

By considering the age of the technology, the operations burden, security vulnerabilities, regulation compliance and other parameters, DOL came up with a methodology that scores each mission system to determine if it is a candidate for modernization. The agency then looks at the scores on a consistent basis and revises based on new information that becomes available.
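DOL has not published the details of its scoring methodology, so as a purely illustrative sketch of how a weighted system-scoring approach like the one described above can work, consider the following. All field names, weights and the threshold are hypothetical, chosen only to mirror the factors the article names (technology age, operations burden, security vulnerabilities, regulatory compliance):

```python
# Hypothetical sketch of a weighted modernization-scoring methodology.
# Nothing here reflects DOL's actual formula, fields or weights.

from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    tech_age_years: float   # age of the underlying technology
    ops_burden: float       # 0-10 scale for operations/maintenance burden
    open_vulns: int         # unresolved security findings
    compliance_gaps: int    # unmet regulatory requirements

# Illustrative weights; an agency would calibrate these and revise them
# as new information becomes available, as the article describes.
WEIGHTS = {"age": 2.0, "ops": 3.0, "vulns": 4.0, "compliance": 5.0}

def modernization_score(s: SystemProfile) -> float:
    """Higher score = stronger candidate for modernization."""
    return (WEIGHTS["age"] * min(s.tech_age_years, 20)
            + WEIGHTS["ops"] * s.ops_burden
            + WEIGHTS["vulns"] * s.open_vulns
            + WEIGHTS["compliance"] * s.compliance_gaps)

def rank_candidates(systems: list[SystemProfile],
                    threshold: float = 50.0) -> list[SystemProfile]:
    """Return systems at or above the threshold, highest-scoring first."""
    scored = sorted(systems, key=modernization_score, reverse=True)
    return [s for s in scored if modernization_score(s) >= threshold]
```

The key design point such a scheme captures is that the scores are recomputed on a consistent basis, so a system's ranking shifts as its inputs (new vulnerabilities, new compliance requirements) change.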

These systems can be major: the system run by DOL’s Employment and Training Administration, for example, which issues labor certifications when a company files to hire immigrant workers, was scored for modernization.

“Being an immigrant, I wasn’t aware DOL had a hand in my immigration journey there,” Kuppa said. 

The Technology Modernization Fund has played an “instrumental” role in the department “finding the resources to modernize,” Kuppa said.

She gave the example of using TMF funds to expedite temporary visa applications, which is expected to save 45 days of cycle time for processing labor certification applications.

According to a case study on the TMF site, that project contributed to $1.9 million in annual cost savings, and a key part of the innovation allowed the application forms to auto-populate with the previous year’s information.

“Usually all immigrants eventually start filing for permanent visa applications,” Kuppa said. “Again, you have to repeat the process of labor certification, and so we had two different systems not communicating with each other.”

For Kuppa, modernization is ultimately an exercise in reimagining where new technologies can ultimately be most helpful.

“We have great partnership, we work very closely with our programs and then we have these dialogues every day, in terms of the system’s development lifecycle,” she said. “And that’s how we approach modernization.”

Treasury seeks information on AI uses and risks in the financial sector https://fedscoop.com/treasury-department-ai-rfi-janet-yellen/ Thu, 06 Jun 2024 20:48:40 +0000 https://fedscoop.com/?p=78710 The RFI continues an agency push for “stakeholder engagement to improve our understanding of AI in financial services,” Secretary Janet Yellen says.

The Treasury Department is seeking public feedback from financial institutions, consumers, academics, advocates and other industry stakeholders on the uses, opportunities and risks posed by artificial intelligence as part of an ongoing agencywide exploration of the technology’s potential.

The request for information, released Thursday, asks for comments on advancements in existing AI tools and on emerging AI technologies that can benefit the financial sector. The RFI has specific callouts for information on the use of AI in financial products and services, risk management, capital markets, internal operations, customer service, marketing and regulatory compliance. 

“Treasury is proud to be playing a key role in spurring responsible innovation, especially in relation to AI and financial institutions. Our ongoing stakeholder engagement allows us to improve our understanding of AI in financial services,” Under Secretary for Domestic Finance Nellie Liang said in a statement. “The Biden administration is committed to fostering innovation in the financial sector while ensuring that we protect consumers, investors, and our financial system from risks that new technologies pose.”

Treasury listed 19 questions, plus numerous follow-ups, for respondents within its RFI, including: asking for feedback on any AI models that financial institutions are currently using; whether AI use cases differ within institutions; what barriers small banks face in AI deployment; how AI has benefited low-to-moderate income consumers and/or underserved individuals and communities; the extent to which AI models are developed in-house, by third parties or via open-source code; and how industry is applying risk management frameworks to AI use.

During remarks Thursday at the Financial Stability Oversight Council Conference on Artificial Intelligence and Financial Stability in Washington, D.C., Treasury Secretary Janet Yellen touted the release of the RFI as a way of “continuing our stakeholder engagement to improve our understanding of AI in financial services.” Yellen also announced a future roundtable discussion, convened by Treasury’s Federal Insurance Office, on the benefits and challenges of AI use for insurers. 

“FSOC will continue its efforts to monitor AI’s impact on financial stability, facilitate the exchange of information, and promote dialogue among financial regulators,” Yellen said. “Given how quickly AI technology is developing, with fast-evolving potential use cases for financial firms and market participants, scenario analysis could help regulators and firms identify potential future vulnerabilities and inform what we can do to enhance resilience.”

Much of Treasury’s RFI is informed by the agency’s previous work on AI, including a March report that sounded the alarm on AI-specific cybersecurity risks to the financial sector. Just last month, the department issued a national strategy for combating terrorism and other illicit financing, which called out the benefits AI might have in winning that fight.

Closer to home, Treasury has experimented with its own AI use cases, while also engaging in public-private partnerships to ensure that smaller financial institutions have the same defensive AI capabilities as the country’s biggest banks. 

FBI’s AI work includes ‘Shark Tank’-style idea exploration, tip line use case https://fedscoop.com/fbis-ai-work-includes-shark-tank-style-idea-exploration-tip-line-use-case/ Wed, 05 Jun 2024 18:44:25 +0000 https://fedscoop.com/?p=78689 Adopting AI for its own work — such as the FBI’s tip line — and identifying how adversaries could be using the technology are both in focus for the agency, officials said.

The FBI’s approach to artificial intelligence ranges from figuring out how bad actors are harnessing the growing technology to adopting its own uses internally, officials said Tuesday, including through a “Shark Tank”-style model aimed at exploring ideas.

Four FBI technology officials who spoke at a GDIT event in Washington detailed the agency’s focus on promoting AI innovations where those tools are merited — such as in its tip line — and ensuring uses could ultimately meet the law enforcement agency’s need to have technology that could later be defended legally. 

In the generative AI space, the pace of change in models and use cases is a concern when the agency’s “work has to be defensible in court,” David Miller, the FBI’s interim chief technology officer, said during the Scoop News Group-produced event. “That means that when we deploy and build something, it has to be sustainable.”

That Shark Tank format, which the agency has noted it’s used previously, allows the FBI to educate its organization about its efforts to explore the technology in a “safe and secure way,” centralize use cases, and get outcomes it can explain to leadership.

Under the model, named after the popular ABC show “Shark Tank,” Miller said the agency has put in place a 90-day constraint to prove a concept; at the end, the agency has “validated learnings” about cost, missing skill sets that are needed, and any concerns about integrating the technology into the organization.

“By establishing that director’s innovation Shark Tank model, it allows us to have really strategic innovation in doing outcomes,” Miller said. 

Some AI uses are already being deployed at the agency.

Cynthia Kaiser, deputy assistant director of the FBI’s Cyber Division, pointed to the agency’s use of AI to help manage the FBI tip line. That phone number serves as a way for the public to provide information to the agency. While Kaiser said there will always be a person taking down concerns or tips through that line, she also said people can miss things. 

Kaiser said the FBI is using natural language processing models to go over the synopsis of calls and online tips to see if anything was missed. That AI is trained using the expertise of people who have been taking in the tips for years and know what to flag, she said, adding that the technology helps the agency “fill in the cracks.” 
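The FBI’s actual system is a trained natural language processing model and has not been published; as a heavily simplified, hypothetical illustration of the triage workflow described above, a second-pass review might score each call synopsis against patterns that experienced intake staff historically flagged and route high-scoring items back for human re-review. The patterns, weights and threshold below are invented for illustration only:

```python
# Toy stand-in for a trained NLP tip-triage model. Everything here
# (patterns, weights, threshold) is hypothetical; a real system would
# use a model trained on reviewer expertise, as the article describes.

import re

FLAG_PATTERNS = {
    r"\bthreat(en(ed|ing)?)?\b": 3.0,
    r"\bweapon(s)?\b": 2.5,
    r"\bexplosive(s)?\b": 3.0,
    r"\bspecific (date|location)\b": 1.5,
}

def flag_score(synopsis: str) -> float:
    """Sum the weights of every flag pattern found in the synopsis."""
    text = synopsis.lower()
    return sum(w for pat, w in FLAG_PATTERNS.items() if re.search(pat, text))

def needs_second_look(synopsis: str, threshold: float = 3.0) -> bool:
    """True if the synopsis should be routed back for human re-review."""
    return flag_score(synopsis) >= threshold
```

The point of the design — whether implemented with a trained model or otherwise — is that the automated pass supplements rather than replaces the human intake step, helping “fill in the cracks” as Kaiser put it.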

According to the Justice Department’s use case inventory for AI, that tool has been used since 2019, and is also used to “screen social media posts directed to the FBI.” It is one of five uses listed for the FBI. Other disclosed uses include translation software and Amazon’s Rekognition tool, which has attracted controversy in the past for its use as a facial recognition tool.

To assess AI uses and whether they’re needed, the officials also said the agency is looking to its AI Ethics Council, which has been around for several years.

Miller, who leads that body, said the council includes members from across the agency, including the science and technology and human resources branches and the offices for integrity and compliance and for diversity, equity and inclusion. Currently, the council is going through what Miller called “version two,” in which it’s tackling scale and doing more “experimental activities.”

At the time it was created, Miller said, the panel established a number of ethical controls similar to that of the National Institute of Standards and Technology’s Risk Management Framework. But he added that it can’t spend “weeks reviewing a model or reviewing one use case” and has to look at how it can “enable the organization to innovate” while still taking inequities and constraints into account. 

Officials also noted that important criteria for the agency’s own use of the technology are transparency and consistency. 

Kathleen Noyes, the FBI’s section chief of Next Generation Technology and Lawful Access, said on Tuesday that one of the agency’s requests for industry is that systems “can’t be a black box.”

“We need some transparency and accountability for knowing when we’re invoking an AI capability and when we’re not,” Noyes said.

She said the FBI started with a risk assessment in which it analyzed its needs and use cases to assist with acquisition and evaluation. “We had to start strategic — I think everyone does,” she said, adding that the first question to answer is “are we already doing this?”

At the same event, Justin Williams, deputy assistant director for the FBI’s Information Management Division, also noted that an important question when they’re using AI is whether they can explain the interface.

“I personally have used a variety of different AI tools, and I can ask the same question and get very similar but different answers,” Williams said. But, he added, it wouldn’t be good for the FBI if it can’t defend the consistency in the outputs it’s getting. That’s a “big consideration” for the agency as it slowly adopts emerging technologies, Williams said.

National lab official highlights role of government datasets in AI work https://fedscoop.com/national-lab-official-highlights-role-of-government-datasets-in-ai-work/ Wed, 05 Jun 2024 17:53:45 +0000 https://fedscoop.com/?p=78683 Jennifer Gaudioso of Sandia’s Center for Computing Research touted the work Department of Energy labs have done to support AI advances.

The Department of Energy’s national labs have an especially critical role to play in the advancement of artificial intelligence systems and research into the technology, a top federal official said Tuesday during a Joint Economic Committee hearing on AI and economic growth.

Jennifer Gaudioso, director of the Sandia National Laboratory’s Center for Computing Research, emphasized during her testimony the role that DOE’s national labs could have in both accelerating computing capacity and helping support advances in AI technology. She pointed to her own lab’s work in securing the U.S. nuclear arsenal — and the national labs’ historical role in promoting high-performance computing. 

“Doing AI at the frontier and at scale is crucial for maintaining competitiveness and solving complex global challenges,“ Gaudioso said. “Breakthroughs in one area beget discoveries in others.”

Gaudioso also noted the importance of building AI systems based on more advanced data than the internet-based sources used to build systems like ChatGPT. That includes government datasets, she added.

“What I get really excited about is the transformative potential of training models on science data,” she said. “We can then do new manufacturing. We can make digital twins of the human body to take drug discovery from decades down to months. Maybe 100 days for the next vaccine.” 

The national labs’ current work on artificial intelligence includes AI and nuclear deterrence, national security, non-proliferation, and advanced science and technology, Gaudioso shared. She also referenced the Frontiers in Artificial Intelligence for Science, Security and Technology — a DOE effort focused on using supercomputing for AI. The FASST initiative was announced last month. 

Last November, FedScoop reported on how the Oak Ridge National Laboratory in Tennessee was preparing its supercomputing resources — including the world’s fastest supercomputer, Frontier — for AI work. 

Tuesday’s hearing follows the White House’s continued promotion of new AI-focused policies and comes as Congress mulls legislation focused on both regulating and incubating artificial intelligence.

VA’s technical infrastructure is ‘on pretty good footing,’ CAIO and CTO says https://fedscoop.com/vas-technical-infrastructure-is-on-pretty-good-footing-caio-and-cto-says/ Tue, 04 Jun 2024 20:39:56 +0000 https://fedscoop.com/?p=78663 In an interview with FedScoop, Charles Worthington discusses the agency’s AI and modernization efforts amid scrutiny from lawmakers and the threat of budget cuts.

Working under the threat of technology-related budget cuts that have elicited concern from both sides of the aisle, the Department of Veterans Affairs has managed to make progress on several tech priorities, the agency’s artificial intelligence chief said last week.

In an interview with FedScoop, Charles Worthington, the VA’s CAIO and CTO, said the agency is engaged in targeted hiring for AI experts while also sustaining its existing modernization efforts. “I wish we could do more,” he said.

While Worthington wrestles with the proposed fiscal year 2025 funding reductions, the VA’s Office of Information and Technology also finds itself in the legislative crosshairs over modernization system upgrades, a supposed lack of AI disclosures, inadequate tech contractor sanctions, and ongoing scrutiny over its electronic health record modernization initiative with Oracle Cerner.

Worthington spoke to FedScoop about the VA’s embrace of AI, the status of its modernization push, how it is handling budget uncertainty and more.

Editor’s note: The transcript has been edited for clarity and length. 

FedScoop: I know that you’ve started your role as the chief AI officer at the Department of Veterans Affairs. And I wanted to circle back on some stuff that we’ve seen the VA engaged with this past year. The Office of Information and Technology has appeared before Congress, where legislators have voiced their concerns for AI disclosures, inadequate contractor sanctions, budgetary pitfalls in the fiscal year 2025 budget for VA OIT and the supply chain system upgrade. What is your response to them?

Charles Worthington: I think AI represents a really big opportunity for the VA and for every agency, because it really changes what our computing systems are going to be capable of. So I think we’re all going to have to work through what that means for our existing systems over the coming years, but I think really there’s hardly any part of VA’s software infrastructure that’s going to be untouched by this change in how computer systems work and what they’re capable of. So I think it’s obviously gonna be a big focus for us and for Congress over the next couple of years. 

FS: I want to take a step back and focus on the foundational infrastructure challenges that the VA has been facing. Do you attribute that to the emerging technologies’ need for more advanced computing power? What does that look like?

CW: I think overall, VA’s technical infrastructure is actually on a pretty good footing. We’ve spent a lot of time in the past 10 years with the migration to the cloud and with really leaning into using a lot of leading commercial products in the software-as-a-service model where that makes sense. So, by and large, I think we’ve done a good job of bringing our systems up to standard. I think it’s always a challenge in the VA and in government to balance the priorities of modernization and taking advantage of new capabilities with the priorities of running everything that you already have.

One of the unique challenges of this moment in time is that almost every aspect of the VA’s operations depends on technology in some way. There’s just a lot of stuff to maintain; I think we have nearly a thousand systems in operations. And then obviously, with something like AI, there’s a lot of new ideas about how we could do even more [to] use technology and even more ways to further our mission. 

FS: In light of these voiced concerns from legislators, as you progress into your role of chief AI officer, how do you anticipate the agency will be able to use emerging technologies like AI to its fullest extent?

CW: I think there’s really two priorities that we have with AI right now. One is, this represents an enormous opportunity to deliver services more effectively and provide great technology services to the VA staff, because these systems are so powerful and can do so many new things. One priority is to take advantage of these technologies, really to make sure that our operations are running as effectively as possible. 

On the other hand, I think this is such a new technology category that a lot of the existing processes we have around technology governance in government don't apply in exactly the same ways to artificial intelligence. So in a lot of ways, there are novel concerns that AI brings. … With an AI system that is, instead, taking those inputs and then generating a best guess or generating some piece of content, the methods for making sure those systems are working effectively are still being developed. At the same time as we're trying to take advantage of these new capabilities, we're also trying to build a framework that will allow us to safely use and deploy these solutions and to make sure that we're upholding the trust that veterans put in us to manage their data securely. 

FS: In what ways is the agency prioritizing AI requirements, especially those from the artificial intelligence executive order we saw last October, while maintaining a competitive edge, given that the fiscal year 2025 budget has seen a significant clawback of funds?

CW: We are investing a lot in standing up AI operations and governance. We have four main priorities that we're focused on right now. One is setting up the policy framework and the governance framework for how we're going to manage these systems. We have already convened our AI governance council — we've actually had two meetings — where we're starting to discuss how the agency is going to approach managing our inventory of AI use cases and the policies that we'll use. 

The second priority is really focused on our workforce. We need to make sure that our VA staff have the knowledge and the skills they need to be able to use these solutions effectively and understand what they’re capable of and also their limitations. We need to be able to bring in the right sort of talent to be able to buy and build these sorts of solutions. 

Third, we’re working on our infrastructure [to] make sure that we have the technical infrastructure in place for VA to actually either build or, in some cases, just buy and run AI solutions. 

Then, finally, we have a set of high-priority use cases that we’re really leaning into. This was one of the things that was specifically called out to the VA in the executive order, which was basically to run a couple of pilots — we call them tech sprints — on AI.

FS: I would definitely love to hear some insights from you personally about some challenges you’re anticipating with artificial intelligence, especially as you’ve referenced that the VA has already been using AI.

CW: I think one of the challenges right now is that most of the AI use cases are built in a very separate way from the rest of our computing systems. Take a predictive model: it takes a set of inputs and then generates a prediction, which is typically a number. But how to actually integrate that prediction into a system that somebody's already using is a challenge we see with most of these systems.

In my opinion, integrating AI with more traditional types of software is going to be one of the biggest challenges of the next 10 years. VA has over a thousand systems, and to really leverage these tools effectively, you'd ideally like to see these capabilities integrated tightly with those systems so that it's all one workflow, and the AI appears naturally as something that can assist the person with the task they're trying to achieve, as opposed to something in a different window that they've got to flip back and forth between. 

I feel like right now, we're in that awkward stage where most of these tools are a different window … where there's a lot of flipping back and forth between tools and figuring out how best to integrate those AI tools with the more traditional systems. I think that's still a relatively unfigured-out problem. Especially if you think of a place like VA, where we have a lot of legacy systems, things that have been built over the past several decades, updating those is often not the easiest thing. So I think it really speaks to the importance of modernizing our software systems to make them easier to change and more flexible, so that we can add things like AI or other enhancements.
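The integration gap Worthington describes, where a model turns inputs into a single score that then has to surface inside a tool someone is already using rather than in a separate window, can be sketched in a few lines. This is purely illustrative: the function and field names (`risk_score`, `triage_queue`) are hypothetical and do not reflect any actual VA system or model.

```python
def risk_score(features: dict) -> float:
    """Stand-in for a predictive model: inputs in, one number out.

    A real deployment would call a trained model or a model-serving
    endpoint; a toy weighted sum keeps the sketch self-contained.
    """
    weights = {"age": 0.01, "prior_visits": 0.05}
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def triage_queue(patients: list[dict]) -> list[dict]:
    """Existing workflow with the model's output embedded in-line.

    The score is attached to each record and used to order the queue,
    so the user sees the model's contribution inside the screen they
    already work in instead of switching to a separate AI tool.
    """
    for patient in patients:
        patient["score"] = risk_score(patient["features"])
    return sorted(patients, key=lambda p: p["score"], reverse=True)

patients = [
    {"id": "a", "features": {"age": 70, "prior_visits": 2}},
    {"id": "b", "features": {"age": 40, "prior_visits": 10}},
]
print([p["id"] for p in triage_queue(patients)])  # highest-risk first
```

The design point is the second function: the prediction is consumed inside an existing workflow step rather than presented as a standalone output, which is the "one workflow" integration Worthington contrasts with the flip-between-windows pattern.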

The post VA’s technical infrastructure is ‘on pretty good footing,’ CAIO and CTO says appeared first on FedScoop.

Ernst seeks information about SBA's artificial intelligence use cases, IT work
https://fedscoop.com/ernst-seeks-sba-ai-use-case-it-information/
Fri, 31 May 2024 21:24:47 +0000

In a letter, the Senate Republican questioned why the SBA hadn't disclosed artificial intelligence uses in its inventory, alleging the agency was out of compliance.

Sen. Joni Ernst, R-Iowa, is seeking information about the Small Business Administration’s IT investments and alleged undisclosed artificial intelligence use cases.

In a letter dated May 9 and made public this week, Ernst primarily requested details about how the SBA is managing IT investments through its IT Working Capital Fund, which the Iowa Republican said it hasn’t used appropriately. But she also probed the agency for details about its AI use cases, alleging the SBA had uses it hadn’t reported publicly in its annual inventory.

“In a recent interview, you stated that the SBA has embraced AI. Despite this, the SBA has not been transparent and reports that it has not used AI,” wrote Ernst, ranking member of the Senate Committee on Small Business and Entrepreneurship. 

AI use case inventories, which were required initially under a Trump-era executive order and later enshrined into statute, are intended to provide information about agency uses of the technology in disclosures posted on their websites. 

However, Stanford research, a Government Accountability Office review, and FedScoop reporting have found that AI inventories have lacked consistency and, in some cases, have omitted uses that should be made public. The Biden administration has recently expanded reporting requirements for those inventories and is looking to improve them.

While the SBA’s AI use case inventory currently shows no uses of the technology, Ernst cited several instances in which the agency had publicly touted AI use cases at the agency. 

She highlighted a May 2023 press release that stated “SBA will use advanced data analytics, third party data checks, and artificial intelligence tools for fraud review on all loans in the 7(a) and 504 Loan Programs prior to approval, starting August 1, 2023.” 

Ernst also pointed to a June 2023 press release that said the agency had used “several tools, including first-of-its-kind artificial intelligence,” to block millions of applications for pandemic relief that were ineligible, duplicative, or attempts at fraud.

Beyond the IT investment information and AI disclosures, Ernst requested details about how SBA planned to use its IT Working Capital Fund to improve its score under the Federal Information Technology Acquisition Reform Act (FITARA).

Ernst said despite the establishment of the fund — which was created under the Modernizing Government Technology Act that became law in 2017 — SBA “has had declining performance in its efforts to manage IT and implement” FITARA. In the past three years, the agency hasn’t achieved higher than a “C” on its FITARA score, which tracks agency IT modernization progress.

The SBA confirmed to FedScoop that it had received the letter but didn’t provide further comment. Ernst had requested a response by May 23.
