AI executive order Archives | FedScoop https://fedscoop.com/tag/ai-executive-order/

House bill calls on CISA to form AI task force https://fedscoop.com/house-bill-calls-on-cisa-to-form-ai-task-force/ Wed, 15 May 2024 16:01:29 +0000 The legislation from Reps. Carter and Thompson would require the cyber agency to create an internal task force focused on safety and security concerns posed by AI.

Two Democrats on the House Homeland Security Committee are calling on the Cybersecurity and Infrastructure Security Agency to create an internal task force to address safety and security concerns presented by artificial intelligence.

The CISA Securing AI Task Force Act, introduced Tuesday by Reps. Troy Carter, D-La., and Bennie Thompson, D-Miss., would require the agency’s director to assemble an AI-focused task force, made up of personnel across CISA’s offices and divisions, within one year of the bill’s enactment. 

That task force would be charged with coordinating CISA directives called out in President Joe Biden’s AI executive order governing use of the technology. The EO specifically calls on CISA to coordinate with federal agencies on red-teaming for generative AI.

“This Task Force will enhance the safe and secure design, development, adoption, and deployment of AI across critical sectors by bringing together diverse expertise within CISA,” Reps. Carter and Thompson said in a statement.

Following the formation of the CISA AI group, members would be tasked with evaluating agency security initiatives, guidance and programs dealing with the technology, providing recommendations for changes as necessary.

The task force would also advise stakeholders on cyber risks tied to AI-based software and coordinate the implementation of secure AI products. Recommendations to CISA’s director on related initiatives would also be expected from the task force, as would support for the publication of the agency’s AI use case inventory. 

Carter, a member of the Cybersecurity and Infrastructure Protection Subcommittee, and Thompson, ranking member on the House Homeland Security Committee, said that as AI evolves and is increasingly integrated into the everyday lives of Americans, this bill underlines a “commitment to proactive risk mitigation and preparedness.” 

“The CISA Securing AI Task Force Act will strengthen America’s cybersecurity framework, safeguarding against emerging threats and ensuring the responsible advancement of AI technologies,” the lawmakers said.

The legislation comes months ahead of the November presidential election, which could have substantial implications for the cyber agency. In a February interview with Politico, Thompson expressed concern about CISA’s future in the event of a second term for Donald Trump, saying that the former president “politicized the national security apparatus” and represents “a threat to CISA” and to democracy.

The CAIO’s role in driving AI success across the federal government https://fedscoop.com/the-caios-role-in-driving-ai-success-across-the-federal-government/ Tue, 07 May 2024 18:55:12 +0000 In this commentary, former federal AI leaders Lt. Gen. Jack Shanahan and Joel Meyer share five actions newly appointed chief AI officers should take to set the stage for the successful adoption of AI.

Artificial intelligence isn’t just a buzzword — it’s a revolution transforming societies and the backbone of both private and public sector innovation.

While federal agencies have lagged commercial industry in recognizing AI’s potential impacts and adapting accordingly, the U.S. government is now rushing to catch up. On March 28, the White House Office of Management and Budget released its new AI governance memo as a follow-up to the October 2023 White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and federal agencies have completed all required actions to date under the executive order on schedule.

As required by the executive order, all federal agencies must now designate a Chief AI Officer (CAIO) to coordinate their agency’s use of AI, promote AI innovation in their agency, and manage risks from their agency’s use of AI. As a consequence, the government is looking for 400 CAIOs and many federal departments and agencies have already named one.

The creation of CAIO positions is a significant step toward an AI-enabled federal government. However, it presents challenges akin to those faced in the private sector. To navigate these challenges successfully, CAIOs should take five immediate actions to set the stage for success:

Lead the Mission: CAIOs must articulate a clear vision for AI adoption within their agencies, ensuring alignment and serving as the focal point for implementing AI priorities. The Chief AI Officer should report directly to the department or agency head to demonstrate that the CAIO has the agency head’s full-throated support.

Balance Innovation and Risk: Many government functions are considered no-fail missions—protecting the nation, providing uninterrupted financial and medical benefits, securing domestic and international travel, building weapon systems, and serving as the nation’s eyes and ears through intelligence collection and analysis. Even seemingly small error rates may be intolerable. Yet with AI, risk aversion offers a path to stagnation and obsolescence. CAIOs should fight to strike a balance between each agency’s legitimate concerns about risks and the imperative to accelerate AI adoption and integration.

Quick Wins and Strategy: CAIOs should identify low-hanging fruit that, with focused senior-level attention and a burst of resources, can deliver demonstrable outcomes that are clearly AI-driven. This creates a virtuous cycle of success that opens the aperture for the more difficult and ambitious work to come. AI pilots can be chosen thoughtfully to demonstrate hypotheses that can then be affirmed in each department’s AI strategy. These quick wins can build momentum for broader AI strategy implementation.

Budgeting and Procurement: The budgets that CAIOs are working with now were likely built in early 2022 before large language models or generative AI were widely available. CAIOs should work with agency chief financial officers and department comptrollers to identify current-year funds for reprogramming. At the same time, they need to shape future year budgets in ways that reflect the required infusion of resources in support of the entire AI lifecycle.

Yet even when funds are identified, procurement processes often move slower than the pace of technology — a product on the cutting edge today may be on the path to obsolescence tomorrow. CAIOs should work with acquisition and contracting officials to take full advantage of extant authorities while seeking new and more flexible authorities to accelerate AI procurement.

Talent Acquisition: The scarcity of AI talent necessitates creative approaches to recruitment and retention within the public sector. CAIOs should push to hire AI experts directly, but to move faster they should also hire outside AI experts for temporary assignments through pathways such as fellowships from corporations, think tanks, and academia, or in excepted service or special government employee roles. CAIOs can pursue a strategy of establishing a centralized AI talent hub that the rest of the department or agency can access, or of placing talent in key directorates and offices that are leveraging AI. A blend of different human capital solutions will help accelerate AI adoption across the government.

These strategies are not only aimed at integrating AI into federal operations but also at leveraging its potential to enhance public service delivery. The CAIO’s role is pivotal in this process, requiring a blend of visionary leadership, strategic planning, and operational acumen.

The experiences of the Defense Department’s Joint AI Center and Chief Digital and AI Office and the Department of Homeland Security’s AI Task Force exemplify the multifaceted opportunities and challenges AI presents. These initiatives highlighted the necessity for a centralized strategy to provide direction, coupled with the flexibility to foster innovation and experimentation within a decentralized framework. Absent the proper balance between centralization and decentralization, one of two things will happen: AI will never scale beyond pilot projects — overly decentralized — or the end users’ needs will be marginalized to the point of failure — overly centralized. The balancing act between rapid technological adoption and the careful management of associated risks underscores the complex landscape that CAIOs navigate.

The decision to institutionalize the role of CAIOs demonstrates a clear acknowledgment of AI’s strategic significance. This action signifies a deeper commitment to keeping the United States at the forefront of technological innovation, emphasizing the use of AI to improve public service delivery, enhance operational efficiency, and safeguard national interests. As we navigate this still-uncharted territory, leadership, innovation, and responsible governance will be essential in realizing the full promise of AI within the federal realm. CAIOs will play an indispensable role in shaping the government’s AI-enhanced future.

Joel Meyer served as the Deputy Assistant Secretary of Homeland Security for Strategic Initiatives in the Biden Administration, where he drove the creation of DHS’s Artificial Intelligence Task Force and the Third Quadrennial Homeland Security Review. He has led public sector businesses at three artificial intelligence technology startups, including currently serving as President of Public Sector at Domino Data Lab, provider of the leading enterprise AI platform trusted by over 20% of the Fortune 100 and major government agencies.

Lieutenant General John (Jack) N.T. Shanahan, United States Air Force, Retired, retired in 2020 after a 36-year military career. Jack served in a variety of operational and staff positions in various fields including flying, intelligence, policy, and command and control. As the first Director of the Algorithmic Warfare Cross-Functional Team (Project Maven), Jack established and led DoD’s pathfinder AI fielding program charged with bringing AI capabilities to intelligence collection and analysis. In his final assignment, he served as the inaugural Director of the U.S. Department of Defense Joint Artificial Intelligence Center.

Both authors serve as Commissioners on the Atlantic Council’s Commission on Software-Defined Warfare.

CISA’s chief data officer: Bias in AI models won’t be the same for every agency https://fedscoop.com/ai-models-bias-datasets-cisa-chief-data-officer/ Wed, 24 Apr 2024 20:24:19 +0000 Monitoring and logging are critical for agencies as they assess datasets, though “bias-free data might be a place we don’t get to,” the federal cyber agency’s CDO says.

As chief data officer for the Cybersecurity and Infrastructure Security Agency, Preston Werntz has made it his business to understand bias in the datasets that fuel artificial intelligence systems. With a dozen AI use cases listed in CISA’s inventory and more on the way, one especially conspicuous data-related realization has set in.

“Bias means different things for different agencies,” Werntz said during a virtual agency event Tuesday. Bias that “deals with people and rights” will be relevant for many agencies, he added, but for CISA, the questions become: “Did I collect data from a number of large federal agencies versus a small federal agency [and] did I collect a lot of data in one critical infrastructure sector versus in another?”

Internal gut checks of this kind are likely to become increasingly important for chief data officers across the federal government. CDO Council callouts in President Joe Biden’s AI executive order cover everything from the hiring of data scientists to the development of guidelines for performing security reviews.

For Werntz, those added AI-related responsibilities come with an acknowledgment that “bias-free data might be a place we don’t get to,” making it all the more important for CISA to “have that conversation with the vendors internally about … where that bias is.”

“I might have a large dataset that I think is enough to train a model,” Werntz said. “But if I realize that data is skewed in some way and there’s some bias … I might have to go out and get other datasets that help fill in some of the gaps.”
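
To make that concrete, the kind of skew check Werntz describes can start as a simple representation report over the training records. The sketch below is illustrative only, assuming a hypothetical pandas DataFrame with a "sector" column and an arbitrary 10% threshold; it is not CISA's actual tooling.

```python
# A minimal sketch of a dataset representation check. Column names, the
# toy data and the 10% threshold are hypothetical illustrations.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Share of training records contributed by each group."""
    counts = df[group_col].value_counts()
    return pd.DataFrame({"records": counts, "share": counts / counts.sum()})

# Toy stand-in for training data drawn from critical infrastructure sectors.
df = pd.DataFrame({"sector": ["energy"] * 40 + ["communications"] * 55 + ["water"] * 5})

report = representation_report(df, "sector")
# Sectors supplying under 10% of records may need supplemental datasets
# before a model trained on this data is trusted across all sectors.
print(report[report["share"] < 0.10])
```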

Given the high-profile nature of agency AI use cases — and critiques that inventories are not fully comprehensive or accurate — Werntz said there’s an expectation of additional scrutiny on data asset purchases and AI procurement. As CISA acquires more data to train AI models, that will have to be “tracked properly” in the agency’s inventory so IT officials “know which models have been trained by which data assets.” 

Adopting “data best practices and fundamentals” and monitoring for model drift and other potentially problematic AI behaviors is also top of mind for Werntz, who emphasized the importance of performance and security logging. That comes back to having an awareness of AI models’ “data lineage,” especially as data is “handed off between systems.” 
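
In practice, the drift monitoring Werntz mentions can be as simple as a recurring statistical comparison of a live feature against its training-time distribution. A minimal sketch, assuming SciPy is available and using synthetic data and an illustrative 1% significance threshold rather than anything CISA has published:

```python
# A minimal drift check: compare a live feature distribution against the
# training-time distribution with a two-sample Kolmogorov-Smirnov test.
# The threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values, live_values, p_threshold: float = 0.01) -> bool:
    """True when the test rejects 'both samples share a distribution'."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5_000)  # feature as seen at training time
live = rng.normal(0.4, 1.0, size=1_000)   # shifted feature in production

if drifted(train, live):
    # In a real pipeline this event would be logged alongside the model's
    # data lineage record, per the practices Werntz describes.
    print("Drift detected: review data lineage before trusting new outputs.")
```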

Beyond CISA’s walls, Werntz said he’s focused on sharing lessons learned with other agencies, especially when it comes to how they acquire, consume, deploy and maintain AI tools. He’s also keeping an eye out for technologies that will support data-specific efforts, including those involving tagging, categorization and lineage.

“There’s a lot of onus on humans to do this kind of work,” he said. “I think there’s a lot of AI technologies that can help us with the volume of data we’ve got.” CISA wants “to be better about open data,” Werntz added, making more of it available to security researchers and the general public. 

The agency also wants its workforce to be trained on commercial generative AI tools, with some guardrails in place. As AI “becomes more prolific,” Werntz said internal trainings are all about “changing the culture” at CISA to instill more comfort in working with the technology.

“We want to adopt this. We want to embrace this,” Werntz said. “We just need to make sure we do it in a secure, smart way where we’re not introducing privacy and safety and ethical kinds of concerns.” 

Scientists must be empowered — not replaced — by AI, report to White House argues https://fedscoop.com/pcast-white-house-science-advisors-ai-report-recommendations/ Tue, 23 Apr 2024 21:15:59 +0000 The upcoming report from the President's Council of Advisors on Science and Technology pushes for the “empowerment of human scientists,” responsible AI use and shared resources.

The team of technologists and academics charged with advising President Joe Biden on science and technology is set to deliver a report to the White House next week that emphasizes the critical role that human scientists must play in the development of artificial intelligence tools and systems.

The President’s Council of Advisors on Science and Technology voted unanimously in favor of the report Tuesday following a nearly hourlong public discussion of its contents and recommendations. The delivery of PCAST’s report will fulfill a requirement in Biden’s executive order on AI, which called for an exploration of the technology’s potential role in “research aimed at tackling major societal and global challenges.”

“Empowerment of human scientists” was the first goal presented by PCAST members, with a particular focus on how AI assistants should play a complementary role to human scientists, rather than replacing them altogether. The ability of AI tools to process “huge streams of data” should free up scientists “to focus on high-level directions,” the report argued, with a network of AI assistants deployed to take on “large, interdisciplinary, and/or decentralized projects.”

AI collaborations on basic and applied research should be supported across federal agencies, national laboratories, industry and academia, the report recommends. Laura H. Greene, a Florida State University physics professor and chief scientist at the National High Magnetic Field Laboratory, cited the National Science Foundation’s Materials Innovation Platforms as an example of AI-centered “data-sharing infrastructures” and “community building” that PCAST members envision. 

“We can see future projects that will include collaborators to develop next-generation quantum computing qubits, wholesale modeling, whole Earth foundation models” and an overall “handle on high-quality broad ranges of scientific databases across many disciplines,” Greene said.

The group also recommended that “innovative approaches” be explored on how AI assistance can be integrated into scientific workflows. Funding agencies should keep AI in mind when designing and organizing scientific projects, the report said.

The second set of recommendations from PCAST centered on the responsible and transparent use of AI, with those principles employed in all stages of the scientific research process. Funding agencies “should require responsible AI use plans from researchers that would assess potential AI-related risks,” the report states, matching the principles called out in the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

Eric Horvitz, chief scientific officer at Microsoft, said PCAST’s emphasis on responsible AI use means putting forward “our best efforts to making sure these tools are used in the best ways possible and keeping an eye on possible downsides, whether the models are open source or not open source models. … We’re very optimistic about the wondrous, good things we can expect, but we have to sort of make sure we keep an eye on the rough edges.”

The potential for identifying those “rough edges” rests at least partially in the group’s third recommendation of having shared and open resources. PCAST makes its case in the report for an expansion of existing efforts to “broadly and equitably share basic AI resources.” There should be more secure access granted to federal datasets to aid critical research needs, the report noted, with the requisite protections and guardrails in place.

PCAST members included a specific callout for an expansion of NSF’s National Secure Data Service Demonstration project and the Census Bureau’s Federal Statistical Research Data Centers. The National Artificial Intelligence Research Resource should also be “fully funded,” given its potential as a “stepping-stone for even more ambitious ‘moonshot’ programs,” the report said.

AI-related work from the scientists who make up PCAST won’t stop after the report is edited and posted online next week. Bill Press, a computer science and integrative biology professor at the University of Texas at Austin, said it’s especially important now in this early developmental stage for scientists to test AI systems and learn to use them responsibly. 

“We’re dealing with tools that, at least right now, are ethically neutral,” Press said. “They’re not necessarily biased in the wrong direction. And so you can ask them to check these things. And unlike human people who write code, these tools don’t have pride of ownership. They’re just as happy to try to reveal biases that might have occurred as they are to create them. And that’s where the scientists are going to have to learn to use them properly.”

AI transparency creates ‘big cultural challenge’ for parts of DHS, AI chief says https://fedscoop.com/ai-transparency-creates-big-cultural-challenge-for-parts-of-dhs-ai-chief-says/ Wed, 20 Mar 2024 16:25:46 +0000 Transparency around AI may result in issues for DHS elements that are more discreet in their operations and the information they share publicly, CIO Eric Hysen said.

As the Department of Homeland Security ventures deeper into the adoption of artificial intelligence — while doing so in a transparent, responsible way in line with policies laid out by the Biden administration — that push is likely to create friction for some of the department’s elements that don’t typically operate in such an open manner, according to DHS’s top AI official.

Eric Hysen, CIO and chief AI officer for DHS, said Tuesday at the CrowdStrike Gov Threat Summit that “transparency and responsible use [of AI] is critical to get right,” especially for applications in law enforcement and national security settings where the “permission structure in the public eye, in the public mind” faces a much higher bar.

But that also creates a conundrum for those DHS elements that are more discreet in their operations and the information they share publicly, Hysen acknowledged.

“What’s required to build and maintain trust with the public in our use of AI, in many cases, runs counter to how law enforcement and security agencies generally tend to operate,” he said. “And so I think we have a big cultural challenge in reorienting how we think about privacy, civil rights, transparency as not something that we do but that we tack on” to technology as an afterthought, but instead “something that has to be upfront and throughout every stage of our workplace.”

While President Joe Biden’s AI executive order gave DHS many roles in leading the development of safety and security in the nation’s use of AI applications, internally, Hysen said, the department is focused on “everything from using AI for cybersecurity to keeping fentanyl and other drugs out of the country or assisting our law enforcement officers and investigators in investigating crimes and making sure that we’re doing all of that responsibly, safely and securely.”

Hysen’s comments came a day after DHS published its first AI roadmap on Monday, spelling out the agency’s current use of the technology and its plans for the future. Responsible use of AI is a key part of the roadmap, which points to policies DHS issued in 2023 promoting transparency and responsibility in the department’s AI adoption and adds that “[a]s new laws and government-wide policies are developed and there are new advances in the field, we will continue to update our internal policies and procedures.”

“There are real risks to using AI in mission spaces that we are involved in. And it’s incumbent on us to take those concerns incredibly seriously and not put out or use new technologies unless we are confident that we are doing everything we can, even more than what would be required by law or regulation, to ensure that it is responsible,” Hysen said, adding that his office worked with DHS’s Privacy Office, the Office for Civil Rights and Civil Liberties and the Office of the General Counsel to develop those 2023 policies.

To support the responsible development and adoption of AI, Hysen said DHS is in the midst of hiring 50 AI technologists to stand up its new AI Corps, which the department announced last month.

“We are still hiring if anyone is interested,” Hysen said, “and we are moving aggressively to expand our skill sets there.”

Raimondo announces picks for U.S. AI Safety Institute’s director, CTO https://fedscoop.com/us-ai-safety-institute-usaisi-kelly-tabassi-raimondo-nist/ Wed, 07 Feb 2024 20:08:27 +0000 National Economic Council adviser Elizabeth Kelly will lead NIST’s new AI group, while Elham Tabassi is on board as chief technology officer.

The U.S. AI Safety Institute will be led by a key White House National Economic Council adviser, and an artificial intelligence official at the National Institute of Standards and Technology will also join the new group’s executive leadership team, Commerce Secretary Gina Raimondo announced Wednesday.

Elizabeth Kelly, special assistant to the president for economic policy at the NEC, will serve as the inaugural director of the USAISI, established under the NIST umbrella by President Joe Biden’s AI executive order. 

Kelly, who with the NEC helps guide the Biden administration’s financial regulation and technology policy, including AI, will be charged with “providing executive leadership, management, and oversight of the AI Safety Institute and coordinating with other AI policy and technical initiatives throughout the Department, NIST, and across the government,” per a Commerce Department press release.

Kelly was described in the release as a “driving force” behind Biden’s AI EO, taking the lead on domestic efforts to spur competition, protect privacy and back workers and consumers. 

The AI Safety Institute’s “ambitious mandate to develop guidelines, evaluate models, and pursue fundamental research will be vital to addressing the risks and seizing the opportunities of AI,” Kelly said in a statement. “I am thrilled to work with the talented NIST team and the broader AI community to advance our scientific understanding and foster AI safety. While our first priority will be executing the tasks assigned to NIST in President Biden’s executive order, I look forward to building the Institute as a long-term asset for the country and the world.”

The USAISI’s chief technology officer will be Elham Tabassi, NIST’s chief AI adviser. Tabassi led the development of NIST’s AI Risk Management Framework and also served as the associate director for emerging technologies in the agency’s Information Technology Laboratory. 

In her new role as CTO, Tabassi will oversee critical technical programs and “be responsible for shaping efforts at NIST and with the broader AI community to conduct research, develop guidance, and conduct evaluations of AI models including advanced large language models in order to identify and mitigate AI safety risks,” the release stated.

“The USAISI will advance American leadership globally in responsible AI innovations that will make our lives better,” Tabassi said in a statement. “We must have a firm understanding of the technology, its current and emerging capabilities, and limitations. NIST is taking the lead to create the science, practice, and policy of AI safety and trustworthiness. I am thrilled to be part of this remarkable team, leading the effort to develop science-based, and empirically backed guidelines and standards for AI measurement and policy.”

House Republicans scrutinize VA for lack of AI disclosures, inadequate contractor sanctions https://fedscoop.com/house-republicans-scrutinize-va-for-lack-of-ai-disclosures-inadequate-contractor-sanctions/ Tue, 30 Jan 2024 22:26:20 +0000 Members of the Veterans Affairs Technology Modernization subcommittee urged the department to offer disclosures for AI use and issue more “severe” consequences for contractors.

The Department of Veterans Affairs should provide disclosures to patients when the agency uses artificial intelligence to analyze sensitive information, and the VA should also be prepared to levy greater sanctions against contractors who misuse veterans’ data, House lawmakers recommended during a Monday hearing. 

The VA does not currently disclose the use of AI when the technology is used for diagnostic purposes in a health care setting, lawmakers noted during the House Veterans’ Affairs Subcommittee on Technology Modernization oversight hearing. Chairman Matt Rosendale, R-Mont., said that there is not a “good, consistent disclosure process that is being utilized and being signed off by our veterans,” a point that Dr. Gil Alterovitz, the director of the agency’s National Artificial Intelligence Institute and its chief AI officer, confirmed.

Alterovitz noted that the department is piloting “model cards,” which offer patients and providers information about the AI being applied to their care, along with informed consent forms given to patients when the tools are being researched in health care settings. 
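
In general form, a model card is a short, structured disclosure attached to a model. A minimal sketch of what such a structure might hold, with hypothetical fields loosely following the public model-card literature rather than the VA's actual pilot schema:

```python
# A minimal, hypothetical model card structure; field names follow the
# general model-card literature, not the VA's actual pilot.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = "All outputs reviewed by a clinician."

card = ModelCard(
    name="Surgical-risk language model (research pilot)",
    intended_use="Research only; not used operationally for pre-surgical care.",
    training_data="De-identified pharmacogenomic and phenotypic records.",
    known_limitations=["False positives possible", "Not clinically validated"],
)
print(card)
```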

“I would highly recommend that if that disclosure [about AI use] is going out and someone’s information is going to be analyzed by AI, that certainly the patient should be made aware of that,” Rosendale said. “It could present all types of issues going forward. If the groups that are doing all that analysis of what is and what is not acceptable, a disclosure at the very beginning would be a good place to start.”

In the VA’s use case inventory, the department cites the use of a large language model that can help predict risks patients might face before they enter surgery, as well as another tool to “optimize surgical outcomes.” While the department notes that the latter tool has not been applied to a patient’s pre-surgery period, the VA reports that “large amounts of pharmacogenomic and phenotypic data have been analyzed by machine learning/artificial intelligence and has produced interesting results” in various clinical settings.

Alterovitz responded to a question about ethical concerns from Rosendale, stating that the department is looking at “trustworthy AI,” and emphasized that the department is researching the surgical outcome LLM and not using it operationally. Rosendale raised more ethical concerns, citing false positives and restrictions on veterans’ freedoms, including gun ownership. 

However, in a different application, the VA is attempting to “extract signals” of suicidal risk from clinical notes through the use of natural language processing. The tool is used by the department in conjunction with current risk prediction methods.

Alterovitz reported that veterans sign a consent form when they are involved in a health care-centered process that utilizes an AI tool, and for AI tools used in operations, “generally there are tools used that have been publicized” on the VA’s use case inventory. 

“Everything that [Rosendale] said are concerns that need to be looked at,” Alterovitz said. “Where this uses AI is in the natural language processing, looking at those notes and extracting potential meaning out of it. There’s always a human in the loop that looks at the results. So this is a way to help them sift through a large amount of text.”
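
The shape of that workflow, with NLP surfacing candidates and a human reviewing every hit, is straightforward to sketch. The keyword patterns below are purely illustrative stand-ins for the VA's unspecified model, and nothing in the sketch is acted on without a reviewer:

```python
# A minimal human-in-the-loop sketch: pattern matching stands in for the
# VA's actual (unspecified) NLP model, and every hit goes to a human queue.
import re

# Hypothetical example phrases; a real system would use a trained model.
RISK_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"\bhopeless\b", r"\bno reason to live\b")]

def candidate_snippets(note: str) -> list[str]:
    """Snippets an NLP pass would surface for clinician review."""
    return [m.group(0) for pat in RISK_PATTERNS for m in pat.finditer(note)]

notes = [
    "Patient reports feeling hopeless at night.",
    "Routine follow-up; no concerns raised.",
]

review_queue = [(note, hits) for note in notes if (hits := candidate_snippets(note))]
# A clinician reviews every queued item; the tool only sifts the text.
for note, hits in review_queue:
    print(f"Flagged for review: {hits} in {note!r}")
```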

Rep. Keith Self, R-Texas, said he was “not satisfied” with the VA’s answers to questions about sanctions for contractors. 

The VA’s witnesses reported that the current sanction for contractors that accidentally or purposefully leak sensitive information from veterans’ medical records is the loss of their contracts. 

“A general contract acquisition answer is not satisfactory because of the importance, the potential devastating consequences of a breach of 1,100 petabytes of data, sensitive data,” Self said. “First, you’ve got to identify some sanctions and they’ve got to be fairly severe sanctions, and they’ve got to be in policy upfront. This is something you have got to settle in policy early. Frankly, in my mind, it is not going to be sufficient to say, ‘we’re going to cancel a contract.’”

Self also scrutinized the VA for its unclear count of AI use cases, noting that the agency reported 128 use cases to the subcommittee, 300 to the Senate and 21 during the hearing that had “advanced to implementation.”

Charles Worthington, the VA’s chief technology officer, responded by saying that the department’s use case inventory is a “work in progress” and that the inventory has been “at different points in time, created to comply” with memorandums from the White House Office of Management and Budget.

OMB seeks input on privacy impact assessments for AI use https://fedscoop.com/omb-seeks-input-on-privacy-impact-assessments-for-ai-use/ Fri, 26 Jan 2024 23:26:05 +0000 The request for information would inform potential updates to the Office of Management and Budget’s guidance for privacy risk assessments.

The White House Office of Management and Budget is looking for input on how federal agency privacy impact assessments could more effectively mitigate risks as technologies, such as artificial intelligence, become more advanced.

The request for information, which is required by President Joe Biden’s recent AI executive order, appeared on the Federal Register’s public inspection list Friday and is set for official publication Jan. 30. Comments are due within 60 days of publication.

OMB is asking specifically for comments on topics such as risks related to AI that agencies might consider when completing privacy impact assessments — which agencies use to analyze the handling of information — and updates OMB might make to guidance to improve how agencies address and mitigate those risks.

“Existing privacy risks are escalating, and new privacy risks are emerging,” OMB said in the request. “It is important to hear from the public as OMB considers what updates to PIA guidance may be necessary to ensure that PIAs continue to facilitate robust analysis and transparency about how agencies address these evolving privacy risks.”

Nuclear Regulatory Commission CIO David Nelson set to retire https://fedscoop.com/nuclear-regulatory-commission-cio-david-nelson-set-to-retire/ Wed, 24 Jan 2024 23:34:34 +0000 Scott Flanders, the NRC’s deputy chief information officer, will serve as the acting CIO and acting chief AI officer until a permanent one is selected.

The Nuclear Regulatory Commission’s chief information officer, David Nelson, will be retiring at the end of the week, according to an agency spokesperson. 

In an email to FedScoop, the NRC spokesperson said Nelson will be leaving the agency effective Jan. 26. Taking his place as acting chief AI officer and CIO is Scott Flanders, the commission’s current deputy CIO. 

Nelson was appointed as the regulatory agency’s CIO in 2016, leaving his previous position as CIO and director of the Office of Enterprise Information for the Centers for Medicare and Medicaid Services. 

Nelson was recently appointed as the NRC’s CAIO following President Joe Biden’s long-awaited executive order on AI. While the order did not name the NRC among the agencies required to eventually designate a CAIO, the commission previously told FedScoop that it was “assessing whether and how it applies.”

Additionally, the NRC spokesperson confirmed that Victor Hall, the deputy director of the Division of Systems Analysis in the Office of Nuclear Regulatory Research, serves as the responsible AI official under Executive Order 13960, issued by the Trump administration. The NRC was also exempted from that requirement as an independent regulatory agency.

Degree requirements are hurting government’s AI recruitment efforts, House lawmakers and experts say https://fedscoop.com/degree-requirements-hurting-gov-ai-recruitment-efforts/ Thu, 18 Jan 2024 17:12:31 +0000 Rep. Mace tells FedScoop that newly trained and upskilled workers without a four-year degree are often “more qualified” for federal AI jobs.

Federal employment standards for artificial intelligence-trained employees are burdensome and end up discouraging workers who are knowledgeable in the emerging tech from seeking such jobs, lawmakers and witnesses said during a House Cybersecurity, Information Technology and Government Innovation subcommittee hearing Wednesday. 

AI-trained employees who have been upskilled and certified through intensive training programs rather than earning a degree from a four-year institution can be considered unqualified to work for the federal government, according to testimony from Timi Hadra, an IBM client partner and the company’s senior state executive for West Virginia. 

Despite the call to action from the White House through the AI executive order, Hadra said that the government’s efforts so far to hire more talent from diverse educational backgrounds are “not enough.”

Subcommittee Chair Nancy Mace, R-S.C., said in an interview with FedScoop after the hearing that Hadra’s answer was illuminating.

“Hearing that testimony today and asking that question of IBM is certainly very helpful to understand what the real world and the reality is like, on the ground with tech companies that have these federal contracts,” Mace said. “If 20% of the workforce, or more, doesn’t have that four-year degree, it’s clearly hindering our ability to meet the demands that we have in the tech, cyber and innovation AI space.”

Hadra noted that IBM has a six-month curriculum for its cybersecurity apprenticeship program that trains employees in these disciplines. She said that the workers are “ready to hit the ground running on those programs, and because they don’t meet those minimum qualifications, we are not able to put them on that contract.”

Mace added that the more recently trained and upskilled employees could be “more qualified” than those who hold a degree because “they put that skillset into practice.” 

“We have a shortage of 700,000 cybersecurity workers across the private and public sectors,” Mace said during the hearing. “We know that our traditional education system doesn’t produce nearly enough degreed graduates in the field to fill the need. We also know that that shortfall would be much worse if not for the appearance of nimble educational alternatives. That includes short-term ‘boot camp’ programs that issue non-degree credentials like certifications and badges.”
