responsible AI Archives | FedScoop
https://fedscoop.com/tag/responsible-ai/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

AI transparency creates 'big cultural challenge' for parts of DHS, AI chief says
https://fedscoop.com/ai-transparency-creates-big-cultural-challenge-for-parts-of-dhs-ai-chief-says/
Wed, 20 Mar 2024 16:25:46 +0000

Transparency around AI may result in issues for DHS elements that are more discreet in their operations and the information they share publicly, CIO Eric Hysen said.

As the Department of Homeland Security ventures deeper into the adoption of artificial intelligence, doing so transparently and responsibly in line with policies laid out by the Biden administration, that openness is likely to create friction for some of the department's elements that don't typically operate in such a public manner, according to DHS's top AI official.

Eric Hysen, CIO and chief AI officer for DHS, said Tuesday at the CrowdStrike Gov Threat Summit that “transparency and responsible use [of AI] is critical to get right,” especially for applications in law enforcement and national security settings where the “permission structure in the public eye, in the public mind” faces a much higher bar.

But that also creates a conundrum for those DHS elements that are more discreet in their operations and the information they share publicly, Hysen acknowledged.

“What’s required to build and maintain trust with the public in our use of AI, in many cases, runs counter to how law enforcement and security agencies generally tend to operate,” he said. “And so I think we have a big cultural challenge in reorienting how we think about privacy, civil rights, transparency as not something that we do but that we tack on” to technology as an afterthought, but instead “something that has to be upfront and throughout every stage of our workplace.”

While President Joe Biden’s AI executive order gave DHS many roles in leading the development of safety and security in the nation’s use of AI applications, internally, Hysen said, the department is focused on “everything from using AI for cybersecurity to keeping fentanyl and other drugs out of the country or assisting our law enforcement officers and investigators in investigating crimes and making sure that we’re doing all of that responsibly, safely and securely.”

Hysen’s comments came a day after DHS on Monday published its first AI roadmap, spelling out the agency’s current use of the technology and its plans for the future. Responsible use of AI is a key part of the roadmap, which points to policies DHS issued in 2023 promoting transparency and responsibility in the department’s AI adoption and adds that “[a]s new laws and government-wide policies are developed and there are new advances in the field, we will continue to update our internal policies and procedures.”

“There are real risks to using AI in mission spaces that we are involved in. And it’s incumbent on us to take those concerns incredibly seriously and not put out or use new technologies unless we are confident that we are doing everything we can, even more than what would be required by law or regulation, to ensure that it is responsible,” Hysen said, adding that his office worked with DHS’s Privacy Office, the Office for Civil Rights and Civil Liberties and the Office of the General Counsel to develop those 2023 policies.

To support the responsible development and adoption of AI, Hysen said DHS is in the midst of hiring 50 AI technologists to stand up a new DHS AI Corps, which the department announced last month.

“We are still hiring if anyone is interested,” Hysen said, “and we are moving aggressively to expand our skill sets there.”

NIST releases expanded artificial intelligence risk management framework draft
https://fedscoop.com/nist-ai-risk-management-framework-second-draft/
Fri, 19 Aug 2022 19:31:24 +0000

The second draft contains further details on developing trustworthy and responsible AI systems and will be finalized in January 2023.

The National Institute of Standards and Technology released an expanded second draft of its artificial intelligence risk management framework with more details on developing trustworthy and responsible AI systems, a spokesperson said Friday.

NIST consulted experts, held discussions and workshops, and solicited comments before clarifying AI Risk Management Framework core outcomes, adding an audience section tied to the Organisation for Economic Co-operation and Development AI system life cycle, and explaining how AI and traditional software risks differ within the guidance.

The OECD AI system life cycle is part of a classification framework developed by the intergovernmental organization to help policymakers and regulators assess opportunities and risks that different types of AI systems present.

The latest iteration follows a first draft, released in March for voluntary use, in which NIST recognized for the first time that a socio-technical approach to building and deploying AI systems is needed. NIST plans to officially publish AI RMF 1.0 in January 2023.

NIST’s second draft simplifies the risk section, which explains how agencies and other organizations can establish or reconfigure their risk thresholds, as well as the trustworthy AI characteristics and categories.

In addition to spelling out the AI RMF’s benefits, the latest draft invites evaluations of its own effectiveness and contributions of use cases, which would shed light on how risk is being managed in specific sectors or applications.

Along with the second draft, NIST released a new playbook that recommends actions framework users can take to ensure trustworthiness in AI systems’ design, development, deployment and use.

“We include guidance for two of the four functions in the AI RMF and are working on the other two functions,” a NIST spokesperson told FedScoop. “Stakeholder feedback on the early draft will help us as we complete the draft of this online resource.”

Public comments are due via email to AIframework@nist.gov by Sept. 29, 2022, or respondents can save their feedback for a third workshop Oct. 18 and 19, 2022.

Pentagon unveils long-awaited plan for implementing ‘responsible AI’
https://fedscoop.com/pentagon-unveils-long-awaited-plan-for-implementing-responsible-ai/
Wed, 22 Jun 2022 20:37:46 +0000

This new pathway makes DOD’s responsible AI policy tractable for implementation, according to its second in command.

Deputy Secretary of Defense Kathleen Hicks signed the Responsible Artificial Intelligence Strategy and Implementation Pathway (RAI S&I pathway) on Tuesday, marking a highly anticipated next step in the Defense Department’s implementation of the AI Ethical Principles it adopted more than two years ago.

The 47-page document directs the sprawling Pentagon’s strategic approach for operationalizing those foundational principles and, more broadly, communicates a framework for how DOD will deliberately leverage AI in a lawful, ethical and accountable manner.

“It is imperative that we establish a trusted ecosystem that not only enhances our military capabilities but also builds confidence with end-users, warfighters, the American public, and international partners. The pathway affirms the department’s commitment to acting as a responsible AI-enabled organization,” Hicks said in a statement shared with FedScoop.

Modern computer systems rely on AI to perform tasks that typically demand at least some human intelligence. Though it’s not new, “technological breakthroughs in the last decade have drastically changed the national security landscape,” Hicks noted in her foreword to the pathway. For that and other reasons, the Pentagon has been increasingly deploying AI in recent years to enable a wide range of functions both on and off the battlefield.

But along with the heaps of benefits the technology can provide, it also holds the potential to introduce high-risk, unintended consequences if not used carefully.

“As the DOD embraces AI, it remains focused on the imperative of harnessing this technology in a manner consistent with our national values, shared democratic ideals, and our military’s steadfast commitment to lawful and ethical behavior,” officials wrote in the new pathway.

After consulting with leading AI experts for more than a year, the Pentagon in February 2020 officially adopted a series of ethical principles to govern its use of the technology based on the recommendations received. DOD reaffirmed its commitment to the principles in May 2021 and published six foundational tenets that serve as priority areas to guide responsible AI implementation across all its components. 

The tenets include: RAI governance, warfighter trust, AI product and acquisition lifecycle, requirements validation, responsible AI ecosystem, and AI workforce.

This new RAI S&I pathway is founded on and organized around those tenets — and according to Hicks, it “makes [DOD’s] RAI policy tractable for implementation.”

Notably, the pathway also adds new, brief goals to each tenet to more deeply communicate the department’s desired result in each priority area. Under “RAI governance,” for example, the pathway directs officials to modernize governance structures and processes that allow for continuous oversight of DOD’s AI use, and to produce “clear mechanisms” to support users and developers in their RAI implementation as well as provide them with a means to report potential concerns. 

DOD’s release of this pathway comes on the heels of a significant structural shakeup that placed a number of its technology-driving components under a newly established Chief Digital and Artificial Intelligence Office (CDAO). In the document, officials outline the roles of that office and other DOD components as they complete the associated work. 

The CDAO’s recently named RAI Chief Diane Staheli will steer and directly support DOD’s ongoing implementation effort, providing day-to-day expertise to those involved.

“Ultimately, DOD cannot maintain its competitive advantage without transforming itself into an AI-ready and data-centric organization, with RAI as a prominent feature,” officials wrote.
