AI Safety Institute Archives | FedScoop
https://fedscoop.com/tag/ai-safety-institute/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, exchange best practices and identify how to achieve common goals.

NIST would 'have to consider' workforce reductions if appropriations cut goes through
https://fedscoop.com/nist-budget-cuts-ai-safety-institute/
Fri, 24 May 2024
Director Laurie Locascio said the agency is "fully on track" to meet its AI executive order requirements, but proposed cuts loom over its work.

Recent reductions to the National Institute of Standards and Technology’s budget have forced the agency’s chief to do some “cutting to the bone,” though the workforce has so far been protected. That could change if another proposed cut goes through. 

During a House Science, Space and Technology Committee hearing Wednesday, ranking member Zoe Lofgren, D-Calif., asked NIST Director Laurie Locascio if a 6% cut, proposed by Republicans on the House Appropriations Committee, would result in staff reductions.

“We will have to look at that, for sure. Yes, we will have to consider that,” Locascio said. “It was said that we were lean and mighty, and we’re proud of that — we are lean and mighty and we’ve worked very hard to be the best bang for your buck. … But it really does cut into the bone when we have to get into these kind of deep cuts.”

In response to NIST’s fiscal year 2024 cuts, Locascio said the agency was forced to “stop hiring and filling gaps,” noting specific pauses in adding to its CORE standards program, building out new electric vehicle standards and pursuing new capacity for clinical and biological standards.

“It really put a big halt on the momentum moving forward in several critical areas,” she said.

Financial uncertainties notwithstanding, the agency has been able to push forward in its artificial intelligence work. In response to questioning from committee Chair Frank Lucas, R-Okla., about NIST’s progress on President Joe Biden’s AI executive order, Locascio said the agency is “on target to meet all” of the EO’s deadlines, pointing to recent publications on synthetic content, a draft plan for international AI standards and a vision paper for the AI Safety Institute.  

The AI Safety Institute, which last month added five members to its executive leadership team, drew plenty of interest from committee members during Wednesday’s hearing. Reps. Suzanne Bonamici, D-Ore., and Gabe Amo, D-R.I., both asked Locascio how the scope of the AI Safety Institute might be scaled back if funding for the group remains low.

NIST is currently spending $6 million on the institute, Locascio said, but it will be “very, very tough” to continue its work on developing guidelines, evaluating models and engaging in research absent additional funding.

“We are fully on track to meet the president’s executive order requirements and stand up the AI Safety Institute,” Locascio added. “But so much more is asked of us and we don’t want to let down the country and we definitely are working as hard as we can to do what we can with the money that we have. We can do more with more.”

Rep. Val Foushee, D-N.C., meanwhile, expressed concerns about the “ambiguities in the scope and direction” of the AI Safety Institute, as well as whether it would focus too much on the technology’s existential threats as opposed to the “concrete tangible harms confronting us right now.”

“The AI Safety Institute is going to be focused very clearly on safety science,” Locascio said, adding that the group will also be “working with the international community and then doing testing of large language models to carry out testing and evaluation to make sure that they’re safe for use. … I can also promise you that … everything that we do will be science based.”

New Commerce strategy document points to the difficult science of AI safety
https://fedscoop.com/new-commerce-strategy-document-points-to-the-difficult-science-of-ai-safety/
Tue, 21 May 2024
The Biden administration seeks international coordination on critical AI safety challenges.

The Department of Commerce on Tuesday released a new strategic vision on artificial intelligence and unveiled more detailed plans about its new AI Safety Institute. 

The document, which focuses on developing a common understanding of and practices to support AI security, comes as the Biden administration seeks to build international consensus on AI safety issues. 

AI researchers continue to debate and study the potential risks of the technology, which include bias and discrimination concerns, privacy and safety vulnerabilities, and more far-reaching fears about so-called general artificial intelligence. In that vein, the strategy points to myriad definitions, metrics, and verification methodologies for AI safety issues. In particular, the document discusses developing ways of detecting synthetic content, model security best practices, and other safeguards.

It also highlights steps that the AI Safety Institute, which is housed within Commerce’s National Institute of Standards and Technology, might take to help promote and evaluate more advanced models, including red-teaming and A/B testing. Commerce expects NIST’s labs, which still face ongoing funding challenges, to conduct much of this work. 

“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety,” Commerce Secretary Gina Raimondo said in a statement. “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

The AI Safety Institute is also looking at ways to support the work of AI safety evaluations within the broader community, including through publishing guidelines for developers and deployers and creating evaluation protocols that could be used by, for instance, third-party independent evaluators. Eventually, the institute hopes to create a “community” of evaluators and lead an international network on AI safety. 

The release of the strategy is only the latest step taken by the Commerce Department, which is leading much of the Biden administration’s work on emerging technology. 

Earlier this year, the AI Safety Institute announced the creation of a consortium to help meet goals in the Biden administration’s executive order on the technology. In April, the Commerce Department added five new people to the AI Safety Institute’s executive leadership team.

That same month, Raimondo signed a memorandum of understanding with the United Kingdom focused on artificial intelligence. This past Monday, the U.K.’s technology secretary said that country’s AI Safety Institute would open an outpost in the Bay Area, its first overseas office. 

NIST launches GenAI evaluation program, releases draft publications on AI risks and standards
https://fedscoop.com/nist-launches-genai-evaluation-program-releases-draft-ai-publications/
Mon, 29 Apr 2024
The actions were among several announced by the Department of Commerce at the roughly six-month mark after Biden’s executive order on artificial intelligence.

The National Institute of Standards and Technology announced a new program to evaluate generative AI and released several draft documents on the use of the technology Monday, as the government hit a milestone on President Joe Biden’s AI executive order.

The Department of Commerce’s NIST was among multiple agencies on Monday that announced actions they’ve taken that correspond with the October order at the 180-day mark since its issuance. The actions were largely focused on mitigating the risks of AI and included several actions specifically focused on generative AI.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Commerce Secretary Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

Among the four documents released by NIST on Monday was a draft version of a publication aimed at helping identify generative AI risks and strategies for using the technology. That document will serve as a companion to its already-published AI risk management framework, as outlined in the order, and was developed with input from a public working group with more than 2,500 members, according to a release from the agency.

The agency also released a draft of a companion resource to its Secure Software Development Framework that outlines software development practices for generative AI tools and dual-use foundation models. The EO defined dual-use foundation models as those that are “trained on broad data,” are “applicable across a wide range of contexts,” and “exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters,” among other things. 

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology, said in a statement.

NIST also released draft documents on reducing risks of synthetic content — that which was AI-created or altered — and a plan for developing global AI standards. All four documents have a comment period that ends June 2, according to the Commerce release.

Notably, the agency also announced its “NIST GenAI” program for evaluating generative AI technologies. According to the release, that will “help inform the work of the U.S. AI Safety Institute at NIST.” Registration for a pilot of those evaluations opens in May.

The program will evaluate generative AI with a series of “challenge problems” that will test the capabilities of the tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,” the release said. “One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”

The release and focus on generative AI comes as other agencies similarly took action Monday on federal use of such tools. The Office of Personnel Management released its guidance for federal workers’ use of generative AI tools and the General Services Administration released a resource guide for federal acquisition of generative AI tools. 

Commerce adds five members to AI Safety Institute leadership
https://fedscoop.com/commerce-adds-to-ai-safety-institute-leadership/
Wed, 17 Apr 2024
The new AI Safety Institute executive leadership team members include researchers and current administration officials.

The Department of Commerce has added five people to the AI Safety Institute’s leadership team, including current administration officials, a former OpenAI manager, and academics from Stanford and the University of Southern California.

In a statement announcing the hires Tuesday, Commerce Secretary Gina Raimondo called the new leaders “the best in their fields.” They join the institute’s director, Elizabeth Kelly, and chief technology officer, Elham Tabassi, who were named in February. The new leaders are:

  • Paul Christiano, founder of the nonprofit Alignment Research Center who formerly ran OpenAI’s language model alignment team, will be head of AI safety;
  • Mara Quintero Campbell, who was most recently the deputy chief operating officer of Commerce’s Economic Development Administration, will be the acting chief operating officer and chief of staff;
  • Adam Russell, director of the AI division of USC’s Information Sciences Institute, will be chief vision officer;
  • Rob Reich, a professor of political science and associate director of the Institute for Human-Centered AI at Stanford, will be a senior advisor; and
  • Mark Latonero, who was most recently deputy director of the National AI Initiative Office in the White House Office of Science and Technology Policy, will be head of international engagement.

The AI Safety Institute, which is housed in the National Institute of Standards and Technology, is tasked with advancing the safety of the technology through research, evaluation and the development of guidelines for those assessments. That work includes actions outlined for NIST in President Joe Biden’s executive order on AI, such as developing guidance, red-teaming and watermarking synthetic content. 

In February, the AI Safety Institute launched a consortium, which will contribute to the agency’s work carrying out the executive order actions. That consortium is made up of more than 200 stakeholders, including academic institutions, unions, nonprofits, and other organizations. Earlier this month, the department also announced a partnership with the U.K. to have their AI Safety bodies work together.

“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy,” said Laurie Locascio, NIST’s director and undersecretary of commerce for standards and technology. “They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”

NIST seeks participants for new artificial intelligence consortium
https://fedscoop.com/nist-seeks-ai-consortium-participants/
Thu, 02 Nov 2023
The National Institute of Standards and Technology is looking to collaborate with nonprofits, academia, tech companies and other government entities to help promote the responsible use of AI.

The Department of Commerce’s National Institute of Standards and Technology is looking for collaborators to be part of a newly announced AI Safety Institute Consortium following the release of the Biden administration’s executive order on the technology.

In a post to the Federal Register and a corresponding press release Thursday, NIST invited interested organizations to write letters describing their expertise in developing or deploying trustworthy AI, and/or creating models or products that support trustworthy AI.

The agency called the consortium a “core element of the new NIST-led U.S. AI Safety Institute,” which was announced Wednesday at the U.K. AI Safety Summit 2023, and said the group would be essential to its efforts to work with stakeholders to carry out its new responsibilities under the administration’s AI executive order (EO 14110). 

The order, among other things, requires that NIST develop a companion resource to its AI Risk Management Framework that’s focused on generative AI, create guidance on differentiating between human and AI-generated content, and establish benchmarks for AI evaluation and auditing. 

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” NIST Director and Under Secretary of Commerce for Standards and Technology Laurie E. Locascio said in a release.

The consortium, NIST said in a frequently asked questions page, will help establish “a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.”

NIST said the consortium’s activities will begin after enough organizations have completed and signed letters of interest that meet all the requirements, but not earlier than Dec. 4. It will also hold a workshop for organizations interested in participating on Nov. 17.
