National Institute of Standards and Technology (NIST) Archives | FedScoop https://fedscoop.com/tag/national-institute-of-standards-and-technology-nist/

NIST would ‘have to consider’ workforce reductions if appropriations cut goes through https://fedscoop.com/nist-budget-cuts-ai-safety-institute/ Fri, 24 May 2024 Director Laurie Locascio said the agency is “fully on track” to meet its AI executive order requirements, but proposed cuts loom over its work.

Recent reductions to the National Institute of Standards and Technology’s budget have forced the agency’s chief to do some “cutting to the bone,” though the workforce has so far been protected. That could change if another proposed cut goes through. 

During a House Science, Space and Technology Committee hearing Wednesday, ranking member Zoe Lofgren, D-Calif., asked NIST Director Laurie Locascio if a 6% cut, proposed by Republicans on the House Appropriations Committee, would result in staff reductions.

“We will have to look at that, for sure. Yes, we will have to consider that,” Locascio said. “It was said that we were lean and mighty, and we’re proud of that — we are lean and mighty and we’ve worked very hard to be the best bang for your buck. … But it really does cut into the bone when we have to get into these kind of deep cuts.”

In response to NIST’s fiscal year 2024 cuts, Locascio said the agency was forced to “stop hiring and filling gaps,” noting specific pauses in adding to its CORE standards program, building out new electric vehicle standards and pursuing new capacity for clinical and biological standards.

“It really put a big halt on the momentum moving forward in several critical areas,” she said.

Financial uncertainties notwithstanding, the agency has been able to push forward in its artificial intelligence work. In response to questioning from committee Chair Frank Lucas, R-Okla., about NIST’s progress on President Joe Biden’s AI executive order, Locascio said the agency is “on target to meet all” of the EO’s deadlines, pointing to recent publications on synthetic content, a draft plan for international AI standards and a vision paper for the AI Safety Institute.  

The AI Safety Institute, which last month added five members to its executive leadership team, drew plenty of interest from committee members during Wednesday’s hearing. Reps. Suzanne Bonamici, D-Ore., and Gabe Amo, D-R.I., both asked Locascio how the scope of the AI Safety Institute might be scaled back if funding for the group remains low.

NIST is currently spending $6 million on the institute, Locascio said, but it will be “very, very tough” to continue its work on developing guidelines, evaluating models and engaging in research absent additional funding.

“We are fully on track to meet the president’s executive order requirements and stand up the AI Safety Institute,” Locascio added. “But so much more is asked of us and we don’t want to let down the country and we definitely are working as hard as we can to do what we can with the money that we have. We can do more with more.”

Rep. Val Foushee, D-N.C., meanwhile, expressed concerns about the “ambiguities in the scope and direction” of the AI Safety Institute, as well as whether it would focus too much on the technology’s existential threats as opposed to the “concrete tangible harms confronting us right now.”

“The AI Safety Institute is going to be focused very clearly on safety science,” Locascio said, adding that the group will also be “working with the international community and then doing testing of large language models to carry out testing and evaluation to make sure that they’re safe for use. … I can also promise you that … everything that we do will be science based.”

New Commerce strategy document points to the difficult science of AI safety https://fedscoop.com/new-commerce-strategy-document-points-to-the-difficult-science-of-ai-safety/ Tue, 21 May 2024 The Biden administration seeks international coordination on critical AI safety challenges.

The Department of Commerce on Tuesday released a new strategic vision on artificial intelligence and unveiled more detailed plans about its new AI Safety Institute. 

The document, which focuses on developing a common understanding of and practices to support AI security, comes as the Biden administration seeks to build international consensus on AI safety issues. 

AI researchers continue to debate and study the potential risks of the technology, which include bias and discrimination concerns, privacy and safety vulnerabilities, and more far-reaching fears about so-called artificial general intelligence. In that vein, the strategy points to the myriad definitions, metrics, and verification methodologies proposed for AI safety, and to the need to build consensus around them. In particular, the document discusses developing ways of detecting synthetic content, model security best practices, and other safeguards.
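
The strategy stops short of prescribing a particular detection algorithm. As a purely illustrative sketch of one approach from the research literature, a "greenlist" statistical watermark seeded on the preceding token can be checked with a z-test; the hashing scheme, greenlist fraction, and whitespace tokenization below are assumptions for the example, not anything the Commerce document specifies.

```python
import hashlib
import math

GREENLIST_FRACTION = 0.5  # assumed fraction of the vocabulary favored at generation time


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, seeded on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GREENLIST_FRACTION


def watermark_z_score(tokens: list[str]) -> float:
    """z-statistic against the null hypothesis that the text is unwatermarked.

    A large positive value suggests the generator systematically favored
    green-listed tokens, i.e. the text likely carries the watermark.
    """
    n = len(tokens) - 1  # number of (previous token, token) pairs
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREENLIST_FRACTION * n
    std = math.sqrt(n * GREENLIST_FRACTION * (1 - GREENLIST_FRACTION))
    return (green - expected) / std


sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(sample):.2f}")  # |z| well above ~4 would indicate a watermark
```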

It also highlights steps the AI Safety Institute, which is housed within Commerce’s National Institute of Standards and Technology, might take to help promote and evaluate more advanced models, including red-teaming and A/B testing. Commerce expects the labs of NIST — which is still facing ongoing funding challenges — to conduct much of this work.

“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety,” Commerce Secretary Gina Raimondo said in a statement. “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

The AI Safety Institute is also looking at ways to support the work of AI safety evaluations within the broader community, including through publishing guidelines for developers and deployers and creating evaluation protocols that could be used by, for instance, third-party independent evaluators. Eventually, the institute hopes to create a “community” of evaluators and lead an international network on AI safety. 

The release of the strategy is only the latest step taken by the Commerce Department, which is leading much of the Biden administration’s work on emerging technology. 

Earlier this year, the AI Safety Institute announced the creation of a consortium to help meet goals in the Biden administration’s executive order on the technology. In April, the Commerce Department added five new people to the AI Safety Institute’s executive leadership team.

That same month, Raimondo signed a memorandum of understanding with the United Kingdom focused on artificial intelligence. This past Monday, the U.K.’s technology secretary said the country’s AI Safety Institute would open an outpost in the Bay Area, its first overseas office.

NIST issues final guidance update for protecting sensitive information https://fedscoop.com/nist-issues-final-update-protecting-sensitive-information/ Tue, 14 May 2024 The publications are aimed at providing clearer and unambiguous guidance to private-sector partners, according to the agency.

Final versions of two publications that the National Institute of Standards and Technology issued Tuesday are aimed at helping contractors and other organizations protect and secure controlled unclassified information they handle.

The guidance comes after the agency solicited feedback on drafts of the documents last year, and clarifies previous NIST guidance that included language inconsistent with the agency’s source catalog of security and privacy controls. In a Tuesday release, NIST said that wording potentially created “ambiguity” and “uncertainty.”

“For the sake of our private sector customers, we want our guidance to be clear, unambiguous and tightly coupled with the catalog of controls and assessment procedures used by federal agencies,” Ron Ross, an author of the publications, said in the release. “This update is a significant step toward that goal.”

The two publications are Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations (Special Publication 800-171r3) and Assessing Security Requirements for Controlled Unclassified Information (SP 800-171Ar3). The latter is a companion publication to help people assess the requirements outlined in the former and includes updated assessment procedures and new examples of how to conduct those assessments, according to the release.

Controlled unclassified information, which includes things like intellectual property and employee health information, can be enticing for bad actors. “Systems that process, store and transmit CUI often support government programs involving critical assets, such as weapons systems and communications systems, which are potential targets for adversaries,” according to the release. 

In the release of the draft versions last year, Ross noted CUI had recently “been a target of state-level espionage.”

The updates take into account commenters’ interest in machine-readable formats of the guidance, like JSON and Excel, to make them easier to use and reference, according to the release.

“Providing the guidance in these additional formats will allow them to do that. It will help a wider group of users to understand the requirements and implement them more quickly and efficiently,” Ross said.
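
NIST’s release does not reproduce the schema here, so the following is a minimal sketch under the assumption of a simple JSON export; the field names are hypothetical and the requirement text is paraphrased, but it shows the kind of programmatic filtering machine-readable guidance enables.

```python
import json

# Hypothetical JSON export of SP 800-171r3 requirements. The actual NIST
# machine-readable releases may use a different schema (e.g., OSCAL-based).
REQUIREMENTS_JSON = """
[
  {"id": "03.01.01", "family": "Access Control",
   "statement": "Limit system access to authorized users (paraphrased)."},
  {"id": "03.04.01", "family": "Configuration Management",
   "statement": "Establish and maintain baseline configurations (paraphrased)."}
]
"""

requirements = json.loads(REQUIREMENTS_JSON)

# Filter requirements by family, something a flat PDF makes tedious.
access_control = [r for r in requirements if r["family"] == "Access Control"]
for req in access_control:
    print(f"{req['id']}: {req['statement']}")
```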

In addition to issuing the new publications, NIST said it plans to revise other publications related to CUI in “coming months.” Those updates will include publications on enhanced security requirements (SP 800-172) and assessments (SP 800-172A).

Bipartisan Senate bill on AI security would bolster voluntary cyber reporting processes https://fedscoop.com/senate-bill-on-ai-security-bolster-voluntary-cyber-reporting/ Thu, 02 May 2024 The Secure AI Act of 2024 from Sens. Warner and Tillis calls on NIST and CISA to update vulnerability databases and the NSA to launch an AI security center.

A bipartisan Senate bill released Wednesday would strengthen security measures around artificial intelligence through a series of actions, including an overhaul of cyber vulnerability tracking and a new public database for AI incident reports.

The Secure AI Act of 2024, introduced by Sens. Mark Warner, D-Va., and Thom Tillis, R-N.C., requires the National Institute of Standards and Technology to update the National Vulnerability Database (NVD) and the Cybersecurity and Infrastructure Security Agency to update the Common Vulnerabilities and Exposures (CVE) program, or create a new process, according to a summary of the bill.
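
For context, the NVD the bill would have NIST extend is already queryable programmatically. Below is a minimal sketch of a keyword search against its public CVE API; the endpoint and response fields follow NVD’s published 2.0 schema at the time of writing, and heavier use may require a free API key.

```python
import json
import urllib.request

# NVD CVE API 2.0 keyword search -- check nvd.nist.gov for current parameters.
URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
       "?keywordSearch=machine%20learning&resultsPerPage=5")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

for vuln in data.get("vulnerabilities", []):
    cve = vuln["cve"]
    # The first description entry is typically the English-language one.
    print(cve["id"], "-", cve["descriptions"][0]["value"][:100])
```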

Additionally, the bill would charge the National Security Agency with establishing an AI Security Center that would provide an AI research test-bed for private-sector and academic researchers, and develop guidance to prevent or mitigate “counter-AI techniques.”

“Safeguarding organizations from cybersecurity risks involving AI requires collaboration and innovation from both the private and public sector,” Tillis said in a press release. “This commonsense legislation creates a voluntary database for reporting AI security and safety incidents and promotes best practices to mitigate AI risks.” 

Under the legislation, CISA and NIST would have one year to develop and implement a voluntary database for tracking AI security and safety incidents, which would be available to the public. 

Similarly, NIST would only have 30 days after the enactment of this legislation to initiate a “multi-stakeholder process” to evaluate if the consensus standards for vulnerability reporting accommodate AI security vulnerabilities. After establishing this process, NIST would have 180 days to submit a report to Congress about the sufficiency of reporting processes. 

“By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure,” Warner said in the press release.

In deploying AI, the Federal Aviation Administration faces unique challenges https://fedscoop.com/in-deploying-ai-the-federal-aviation-administration-faces-unique-challenges/ Tue, 30 Apr 2024 As federal agencies ramp up their AI work, observers say the FAA is taking a “cautious” approach as it wrestles with safety questions.

The Biden administration has made the deployment of artificial intelligence a priority, directing federal agencies to look for ways to integrate the technology into their operations. But the Federal Aviation Administration faces unique challenges with that goal.

Through partners, its own internal research staff, and work with NASA, the country’s aviation safety regulator is looking at a range of AI applications. The FAA has a chief scientific and technical advisor for artificial intelligence — machine learning, who is charged with expanding the agency’s role in understanding how AI might be deployed in aviation contexts. And the agency is working on a plan, along with NASA, for certifying AI technologies for use in the national airspace system.

“We are harnessing predictive analytics, machine learning, and artificial intelligence to develop streams of data,” Polly Trottenberg, the FAA’s acting administrator, said in a note within one of the agency’s recent four-year research plans. “These capabilities allow us to create new tools and techniques and adopt new technologies.”

But hurdles remain for actually deploying AI. While the FAA has implemented risk management standards for the safety of the national airspace, the agency told FedScoop it still needs to “adapt AI risk management methodologies and best practices from the National Institute of Standards and Technology,” along with other institutions. The FAA has released several use cases in its AI inventory, but many of them are still somewhat modest, experts told FedScoop. Other uses are still in the research phase.

There are further constraints, too. While the FAA is investing in research and development related to artificial intelligence, the aviation industry is more broadly facing ongoing safety issues with Boeing aircraft and an overworked population of air traffic controllers. And then there’s the matter of ensuring that flying stays safe, despite excitement about using artificial intelligence.

“It’s still very early days,” noted Anand Rao, a Carnegie Mellon data science and AI professor. “They’re taking a conservative, cautious approach.” 

The FAA declined to make Dr. Trung T. Pham, the agency’s chief AI leader, available for comment, and did not answer FedScoop’s questions about staff within the agency focused specifically on artificial intelligence. The FAA, along with the Department of Transportation, has also declined to provide further detail about a mention of ChatGPT for software coding that agency staff removed from its AI inventory last year. Still, documents about several AI use cases from the agency, along with interviews with experts, provide insight into the FAA’s approach to the technology.

FAA pursues no-frills approach to AI

When asked about the most promising use cases for AI, a spokesperson for the FAA pointed to several, including predictive analytics that could help mitigate safety risks, assistance with decision support, automating certain processes, and improving engagement through virtual assistants. Some of those use cases have already been disclosed in the Department of Transportation’s executive order-required AI inventory while others are discussed in the agency’s four-year research plan. The DOT recently edited its inventory and some of the use cases appear to have been redacted, though the agency did not respond to a request for comment. 

Some of these AI applications are related to the weather, including a convective weather avoidance model meant to analyze how pilots navigate thunderstorms. The agency is also looking at an effort to use AI to support air traffic controllers, per the four-year research plan, as well as using artificial intelligence to address aviation cybersecurity. And the FAA is studying the use of AI and voice recognition technology to improve flight simulations used in pilot training. Still, many of the AI use cases identified by FedScoop are rudimentary or still relatively early in their deployment, while others remain in the research phase. 

Several that are in use are relatively modest — and reflect the agency’s circumspect approach. The FAA’s Office of Safety and Technical Training, which conducts data analysis and investigations, has already deployed a model for use by the runway safety team. The internal tool assists the team with automatically classifying runway incursions as part of their analysis. FedScoop obtained documents describing how this system works — but the technology discussed in those documents, Rao said, represents well-tested algorithms that have been around since the 1990s and early 2000s, not the newer technology used for systems like ChatGPT.

Another is the “regulatory compliance mapping tool,” which is essentially an internal search engine-esque system for regulatory concepts. The tool is built off a database of documents provided by organizations like the FAA, federal agencies, and the International Civil Aviation Organization, a branch of the United Nations that focuses on aviation. The idea for the tool, which leverages natural language processing, is to reduce “research time from days or weeks to hours,” according to a presentation by the Aeronautical Information Standards Branch dated Sept. 20. 

Still, the tool is “essentially just a database,” said Syed A.M. Shihab, an assistant professor of aeronautics and engineering at Kent State University, and not particularly advanced. While around 175 FAA employees can access the tool, the agency told FedScoop, the platform is used fewer than 20 times a week, according to that same presentation. The FAA, which said the “internal FAA tool” is in the “development phase,” appears to have spent more than $1 million with a company called iCatalyst — which did not respond to a request for comment — to build it, according to a federal government contracts database.

“The FAA is continually working to make our processes more efficient. The Regulatory Compliance Mapping Tool (RCMT) is an initiative that can significantly speed up safety research,” the agency said in a statement. In March, the agency said security authorization would kick off later that month and that it had completed a Section 508 self-assessment process. 
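
The FAA has not disclosed the RCMT’s actual architecture. As a minimal sketch of the general technique the presentation describes, natural-language retrieval over a regulatory corpus, a TF-IDF index with cosine similarity behaves like the “search engine-esque” system in question; the documents and query below are invented stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for FAA / ICAO regulatory passages; the real tool's
# corpus, models, and ranking are not public.
documents = [
    "Operators must maintain two-way radio communication with air traffic control.",
    "Runway incursion reporting procedures for towered airports.",
    "Aeronautical information standards for instrument flight procedures.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

query = "how do I report a runway incursion"
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]

# Rank passages by similarity to the query, best match first.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```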

Other systems disclosed in the AI inventory either don’t use the technology yet or haven’t been deployed. These include a tool to help transcribe conversations between pilots, and another, called ROMIO, meant to help pilots understand cloud structures, according to FAA documents.

FAA’s AI work goes beyond disclosed use cases

Other AI work is ongoing, but it’s not clear if or how it’s been deployed. The FAA has worked with researchers at Georgia Tech and the University of Maryland to use AI for measuring collision risk, according to federal contract records. It also appears to have procured the development and implementation of a machine learning model from a company called Deep AI Solutions for its safety information sharing system. 

The FAA’s work with NASA, meanwhile, includes looking at AI for “runway configuration management, digitization of standard operating practices and letters of agreements, and natural language processing,” per a spokesperson. That work also includes NASA’s machine learning airport surface model, which was supposed to help the FAA capture the location of routes, taxiways, and runways using a real-time machine learning system. NASA said this work has helped contribute to a framework it’s working on with the aviation agency.

And at the MIT-based Lincoln Laboratory, which is funded by the Defense Department and the FAA, researchers aren’t focusing on AI for safety-critical applications, according to Tom Reynolds, who leads the lab’s air traffic control systems group. For example, the lab is researching a technology called “the offshore precipitation capability” to assist with weather radar coverage gaps. “Things that are more advisory and not directly in the loop of deciding where individual aircraft fly, but rather helping air traffic controllers with situational awareness and strategic decision making,” Reynolds said. 

Technically, the FAA has been looking at AI for decades — and lots of preliminary work with the technology does seem to be underway. For example, in March, the FAA announced a data challenge meant to help use artificial intelligence to address problems concerning the national airspace, and it’s recently hosted workshops on machine learning, too. Email records show that the FAA is invited to monthly meetings of the Department of Transportation’s AI task force. 

The FAA is working with industry and international counterparts on an AI roadmap, and developing a certification research framework for artificial intelligence applications with NASA. The plan is focused on developing a way to safely certify AI applications for deployment in the national airspace. It’s expected to launch later this year, the space agency said.

Still, most of the AI work at the FAA isn’t for direct use in aviation. That reality reflects the broader challenge of using the technology in a safety critical context. In meetings with industry, the agency’s chief adviser for aircraft computer software has highlighted the challenge of approving AI software, while Pham, the agency’s AI chief, has detailed concerns about traceability, per a blog post on the website of RTCA, a nonprofit aviation modernization group.

Similarly, a roadmap the FAA is working on with other aviation agencies around the world has encountered several challenges, including issues with predictability and explainability, the tracking of datasets that might feed AI models, training humans to work alongside AI, model bias, and safety.

“Because aviation is a safety critical industry and domain, in general, stakeholders involved in this industry are slower to adapt AI models and tools for decision-making and prediction tasks,” said Shihab, the Kent State professor. “It’s all good when the AI model is performing well, but all it takes is one missed prediction or one inaccurate classification, concerning the use cases, to compromise safety of flight operations.”

NIST launches GenAI evaluation program, releases draft publications on AI risks and standards https://fedscoop.com/nist-launches-genai-evaluation-program-releases-draft-ai-publications/ Mon, 29 Apr 2024 The actions were among several announced by the Department of Commerce at the roughly six-month mark after Biden’s executive order on artificial intelligence.

The National Institute of Standards and Technology announced a new program to evaluate generative AI and released several draft documents on the use of the technology Monday, as the government hit a milestone on President Joe Biden’s AI executive order.

The Department of Commerce’s NIST was among multiple agencies on Monday that announced actions they’ve taken to correspond with the October order at its 180-day mark. The actions were largely focused on mitigating the risks of AI, with several aimed specifically at generative AI.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Commerce Secretary Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

Among the four documents released by NIST on Monday was a draft version of a publication aimed at helping identify generative AI risks and strategies for using the technology. That document will serve as a companion to its already-published AI risk management framework, as outlined in the order, and was developed with input from a public working group with more than 2,500 members, according to a release from the agency.

The agency also released a draft of a companion resource to its Secure Software Development Framework that outlines software development practices for generative AI tools and dual-use foundation models. The EO defined dual-use foundation models as those that are “trained on broad data,” are “applicable across a wide range of contexts,” and “exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters,” among other things. 

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology, said in a statement.

NIST also released draft documents on reducing the risks of synthetic content — that is, content created or altered by AI — and a plan for developing global AI standards. All four documents have a comment period that ends June 2, according to the Commerce release.

Notably, the agency also announced its “NIST GenAI” program for evaluating generative AI technologies. According to the release, that will “help inform the work of the U.S. AI Safety Institute at NIST.” Registration for a pilot of those evaluations opens in May.

The program will evaluate generative AI with a series of “challenge problems” that will test the capabilities of the tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,” the release said. “One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”
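
NIST has not published the challenge problems themselves. As a toy sketch of the underlying task, distinguishing human from AI text can be framed as binary classification over text features; the training examples below are placeholders, and a real evaluation would rely on large, carefully sourced corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training pairs (0 = human-written, 1 = AI-generated).
texts = [
    "honestly the flight was delayed again, classic Tuesday",
    "As an AI language model, I can provide a structured summary.",
    "grabbed coffee before the standup, code review after",
    "Certainly! Here are five key considerations to keep in mind.",
]
labels = [0, 1, 0, 1]

# Word/bigram frequencies feeding a linear classifier -- a deliberately simple baseline.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

prob_ai = detector.predict_proba(["Here is a comprehensive overview of the topic."])[0, 1]
print(f"Estimated probability the text is AI-generated: {prob_ai:.2f}")
```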

The release and focus on generative AI comes as other agencies similarly took action Monday on federal use of such tools. The Office of Personnel Management released its guidance for federal workers’ use of generative AI tools and the General Services Administration released a resource guide for federal acquisition of generative AI tools. 

Scientists must be empowered — not replaced — by AI, report to White House argues https://fedscoop.com/pcast-white-house-science-advisors-ai-report-recommendations/ Tue, 23 Apr 2024 The upcoming report from the President’s Council of Advisors on Science and Technology pushes for the “empowerment of human scientists,” responsible AI use and shared resources.

The team of technologists and academics charged with advising President Joe Biden on science and technology is set to deliver a report to the White House next week that emphasizes the critical role that human scientists must play in the development of artificial intelligence tools and systems.

The President’s Council of Advisors on Science and Technology voted unanimously in favor of the report Tuesday following a nearly hourlong public discussion of its contents and recommendations. The delivery of PCAST’s report will fulfill a requirement in Biden’s executive order on AI, which called for an exploration of the technology’s potential role in “research aimed at tackling major societal and global challenges.”

“Empowerment of human scientists” was the first goal presented by PCAST members, with a particular focus on how AI assistants should play a complementary role to human scientists, rather than replacing them altogether. The ability of AI tools to process “huge streams of data” should free up scientists “to focus on high-level directions,” the report argued, with a network of AI assistants deployed to take on “large, interdisciplinary, and/or decentralized projects.”

AI collaborations on basic and applied research should be supported across federal agencies, national laboratories, industry and academia, the report recommends. Laura H. Greene, a Florida State University physics professor and chief scientist at the National High Magnetic Field Laboratory, cited the National Science Foundation’s Materials Innovation Platforms as an example of AI-centered “data-sharing infrastructures” and “community building” that PCAST members envision. 

“We can see future projects that will include collaborators to develop next-generation quantum computing qubits, wholesale modeling, whole Earth foundation models” and an overall “handle on high-quality broad ranges of scientific databases across many disciplines,” Greene said.

The group also recommended that “innovative approaches” be explored on how AI assistance can be integrated into scientific workflows. Funding agencies should keep AI in mind when designing and organizing scientific projects, the report said.

The second set of recommendations from PCAST centered on the responsible and transparent use of AI, with those principles employed in all stages of the scientific research process. Funding agencies “should require responsible AI use plans from researchers that would assess potential AI-related risks,” the report states, matching the principles called out in the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

Eric Horvitz, chief scientific officer at Microsoft, said PCAST’s emphasis on responsible AI use means putting forward “our best efforts to making sure these tools are used in the best ways possible and keeping an eye on possible downsides, whether the models are open source or not open source models. … We’re very optimistic about the wondrous, good things we can expect, but we have to sort of make sure we keep an eye on the rough edges.”

The potential for identifying those “rough edges” rests at least partially in the group’s third recommendation of having shared and open resources. PCAST makes its case in the report for an expansion of existing efforts to “broadly and equitably share basic AI resources.” There should be more secure access granted to federal datasets to aid critical research needs, the report noted, with the requisite protections and guardrails in place.

PCAST members included a specific callout for an expansion of NSF’s National Secure Data Service Demonstration project and the Census Bureau’s Federal Statistical Research Data Centers. The National Artificial Intelligence Research Resource should also be “fully funded,” given its potential as a “stepping-stone for even more ambitious ‘moonshot’ programs,” the report said.

AI-related work from the scientists who make up PCAST won’t stop after the report is edited and posted online next week. Bill Press, a computer science and integrative biology professor at the University of Texas at Austin, said it’s especially important now in this early developmental stage for scientists to test AI systems and learn to use them responsibly. 

“We’re dealing with tools that, at least right now, are ethically neutral,” Press said. “They’re not necessarily biased in the wrong direction. And so you can ask them to check these things. And unlike human people who write code, these tools don’t have pride of ownership. They’re just as happy to try to reveal biases that might have incurred as they are to create them. And that’s where the scientists are going to have to learn to use them properly.”

404 page: the error sites of federal agencies https://fedscoop.com/404-page-the-error-sites-of-federal-agencies/ Tue, 23 Apr 2024 Technology doesn’t always work in expected ways. Some agencies are using a creative touch to soften an error message.

Infusing a hint of humor or a dash of “whimsy” in government websites, including error messages, could humanize a federal agency to visitors. At least that’s how the National Park Service approaches its digital offerings, including its 404 page. 

“Even a utilitarian feature, such as a 404 page, can be fun — and potentially temper any disappointment at having followed a link that is no longer active,” an NPS spokesperson said in an email to FedScoop. “Similar to our voice and tone on other digital platforms, including social media, our main goal is to always communicate important information that helps visitors stay safe and have the best possible experience.”

404 pages are what appear when a server cannot locate a website or resource at a specific URL. Hitting a 404 can happen for a number of reasons: a spelling error in the URL, a page that no longer exists, or a server that moved a page without setting up a redirect. As a result, many different entities with websites, such as state and local governments, have had a stroke of creative genius to make users aware of an issue while also having a bit of fun — which rings true for some federal agencies as well.
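
For the curious, the mechanics are simple: a custom error page is just whatever response body a server sends back along with the HTTP 404 status code. Below is a minimal sketch using Flask, offered as a generic illustration rather than a description of how NPS actually builds its site.

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def home():
    return "Welcome to the park."


@app.errorhandler(404)
def page_not_found(error):
    # Any markup can go here; NPS serves nature images and puns with this same status code.
    return ("<h1>Lost in the woods?</h1>"
            "<p>This trail does not exist. Head back to the homepage.</p>"), 404


if __name__ == "__main__":
    app.run()
```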

While 404 pages could seem like a silly or boring part of the federal government’s use of technology, there has been a significant push in the Biden administration, specifically out of the Office of Management and Budget, to enhance the user experience of federal agencies’ online presence — with a focus on accessibility.

The NPS spokesperson said the agency strives to make its website “as user-friendly as possible” and has “processes in place” to make sure that its links are working.

Currently, the park service’s site has a revolving 404 page that showcases several different nature-themed images, with puns or quotes alongside information on how to get back on the right track for whatever online adventure a visitor seeks. 

NPS said that it doesn’t have any plans to update its error page, “but we’re always working to better understand our users and to improve the user experience of NPS.gov and all our digital products.”

So, until further notice, visitors can still see an artistic rendering of a bear — complete with a relevant pun — if they get a little turned around on NPS’s site.

NPS isn’t alone in walking a line of informing the public about website miscommunications and simultaneously showcasing a bit of humor. The Federal Bureau of Prisons, for one, told FedScoop in an email that it “seeks to optimize the user experience in performance, access and comprehension.”

[Image: FBOP error page message]

“The design of the FBOP’s 404 page was meant to be both functional and informative; by combining imagery with text, we educate the user as to the nature of a 404 error beyond standard system language and provide explanations as to why the error occurred,” Benjamin O’Cone, a spokesperson for FBOP, said in an email to FedScoop. 

Unlike other agencies, the FBOP’s 404 imagery is not totally relevant to the mission of the bureau. Instead, it offers something a bit more meta than the others — referring to the 404 page as a “door that leads to nowhere.”

“While the Federal Bureau of Prisons (FBOP) seeks to ensure a fully responsive and evolving website, we recognize that there may be occasions where search engine indexing is outdated and may contain links to expired pages,” O’Cone said.

Similarly, NASA has a specific area of its 404 page that shares information about its updated, or “improved,” site, with an option to look at a sitemap and submit feedback. “Rockets aren’t the only thing we launch,” the agency muses.


NASA’s site also features an equally creative 404 page, stating that the “cosmic object you were looking for has disappeared beyond the horizon,” against the backdrop of outer space.

Other websites, like the National Institute of Standards and Technology’s site, may not have artistic renderings or out-of-this-world visuals, but NIST instead shares a joke centered around the agency’s area of interest. 

As NIST releases significant frameworks and updated guidance for different areas of federal technology use and deployment, it only makes sense that the agency refers to its error page as a request that isn’t standard.

While this collection of websites represents just a handful that add a creative touch to error messages, many government entities lack the same information and resources that others have. 


For example, see the Department of Energy, which simply states that “the requested page could not be found” and offers no further clue as to what a user could be experiencing.

Cybersecurity executive order requirements are nearly complete, GAO says https://fedscoop.com/cybersecurity-executive-order-requirements-gao-omb-cisa/ Mon, 22 Apr 2024 CISA and OMB have just a handful of outstanding tasks to finish as part of the president’s 2021 order.

Just a half-dozen leadership and oversight requirements from the 2021 executive order on improving the nation’s cybersecurity remain unfinished by the agencies charged with implementing them, according to a new Government Accountability Office report.

Between the Cybersecurity and Infrastructure Security Agency, the National Institute of Standards and Technology and the Office of Management and Budget, 49 of the 55 requirements in President Joe Biden’s order aimed at safeguarding federal IT systems from cyberattacks have been fully completed. Another five have been partially finished and one was deemed to be “not applicable” because of “its timing with respect to other requirements,” per the GAO.

“Completing these requirements would provide the federal government with greater assurance that its systems and data are adequately protected,” the GAO stated.

Under the order’s section on “removing barriers to threat information,” OMB has only partially incorporated a required cost analysis into its annual budget process.

“OMB could not demonstrate that its communications with pertinent federal agencies included a cost analysis for implementation of recommendations made by CISA related to the sharing of cyber threat information,” the GAO said. “Documenting the results of communications between federal agencies and OMB would increase the likelihood that agency budgets are sufficient to implement these recommendations.”

OMB also was unable to demonstrate to GAO that it had “worked with agencies to ensure they had adequate resources to implement” approaches for the deployment of endpoint detection and response, an initiative to proactively detect cyber incidents within federal infrastructure. 

“An OMB staff member stated that, due to the large number of and decentralized nature of the conversations involved, it would not have been feasible for OMB to document the results of all EDR-related communications with agencies,” the GAO said.

OMB still has work to do on logging as well. The agency shared guidance with other agencies on how best to improve log retention, log management practices and logging capabilities but did not demonstrate to the GAO that agencies had proper resources for implementation. 

CISA, meanwhile, has fallen a bit short on identifying and making available to agencies a list of “critical software” in use or in the acquisition process. OMB and NIST fully completed that requirement, but a CISA official told the GAO that the agency “was concerned about how agencies and private industry would interpret the list and planned to review existing criteria needed to validate categories of software.” A new version of the category list and a companion document with clearer explanations is forthcoming, the official added. 

CISA also has some work to do concerning the Cyber Safety Review Board. The multi-agency board, made up of representatives from the public and private sectors, has felt the heat from members of Congress and industry leaders over what they say is a lack of authority and independence. According to the GAO, CISA hasn’t fully taken steps to implement recommendations on how to improve the board’s operations. 

“CISA officials stated that it has made progress in implementing the board’s recommendations and is planning further steps to improve the board’s operational policies and procedures,” the GAO wrote. “However, CISA has not provided evidence that it is implementing these recommendations. Without CISA’s implementation of the board’s recommendations, the board may be at risk of not effectively conducting its future incident reviews.”

Federal agencies have, however, checked off the vast majority of boxes in the EO’s list. “For example, they have developed procedures for improving the sharing of cyber threat information, guidance on security measures for critical software, and a playbook for conducting incident response,” the GAO wrote. Additionally, the Office of the National Cyber Director, “in its role as overall coordinator of the order, collaborated with agencies regarding specific implementations and tracked implementation of the order.”

The GAO issued two recommendations to the Department of Homeland Security, CISA’s parent agency, and three to OMB on full implementation of the EO’s requirements. OMB did not respond with comments, while DHS agreed with GAO recommendations on defining critical software and improving the Cyber Safety Review Board’s operations.

Commerce adds five members to AI Safety Institute leadership https://fedscoop.com/commerce-adds-to-ai-safety-institute-leadership/ Wed, 17 Apr 2024 The new AI Safety Institute executive leadership team members include researchers and current administration officials.

The Department of Commerce has added five people to the AI Safety Institute’s leadership team, including current administration officials, a former OpenAI manager, and academics from Stanford and the University of Southern California.

In a statement announcing the hires Tuesday, Commerce Secretary Gina Raimondo called the new leaders “the best in their fields.” They join the institute’s director, Elizabeth Kelly, and chief technology officer, Elham Tabassi, who were named in February. The new leaders are:

  • Paul Christiano, founder of the nonprofit Alignment Research Center, who formerly ran OpenAI’s language model alignment team, will be head of AI safety;
  • Mara Quintero Campbell, who was most recently the deputy chief operating officer of Commerce’s Economic Development Administration, will be the acting chief operating officer and chief of staff;
  • Adam Russell, director of the AI division of USC’s Information Sciences Institute, will be chief vision officer;
  • Rob Reich, a professor of political science and associate director of the Institute for Human-Centered AI at Stanford, will be a senior advisor; and
  • Mark Latonero, who was most recently deputy director of the National AI Initiative Office in the White House Office of Science and Technology Policy, will be head of international engagement.

The AI Safety Institute, which is housed in the National Institute of Standards and Technology, is tasked with advancing the safety of the technology through research, evaluation and the development of guidelines for those assessments. That work includes actions the executive order from President Joe Biden outlined for NIST, such as developing guidance, red-teaming and watermarking synthetic content.

In February, the AI Safety Institute launched a consortium, which will contribute to the agency’s work carrying out the executive order actions. That consortium is made up of more than 200 stakeholders, including academic institutions, unions, nonprofits, and other organizations. Earlier this month, the department also announced a partnership with the U.K. to have the two countries’ AI safety bodies work together.

“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy,” said Laurie Locascio, NIST’s director and undersecretary of commerce for standards and technology. “They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”
