Department of Commerce (DOC) Archives | FedScoop
https://fedscoop.com/tag/department-of-commerce-doc/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

Economic Development Administration on ‘brink of collapse’ amid telework dispute, union tells leadership
https://fedscoop.com/eda-on-brink-of-collapse-amid-telework-dispute-union-tells-leadership/
Thu, 30 May 2024

Union members called on EDA leaders to engage in meetings and to come to an agreement on return-to-office policies in a letter obtained by FedScoop.

A letter penned by union officers at the Commerce Department’s Economic Development Administration describes unrest over the agency’s telework policy and tensions in communications with leadership.

The correspondence from the officers of American Federation of Government Employees Local 3810 to Assistant Secretary Alejandra Y. Castillo, which was obtained by FedScoop, alleges that agency leaders haven’t taken union input seriously. It seeks several actions to remedy the situation, including a memorandum of understanding on the agency’s “return-to-work” policy and mediation services to “reestablish a healthy relationship.”

If those goals aren’t met in a “reasonable timeframe,” the union “will not hesitate to file an unfair labor charge against EDA with the National Labor Relations Board,” citing support from AFGE and the AFL-CIO, with which it’s affiliated.

“We have continuously expressed that the Agency’s ‘return-to-office’ policy will result in increased turnover and seriously jeopardize the Agency’s ability to function, yet conversations on how to best address our concerns have completely stalled,” according to the letter dated May 28. 

The union further stated the agency is losing workers to other organizations that provide telework and cited a recent member survey that it said paints “a picture of an agency on the brink of collapse.” That survey, the letter said, found more than half of respondents were applying for positions outside the agency and nearly two-thirds would accept a job outside EDA.

“I don’t think this is a problem that’s specific to EDA. I think this is a governmentwide problem,” Ryan Zamarripa, the vice president of Local 3810, said of agency telework policies in an interview with FedScoop. “I think that we’re going to see a pretty decreased ability in the executive branch to carry out its duties if we continue to go against the grain on what we know is an effective way to work.”

Zamarripa, who noted he was speaking in his capacity as an officer for the local and not for the agency, said the union has received a response from Castillo that the letter was received and she plans to respond in full.

In a statement emailed to FedScoop addressing the letter, Castillo and Ben Page, EDA’s chief operating officer and deputy assistant secretary for economic development, said leadership “has strived to engage and maintain a healthy working relationship with the union and the employees it represents.”

“Throughout this period of exponential programmatic growth, EDA’s leadership team, our union, and our stakeholders have engaged in frequent, real-time dialogue about where we are, and where we are headed, including in the thoughtful planning for our required increased office attendance,” Castillo and Page wrote. “These conversations have happened both privately, as well as through frequent leadership team calls and open town halls.”

As with the private sector, pandemic-era telework policies in federal agencies and plans to bring workers back into physical offices have been the subject of occasionally contentious debate. The Biden administration has recently expanded overseas telework efforts, and officials have noted the benefits of preserving at least some remote work options in hearings on Capitol Hill.

Those policies have come under fire from congressional Republicans who have questioned agency oversight of remote workers and their levels of productivity. But there have been some bipartisan efforts on telework policy, including bills aimed at establishing transparency and management practices for remote positions and promoting better data collection to provide insights about telework performance. 

The EDA focuses on supporting economic development in regions across the country by providing funding and resources to communities. According to Zamarripa, the union represents roughly 70 EDA workers in roles throughout the United States. Over the past few years, those workers have rolled out billions of dollars in federal initiatives such as the CARES Act, the American Rescue Plan Act, and the CHIPS and Science Act.

During the height of the pandemic, the EDA, like other agencies, was fully remote, Zamarripa said. Currently, the agency requires workers to come in two days per pay period, which is roughly once a week for most people, he said. While workers have been told there will be an increase in required in-office presence, Zamarripa said “we haven’t really received clear guidance on what the future holds.”

The union’s letter said that the EDA “is rapidly approaching a staffing level inconsistent with its current workload” and alleged mismanagement of funds and retention issues were the cause.

“This is not only due to the gross mismanagement of federal monies at the Agency leadership level that resulted in mass layoffs but also due to the Agency leadership’s inability to retain staff,” the union officers said.

Zamarripa said the layoffs mentioned in the letter were announced in September. Citing funding issues, the agency gave some workers three-month notices, told others they would be getting notices in the future, or informed workers that their contract wouldn’t be extended, he said.

Castillo and Page pushed back on the accusation of mismanagement in their statement, contending that “EDA appropriately managed the resources that were provided, leveraging term employees to address an unprecedented surge in work without leaving an unsustainable fiscal burden.” 

In addition to the memorandum of understanding and the mediation services, the union also requested that Castillo attend Labor Management Council meetings until the “quality of dialogue” is to the satisfaction of the union officers. It cited a March executive order from the Biden administration that, among other things, directed agencies to “allow employees and their union representatives to have pre-decisional involvement in workplace matters, including … discussions with management for the development of joint solutions to workplace challenges.” 

“We just want a reset,” Zamarripa said, adding it isn’t clear the information they’re relaying in meetings is getting to the assistant secretary. He said the union wants Castillo to know “what’s actually happening in these meetings and how the labor side of these conversations is perceiving them.”

New TMF investments boost agency projects in generative AI, digital service delivery, accessibility
https://fedscoop.com/new-tmf-investments-boost-agency-projects-in-generative-ai-digital-service-delivery-accessibility/
Thu, 16 May 2024

Nearly $50 million in targeted investments awarded to the Departments of State, Education and Commerce.

The latest targeted investments from the Technology Modernization Fund support agency efforts to leverage generative artificial intelligence, improve security and enhance digital services, according to a Thursday announcement from the General Services Administration.

TMF investments to the Departments of Education, Commerce and State total just under $50 million. 

The State Department received two investments: $18.2 million to increase diplomacy through generative AI and $13.1 million to transition its identity and access management systems to a zero-trust architecture model.

The AI investment is intended to “empower its widely dispersed team members to work more efficiently and improve access to enhanced information resources,” including diplomatic cables, media summaries and reports. On the zero-trust investment, State said it is planning to expedite the creation of a comprehensive, consolidated identity trust system and to centralize workflows for the onboarding and offboarding process.

Clare Martorana, the federal CIO and TMF board chair, said in a statement that she’s “thrilled to see our catalytic funding stream powering the use of AI and improving security at the State Department.” 

State recently announced a chatbot for internal use and revised its public AI use case inventory to remove nine items from the agency website. Additionally, the agency has started to encourage its workforce to use generative AI tools like ChatGPT.

The Department of Education, meanwhile, is using a $5.9 million allocation to assist the Federal Student Aid office on a new StudentAid.gov feature called “My Activity” to centralize documents and data to track activities and status updates. The FSA is anticipating “a reduction in wait times and the need for customer care inquiries,” per the GSA release. 

Education also recently announced an RFI for cloud computing capabilities for the FSA office, a follow-on contract for its Next Generation Cloud. 

Finally, the Department of Commerce’s National Oceanic and Atmospheric Administration will put its $12 million TMF investment toward modernizing weather.gov through a redesign to “enhance information accessibility” and “establish a sustainable, mobile-first infrastructure.” NOAA also reported plans to integrate translation capabilities to benefit underserved communities.

The release noted that NOAA’s associated application programming interface “faces challenges, causing disruptions in accessing dependable weather information for the American public.”
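
The API in question is the public api.weather.gov service that backs weather.gov. As a rough sketch of how that service is typically consumed today, the snippet below performs the documented two-step lookup (resolve a point, then fetch its forecast); the specific field names are assumptions based on current responses and are not part of the TMF announcement.

```python
# Minimal sketch of pulling a forecast from NOAA's public api.weather.gov
# service. The points -> forecast two-step follows the service's documented
# pattern; field names are assumptions and may change.
import requests

HEADERS = {"User-Agent": "weather-demo (example@example.com)"}  # NWS asks callers to identify themselves

def get_forecast(lat: float, lon: float) -> list[dict]:
    # Step 1: resolve the point to its forecast grid endpoint.
    point = requests.get(
        f"https://api.weather.gov/points/{lat},{lon}", headers=HEADERS, timeout=10
    )
    point.raise_for_status()
    forecast_url = point.json()["properties"]["forecast"]

    # Step 2: fetch the forecast periods for that grid.
    forecast = requests.get(forecast_url, headers=HEADERS, timeout=10)
    forecast.raise_for_status()
    return forecast.json()["properties"]["periods"]

if __name__ == "__main__":
    for period in get_forecast(38.8894, -77.0352)[:3]:  # near Washington, D.C.
        print(period["name"], "-", period["shortForecast"])
```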

Martorana said she was “equally excited about the TMF’s two other critical investments — with students getting more modern access to manage their education journeys and the public gaining access to life-saving weather information in an accessible manner for all.”

These investments come after a second appropriations package to fund the government for fiscal year 2024 threatened to claw back $100 million from the TMF. Both the GSA and the Office of Management and Budget have faced challenges in convincing lawmakers to meet funding levels proposed by the Biden administration.

Martorana recently called on Congress to fund the TMF, pointing to the funding vehicle as a way to improve service delivery for the public across the government.

NIST issues final guidance update for protecting sensitive information
https://fedscoop.com/nist-issues-final-update-protecting-sensitive-information/
Tue, 14 May 2024

The publications are aimed at providing clearer and unambiguous guidance to private-sector partners, according to the agency.

Final versions of two publications that the National Institute of Standards and Technology issued Tuesday are aimed at helping contractors and other organizations protect and secure controlled unclassified information they handle.

The guidance comes after the agency solicited feedback on drafts of the documents last year, and clarifies previous NIST guidance that included language inconsistent with the agency’s source catalog of security and privacy controls. In a Tuesday release, NIST said that wording potentially created “ambiguity” and “uncertainty.”

“For the sake of our private sector customers, we want our guidance to be clear, unambiguous and tightly coupled with the catalog of controls and assessment procedures used by federal agencies,” Ron Ross, an author of the publications, said in the release. “This update is a significant step toward that goal.”

The two publications are Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations (Special Publication 800-171r3) and Assessing Security Requirements for Controlled Unclassified Information (SP 800-171Ar3). The latter is a companion publication to help people assess the requirements outlined in the former and includes updated assessment procedures and new examples of how to conduct those assessments, according to the release.

Controlled unclassified information, which includes things like intellectual property and employee health information, can be enticing for bad actors. “Systems that process, store and transmit CUI often support government programs involving critical assets, such as weapons systems and communications systems, which are potential targets for adversaries,” according to the release. 

In the release of the draft versions last year, Ross noted CUI had recently “been a target of state-level espionage.”

The updates take into account commenters’ interest in machine-readable formats of the guidance, like JSON and Excel, to make them easier to use and reference, according to the release.

“Providing the guidance in these additional formats will allow them to do that. It will help a wider group of users to understand the requirements and implement them more quickly and efficiently,” Ross said.
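
The release doesn't specify a schema for those machine-readable formats, but a small hypothetical example shows why a JSON rendering is easier to work with than prose: requirements can be filtered and cross-referenced programmatically. The file name and fields below are illustrative assumptions, not NIST's actual structure.

```python
# Hypothetical sketch: filtering a machine-readable (JSON) export of security
# requirements. The file name and fields ("id", "family", "text") are
# illustrative assumptions, not NIST's published schema.
import json

def requirements_by_family(path: str, family: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        catalog = json.load(f)
    return [req for req in catalog["requirements"] if req["family"] == family]

if __name__ == "__main__":
    for req in requirements_by_family("sp800-171r3-example.json", "Access Control"):
        print(req["id"], req["text"][:80])
```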

In addition to issuing the new publications, NIST said it plans to revise other publications related to CUI in “coming months.” Those updates will include publications on enhanced security requirements (SP 800-172) and assessments (SP 800-172A).

NIST launches GenAI evaluation program, releases draft publications on AI risks and standards
https://fedscoop.com/nist-launches-genai-evaluation-program-releases-draft-ai-publications/
Mon, 29 Apr 2024

The actions were among several announced by the Department of Commerce at the roughly six-month mark after Biden’s executive order on artificial intelligence.

The National Institute of Standards and Technology announced a new program to evaluate generative AI and released several draft documents on the use of the technology Monday, as the government hit a milestone on President Joe Biden’s AI executive order.

The Department of Commerce’s NIST was among multiple agencies on Monday that announced actions they’ve taken that correspond with the October order at the 180-day mark since its issuance. The announcements were largely focused on mitigating the risks of AI and included several measures aimed specifically at generative AI.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Commerce Secretary Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

Among the four documents released by NIST on Monday was a draft version of a publication aimed at helping identify generative AI risks and strategies for using the technology. That document will serve as a companion to its already-published AI risk management framework, as outlined in the order, and was developed with input from a public working group with more than 2,500 members, according to a release from the agency.

The agency also released a draft of a companion resource to its Secure Software Development Framework that outlines software development practices for generative AI tools and dual-use foundation models. The EO defined dual-use foundation models as those that are “trained on broad data,” are “applicable across a wide range of contexts,” and “exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters,” among other things. 

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology, said in a statement.

NIST also released draft documents on reducing risks of synthetic content — that which was AI-created or altered — and a plan for developing global AI standards. All four documents have a comment period that ends June 2, according to the Commerce release.

Notably, the agency also announced its “NIST GenAI” program for evaluating generative AI technologies. According to the release, that will “help inform the work of the U.S. AI Safety Institute at NIST.” Registration for a pilot of those evaluations opens in May.

The program will evaluate generative AI with a series of “challenge problems” that will test the capabilities of the tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,” the release said. “One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”
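
NIST hasn't detailed the scoring here, but the human-versus-AI framing boils down to a binary-classification evaluation. The toy harness below, with made-up samples and a deliberately naive detector, is only meant to illustrate what such a challenge problem measures; it does not reflect NIST GenAI's actual protocol.

```python
# Illustrative only: a toy harness for a human-vs-AI detection challenge.
# The detector, samples, and accuracy metric are placeholders.
from typing import Callable

def evaluate_detector(
    detector: Callable[[str], str], samples: list[tuple[str, str]]
) -> float:
    """Return accuracy of a detector that labels text 'human' or 'ai'."""
    correct = sum(1 for text, label in samples if detector(text) == label)
    return correct / len(samples)

if __name__ == "__main__":
    toy_samples = [
        ("The committee will reconvene after lunch.", "human"),
        ("As an AI language model, I can summarize that for you.", "ai"),
    ]
    naive = lambda text: "ai" if "language model" in text.lower() else "human"
    print(f"accuracy: {evaluate_detector(naive, toy_samples):.2f}")
```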

The release and focus on generative AI comes as other agencies similarly took action Monday on federal use of such tools. The Office of Personnel Management released its guidance for federal workers’ use of generative AI tools and the General Services Administration released a resource guide for federal acquisition of generative AI tools. 

Commerce requests information about AI, open data assets, data dissemination
https://fedscoop.com/commerce-rfi-ai-data-dissemination/
Wed, 17 Apr 2024

The agency said in a Federal Register posting that it seeks information from the public regarding how to enhance capabilities through AI while ensuring data quality.

The Department of Commerce is requesting information concerning AI-ready open data assets, alongside the development of data dissemination standards. 

In a Federal Register posting Wednesday, Commerce calls on industry experts, civil society organizations, researchers and members of the public to share information about the challenges that data providers and users face in light of the emergence of generative AI and general AI technologies. 

In describing itself as “an authoritative provider of data,” the agency said it is looking to ensure the accuracy and integrity of its data as AI intermediaries access and consume it.

The agency, according to the notice, is looking specifically to explore how it can make its own data assets “AI-ready” by improving guidance and metadata concerning data usage, as well as licensing for purposes like text-and-data mining, AI system ingestion and research analytics. The notice also includes callouts on allowing systems to link human terms to data variables through knowledge graphs for variable-level metadata, and on using open standards for APIs capable of linking to those knowledge graphs.

“The challenge for Commerce, as an authoritative provider of data, is to ensure that these new AI intermediaries can appropriately access its data without losing the integrity, including quality, of said data,” the notice states. “Commerce hopes to ensure that the data these tools consume is easily accessible and ‘machine understandable,’ versus just ‘machine readable.’”
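
The distinction between “machine readable” and “machine understandable” is easiest to see with metadata attached at the variable level. The sketch below is a hypothetical illustration using schema.org-style terms in a JSON-LD-like structure; the vocabulary and fields are assumptions, not anything Commerce has specified in the notice.

```python
# Hypothetical illustration of variable-level metadata that links a human
# term to a data variable, in the spirit of the knowledge-graph callout in
# Commerce's notice. Vocabulary and structure are assumptions for this sketch.
import json

variable_metadata = {
    "@context": {"schema": "https://schema.org/"},
    "@type": "schema:PropertyValue",
    "schema:name": "median_household_income",
    "schema:description": "Median household income in the past 12 months, "
                          "inflation-adjusted dollars",
    "schema:unitText": "USD",
    "schema:measurementTechnique": "Survey estimate",
}

# A "machine understandable" dataset ships this alongside the raw column so an
# AI intermediary can resolve what the variable means, not just parse its type.
print(json.dumps(variable_metadata, indent=2))
```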

Within the notice, the agency states that it seeks to “adhere to its strategic mission,” which involves expanding opportunities and discoveries through data by disseminating already public data in AI-ready formats, “while ensuring no semantic meaning is lost.”

Commerce requested information on topics including data dissemination standards, data accessibility and retrieval, partnership engagement, data integrity and quality, data ethics and more. 


The agency on Tuesday announced an expansion to its AI Safety Institute leadership team, with five new members spanning knowledge in the federal government, academia and industry. The safety institute is housed within Commerce’s National Institute of Standards and Technology.

Department of Commerce announces US, UK AI safety partnership
https://fedscoop.com/us-uk-announce-ai-safety-partnership/
Tue, 02 Apr 2024

AI safety bodies in the U.S. and the U.K. will work together on AI safety research, evaluations and guidance under the partnership.

The U.S. and U.K. on Monday signed an agreement to have their AI safety institutes work together on research, evaluations and guidance, furthering the Biden administration’s commitment to work with other countries on regulating the technology.

Under a memorandum of understanding signed by Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, both countries will work “to align their scientific approaches” and “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents,” according to a release from the Department of Commerce. The agreement is effective immediately.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them,” Raimondo said in a statement. “Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

The announcement comes as the Biden administration has emphasized its desire to work with other countries on AI. The administration’s October executive order on the technology, for example, directed the Department of Commerce to establish international AI frameworks.  

AI safety institutes from both countries have plans to create “a common approach to AI safety testing.” They also plan to conduct “at least one joint testing exercise on a publicly accessible model” and “tap into a collective pool of expertise by exploring personnel exchanges between the Institutes,” according to the release. 

The Department of Commerce’s National Institute of Standards and Technology houses the AI Safety Institute in the U.S. That body got its leadership and launched a consortium with participation from over 200 stakeholders in February. 

Partnering with the U.K. likely isn’t the end of the collaboration. According to Commerce’s announcement, the two countries “have also committed to develop similar partnerships with other countries to promote AI safety across the globe.”

“We have always been clear that ensuring the safe development of AI is a shared global issue,” the U.K.’s Donelan said. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

NTIA calls for independent audits of AI systems in new accountability report
https://fedscoop.com/ntia-calls-for-independent-audits-of-ai-systems-in-new-accountability-report/
Wed, 27 Mar 2024

The Department of Commerce bureau also pushed for federal guardrails for AI systems, including additional disclosures, in the report.

The National Telecommunications and Information Administration on Wednesday called for independent audits of high-risk artificial intelligence systems, part of a new report from the Commerce Department bureau that also included eight recommendations for federal agency use of AI. 

The NTIA’s AI Accountability Policy Report recommends that the federal government take action to establish guidance, support and regulations for AI systems. Within those three categories, NTIA calls for agencies to increase transparency through disclosures, such as AI nutrition labels, encourage research and evaluations on AI tools, require contractors and suppliers to “adopt sound AI governance and assurance practices” and more. 

In addition to its focus on federal involvement in guidelines for AI audits and auditors, NTIA recommends that the government strengthen its capacity to “address risks and practices related to AI across sectors of the economy,” which includes maintaining a registry of “high-risk AI deployments, AI adverse incidents and AI system audits.”

“NTIA’s AI Accountability Policy recommendations will empower businesses, regulators and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer,” NTIA Administrator Alan Davidson said in a statement.

Significantly, the NTIA called for the creation of AI disclosure cards that mimic “nutrition labels” detailing a product’s name, whether or not there is a human in the loop, the model type, the data retention frequency, base model and more. NTIA stressed in the report that the standardization of accessible and plain language labeling could “enhance the comparability and legibility of disclosures.”
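
The report doesn't prescribe a schema for these labels, but the fields the article lists map naturally onto a small structured record. The sketch below is a hypothetical rendering of such a disclosure card, not a format NTIA has defined.

```python
# Hypothetical sketch of an AI "nutrition label" built from the fields the
# NTIA report calls out; the dataclass and field names are illustrative,
# not a format NTIA has published.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIDisclosureCard:
    product_name: str
    human_in_the_loop: bool
    model_type: str
    base_model: str
    data_retention: str  # e.g., how long user inputs are kept

    def to_label(self) -> str:
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    card = AIDisclosureCard(
        product_name="Example Benefits Chatbot",
        human_in_the_loop=True,
        model_type="Retrieval-augmented LLM",
        base_model="(undisclosed foundation model)",
        data_retention="30 days",
    )
    print(card.to_label())
```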

The agency noted that the report is just “one element” of its work to meet the Biden administration’s commitment to establishing guardrails and promoting innovation regarding AI. The report follows a request for comment submitted by the agency last year. 

The request sought feedback about policy development for AI mechanisms (such as audits and assessments) meant to encourage trustworthiness. In particular, the NTIA inquired about what data would be necessary to conduct audits and what approaches might be needed in various industry environments. 

Hodan Omaar, senior policy analyst at the Center for Data Innovation, said in a statement that the focus on regulatory frameworks throughout the report “will not help the United States become a leading global adopter of AI.”

“The United States should pursue policies that encourage U.S. businesses to hire more AI developers, integrators and engineers, not divert those resources to hiring more auditors and lawyers,” Omaar added. “Policymakers should instead rely on voluntary frameworks because they are more adaptable, dynamic, and effective at addressing risks in a rapidly evolving AI landscape.”

When asked for comment in response to Omaar’s statement, NTIA directed FedScoop to its press release and fact sheet.

From research to talent: Five AI takeaways from Biden’s budget
https://fedscoop.com/five-ai-takeaways-bidens-budget/
Tue, 12 Mar 2024

The National Science Foundation, Department of Energy and Department of Commerce would get some of the highest investments for artificial intelligence-related work under the latest budget released by the White House.

President Joe Biden’s fiscal year 2025 budget announced Monday seeks billions in funding to support the administration’s artificial intelligence work, putting premiums on research, talent acquisition, and ensuring safety of the technology.

The roughly $3 billion requested for AI investments largely reflects the priorities in Biden’s October executive order on the budding technology, which outlined a path forward to harness AI’s power while also creating standards for responsible use. The request would direct some of the biggest sums to agencies like the National Science Foundation, the Department of Energy and the Department of Commerce.

In total, the Biden administration requested $75.1 billion for IT spending across civilian agencies in fiscal 2025, a small uptick from the $74.4 billion it asked for in 2024.

The president’s budget comes a week after Congress avoided a shutdown by passing a package of six appropriations bills for the current fiscal year. Notably, those bills included cuts for agencies like NSF and Commerce’s National Institute of Standards and Technology, which were both given key tasks under Biden’s AI order.

Here are five AI-related takeaways from the request:

1: Research at NSF

The budget includes more than $2 billion in funding for NSF’s research and development in AI and other emerging technology areas, including “advanced manufacturing, advanced wireless, biotechnologies, microelectronics and semiconductors, and quantum information science.” It also includes $30 million to fund a second year of the pilot for the National AI Research Resource, which is designed to improve access to resources needed to conduct AI research. The pilot, which began in January, was required under Biden’s order, and bipartisan, bicameral legislation pending in Congress seeks to authorize the full-scale NAIRR.

2: AI cybersecurity at DOE

The budget also includes $455 million “to extend the frontiers of AI for science and technology and to increase AI’s safety, security, and resilience” at DOE. The funding would support efforts “to build foundation models for energy security, national security, and climate resilience as well as tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical-infrastructure, and energy security threats or hazards,” according to the document. It would also support the training of researchers.

3: AI guardrails at Commerce

The budget seeks $65 million for Commerce “to safeguard, regulate, and promote AI, including protecting the American public against its societal risks.” Specifically, that funding would support the agency’s work under the AI executive order, such as NIST’s efforts to establish an AI Safety Institute. The recently passed fiscal year 2024 appropriations from Congress included up to $10 million to establish that institute.

4: AI talent surge

The request also seeks funding for the U.S. Digital Service, General Services Administration and Office of Personnel Management “to support the National AI Talent Surge across the Federal Government.” The budget estimated that funding to be $32 million, while the analytical perspectives released by the White House put it at $40 million. Those talent surge efforts were outlined in Biden’s executive order and have so far included establishing a task force to accelerate AI hiring, authorizing direct-hire authority for AI positions, and outlining incentives to maintain and attract AI talent in the federal government. 

5: Supporting chief AI officers

Finally, Biden’s request also provides funding for agencies to establish chief AI officers (CAIOs). According to an analytical perspectives document released by the White House, those investments would total $70 million. Agencies are required to designate a CAIO to promote the use of AI and manage its risks under Biden’s executive order. So far, many of those designees have been agency chief data, technology or information officials. Specifically, the budget mentioned support for CAIOs at the Departments of Treasury and Agriculture, in addition to funding a new AI policy office at the Department of Labor that would be led by its CAIO.

Updated NIST cybersecurity framework adds core function, focuses on supply chain risk management
https://fedscoop.com/updated-nist-cybersecurity-framework-adds-core-function-focuses-on-supply-chain-risk-management/
Tue, 27 Feb 2024

Ten years after the agency’s first cybersecurity framework, version 2.0 includes “govern” as a core function to set the tone for implementation and oversight of cyber strategies.

A decade after releasing its landmark national cybersecurity framework, the National Institute of Standards and Technology on Monday released version 2.0, an updated document that emphasizes governance and supply chain issues for both public and private sector entities. 

The new guidance, which outlines “high-level cybersecurity outcomes that can be used by any organization … to better understand, assess, prioritize and communicate its cybersecurity efforts,” adds a sixth core function — “govern” — to the previously stated pillars: “identify,” “protect,” “detect,” “respond,” and “recover.” 

“Govern” focuses on how an organization’s “cybersecurity risk management strategy, expectations and policy are established, communicated and monitored,” the framework stated, and is intended to address the implementation and oversight of a cybersecurity strategy. 
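
For orientation, the six core functions and where the supply chain emphasis now sits can be captured in a simple mapping. The sketch below sticks to what the article states (six functions, with supply chain risk management under “govern”); the one-line descriptions are paraphrases, not CSF 2.0 text.

```python
# Orientation sketch: the six CSF 2.0 core functions mentioned in the article,
# with supply chain risk management placed under "govern" as the update does.
# Descriptions are paraphrased, not verbatim CSF 2.0 language.
CSF_2_0_FUNCTIONS = {
    "govern": "Establish, communicate and monitor the cybersecurity risk "
              "management strategy, expectations and policy (includes "
              "cybersecurity supply chain risk management).",
    "identify": "Understand assets and risks in order to prioritize efforts.",
    "protect": "Apply safeguards to manage cybersecurity risk.",
    "detect": "Find and analyze possible attacks and compromises.",
    "respond": "Take action on a detected cybersecurity incident.",
    "recover": "Restore assets and operations affected by an incident.",
}

for function, description in CSF_2_0_FUNCTIONS.items():
    print(f"{function.upper():8s} {description}")
```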

“‘Govern’ really represents the fact that we have to bring this into the boardroom for discussion,” Laurie Locascio, director of NIST and under secretary of Commerce for Standards and Technology, said during an Aspen Digital event Monday. “That took a lot of discussion really across all the stakeholders, because it is a big change” going from five core functions to six in the framework. 

Locascio noted that 10 years ago, before NIST’s initial CSF was launched, there was discussion about the elements of “govern,” but agency leaders “really weren’t ready yet to incorporate it.” But it was a priority for the latest iteration of the framework, especially the focus on the supply chain, which is listed underneath the “govern” pillar.

The document’s spotlight on supply chain risks covers how various technologies rely on a complex, geographically dispersed outsourcing ecosystem in which both private and public sector organizations offer a variety of services. In the updated CSF, NIST points to Cybersecurity Supply Chain Risk Management (C-SCRM) as a systematic process for managing exposure to cybersecurity risks by developing appropriate “strategies, policies, processes and procedures.”

Along with the overall framework, NIST released the CSF’s Quick Start Guides (QSG) with implementation examples that allow entities to “view and download notional examples of concise, action-oriented steps to help achieve the outcomes of the CSF 2.0 subcategories in addition to the guidance provided in the informative references.”

In creating the new framework, Locascio said NIST fielded comments from stakeholders regarding the draft CSF document, but was not able to accept every single comment. 

“You come to a consensus, you have a larger discussion, but every single conversation, I think, led to a better place,” Locascio said. “When we didn’t accept something verbatim … there was a reason and we talked through it together. I think that also engenders trust because we were very transparent about the process, very openly engaged and really valued your feedback.”

Commerce launches AI safety consortium with more than 200 stakeholders
https://fedscoop.com/commerce-launches-ai-safety-consortium/
Thu, 08 Feb 2024

The consortium operates under NIST’s AI Safety Institute and will contribute to actions in Biden’s AI executive order, the agency said.

The Department of Commerce announced a new consortium for AI safety that has participation from more than 200 companies and organizations, as the Biden administration continues its push to develop guardrails for the technology.

The consortium, which was launched Thursday, is part of the National Institute of Standards and Technology’s AI Safety Institute and will contribute to actions outlined in President Joe Biden’s October AI executive order, the department said in an announcement. That will include the creation of “guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” the agency said.

“The job of the consortia is to ensure that the AI Safety Institute’s research and testing is fully integrated with the broad community,” Secretary of Commerce Gina Raimondo said at a press conference announcing the consortium. The work that the safety institute is doing can’t be “done in a bubble separate from industry and what’s happening in the real world,” she added. 

Raimondo also highlighted the range of participants in the consortium, calling it “the largest collection of frontline AI developers, users, researchers, and interested groups in the world.”

The consortium’s participants are companies, academic institutions, unions, nonprofits, and other organizations. They include entities such as Amazon, IBM, Apple, OpenAI, Anthropic, Massachusetts Institute of Technology, and AFL-CIO Technology Institute (which is listed as a provisional member).

The announcement comes after the safety institute officially got its first leaders. On Wednesday, the Department of Commerce announced Elizabeth Kelly would lead the institute as its director and named Elham Tabassi to serve as chief technology officer. The institute was established last year at the direction of the administration.

After the Thursday press conference, Tabassi told reporters that as the department makes progress on the actions outlined in Biden’s AI order, they are looking to the consortium and institute to “continue to give a long-lasting approach” to those actions.

Participants applauded the announcement, lauding it as a positive step toward responsible AI.

“The new AI Safety Institute will play a critical role in ensuring that artificial intelligence made in the United States will be used responsibly and in ways people can trust,” Arvind Krishna, IBM’s chairman and chief executive officer, said in a statement. 

John Brennan, Scale AI’s public sector general manager, said in a statement that the company “applauds the Administration and its Executive Order on AI for recognizing that test & evaluation and red teaming are the best ways to ensure that AI is safe, secure, and trustworthy.” 

Meanwhile, David Zapolsky, Amazon’s senior vice president of global public policy and general counsel, said in a blog that the company is working with NIST in the consortium “to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote development of trustworthy AI and its responsible use.”
