Department of Energy (DOE) Archives | FedScoop
https://fedscoop.com/tag/department-of-energy-doe/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

National lab official highlights role of government datasets in AI work
https://fedscoop.com/national-lab-official-highlights-role-of-government-datasets-in-ai-work/
Wed, 05 Jun 2024 17:53:45 +0000
Jennifer Gaudioso of Sandia's Center for Computing Research touted the work Department of Energy labs have done to support AI advances.

The Department of Energy’s national labs have an especially critical role to play in the advancement of artificial intelligence systems and research into the technology, a top federal official said Tuesday during a Joint Economic Committee hearing on AI and economic growth.

Jennifer Gaudioso, director of the Sandia National Laboratory’s Center for Computing Research, emphasized during her testimony the role that DOE’s national labs could have in both accelerating computing capacity and helping support advances in AI technology. She pointed to her own lab’s work in securing the U.S. nuclear arsenal — and the national labs’ historical role in promoting high-performance computing. 

“Doing AI at the frontier and at scale is crucial for maintaining competitiveness and solving complex global challenges,” Gaudioso said. “Breakthroughs in one area beget discoveries in others.”

Gaudioso also noted the importance of building AI systems based on more advanced data than the internet-based sources used to build systems like ChatGPT. That includes government datasets, she added.

“What I get really excited about is the transformative potential of training models on science data,” she said. “We can then do new manufacturing. We can make digital twins of the human body to take drug discovery from decades down to months. Maybe 100 days for the next vaccine.” 

The national labs’ current work on artificial intelligence includes AI and nuclear deterrence, national security, non-proliferation, and advanced science and technology, Gaudioso shared. She also referenced the Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) — a DOE effort focused on using supercomputing for AI. The FASST initiative was announced last month.

Last November, FedScoop reported on how the Oak Ridge National Laboratory in Tennessee was preparing its supercomputing resources — including the world’s fastest supercomputer, Frontier — for AI work. 

Tuesday’s hearing follows the White House’s continued promotion of new AI-focused policies and comes as Congress mulls legislation focused on both regulating and incubating artificial intelligence.

NSF, Energy announce first 35 projects to access National AI Research Resource pilot
https://fedscoop.com/nsf-energy-announce-first-projects-for-nairr-pilot/
Mon, 06 May 2024 15:13:09 +0000
The projects will get computational time through the NAIRR pilot program, which is meant to provide students and researchers with access to the AI resources needed for their work.

The National Science Foundation and the Department of Energy on Monday announced the first 35 projects to access the pilot for the National AI Research Resource, allowing computational time for a variety of investigations and studies.

The projects range from research into language model safety and synthetic data generation for privacy, to developing a model for aquatic sciences and using AI for identifying agricultural pests, according to a release from the NSF. Of those projects, 27 will be supported on NSF-funded advanced computing systems and eight projects will have access to those supported by DOE, including the Summit supercomputer at Oak Ridge National Laboratory.

“You will see among these 35 projects an unbelievable span in terms of geography, in terms of ideas, core ideas, as well as application interests,” NSF Director Sethuraman Panchanathan said at a White House event.

The NAIRR, which launched earlier this year in pilot form as part of President Joe Biden’s executive order on AI, is aimed at providing researchers with the resources needed to carry out their work on AI by providing access to advanced computing, data, software, and AI models.

The pilot is composed of contributions from multiple federal agencies and private sector partners, including Microsoft, Amazon Web Services, NVIDIA, Intel, and IBM. Those contributions include access to supercomputers; datasets from NASA and the National Oceanic and Atmospheric Administration; and access to models from OpenAI, Anthropic, and Meta.

In addition to the project awards, NSF also announced the NAIRR pilot has opened the next opportunity to apply for access to research resources, including cloud computing platforms and access to foundation models, according to the release. That includes resources from nongovernmental partners and NSF-supported platforms.

Panchanathan described the appetite for the resource as “pretty strong,” noting that 50 projects have been reviewed as positive. But he said there aren’t yet resources to scale those 50 projects. “There is so much need, and so we need more resources to be brought to the table,” Panchanathan said.

While the pilot continues, there are also bipartisan efforts in Congress to codify and fully fund a full-scale NAIRR. Panchanathan and Office of Science and Technology Policy Director Arati Prabhakar underscored the need for that legislation Monday.

“Fully establishing NAIRR is going to take significant funding, and we’re happy to see that Congress has initiated action,” Prabhakar said, adding that the White House is hopeful “that full funding will be achieved.”

Five takeaways from the AI executive order’s 180-day deadline
https://fedscoop.com/five-takeaways-from-the-ai-executive-orders-180-day-deadline/
Tue, 30 Apr 2024 19:48:31 +0000
AI talent recruiting is surging, while DOE, USDA, DOL and other agencies issue new AI-related guidance.

Many federal agencies were up against the clock this weekend to complete requirements outlined in the October artificial intelligence executive order, ahead of a Monday announcement from the White House that all 180-day actions in the order had been completed. 

The order’s requirements span from the tech talent surge to guidance for various types of AI. Announcements tied to this deadline include guidance on generative AI tools for hiring, a safety and security board focused on AI and new generative AI guidance for federal purchasers.

The White House credited federal agencies with the completion of requirements for the deadline, and included announcements for requirements in the executive order that were due at a later date. Additionally, the executive branch reported that “agencies also progressed on other work tasked by the E.O. over longer timeframes.”

Here are five takeaways from the White House’s 180-day announcement:

1. The AI talent surge’s progress report

    The AI and Tech Talent Task Force reported a 288% increase in AI job applications via a combination of agency hiring, the U.S. Digital Corps, the U.S. Digital Service and the Presidential Innovation Fellows program.

    Additionally, the task force offered 10 recommendations throughout the federal government for “further increasing AI capacity.”

    The task force recommends institutionalizing the U.S. Digital Corps and other technology recruitment programs, enhancing user experience on USAJOBS through the updating of digital service capabilities, exploring a talent exchange engagement with foreign partners that are also looking to invest in AI-related talent and more. 

    The report calls on Congress to grant agencies the ability to use flexible hiring authorities for the AI-talent surge, while also offering pay incentives and support for rotational practices. 

    Significantly, the task force reported that the Office of Personnel Management has “developed a legislative proposal” that aims to enhance compensation flexibilities. That proposal “has been transmitted to Congress.”

2. New actions from the Department of Energy

      The DOE announced several AI-related actions at the deadline that focused on both cybersecurity and environmental concerns, including a new website that exhibits agency-developed AI tools and models.

      The agency’s Office of Critical and Emerging Technologies released a report addressing the potential AI has to “significantly enhance how we manage the [electric] grid” and how climate change’s effect on the environment “will require a substantial increase in the rate of modernization and decarbonization” of the grid. The report offers considerations for how large language models might assist compliance with federal permitting, how AI could enhance resilience and more. 

      DOE also announced a $13 million investment, through the new VoltAIc initiative, to build AI-powered tools to improve the siting and permitting of clean energy infrastructure. Significantly, the agency announced that it is establishing a working group to make recommendations by June on meeting the energy demands of AI and data center infrastructure.

      Additionally, the agency’s Cybersecurity, Energy Security and Emergency Response (CESER) office worked with energy sector partners — with support from the Lawrence Livermore National Laboratory — to create an interim assessment to identify opportunities and potential risks regarding AI use within the sector.

3. Department of Labor guidance on AI and tech-based hiring systems

        The DOL was six months early on meeting its requirement to publish guidance for contractors regarding non-discrimination in talent acquisition that involves AI and other technology-based hiring programs. 

        The report points to the use of AI systems as having the potential to continue discrimination and unlawful bias. It requires federal contractors to cooperate with the Office of Federal Contract Compliance Programs (OFCCP) by providing requested information on their AI systems in order to prevent discrimination.

        Contractors are not insulated from the risk of violating equal employment opportunity obligations if they use automated systems, the agency states in the report. OFCCP also noted AI-related obligations with regard to compliance evaluations and complaint investigations, which identify whether a contractor is abiding by nondiscrimination requirements.

        While OFCCP reported that it does not endorse products or issue compliance certifications, it does encourage federal contractors to be transparent about AI use in the hiring process and in employment decisions, while safeguarding the private information of all involved parties.

4. USDA’s framework for state, local, tribal and territorial (SLTT) public administrative use of AI

          The U.S. Department of Agriculture issued a framework for SLTTs to use AI to administer the agency’s Food and Nutrition Service (FNS) programs, which include school breakfast, summer food service, emergency food assistance and more. 

          The guidance states that FNS will work with SLTTs for risk management, and lays out four categories of risk for AI usage in regard to the service, ranging from low to high.

          USDA recommends a “human in the loop” in AI implementation for risk mitigation. The framework recommends that staffers who provide human oversight for AI-enabled functions “should receive sufficient training” to assess AI models or functions for accurate outputs.

          The agency also outlines how other uses of the technology may be “rights-impacting” or “safety-impacting,” as designated by FNS.

5. A framework for nucleic acid synthesis screening

            The Office of Science and Technology Policy, the National Science and Technology Council and the Fast Track Action Committee for Synthetic Nucleic Acid Procurement Screening released a framework to encourage synthetic nucleic acid providers to implement screening mechanisms to prevent the misuse of AI for “engineering dangerous biological materials.” 

            This guidance builds on a Department of Health and Human Services strategy document released in October 2023.

            OSTP said in a release that the National Institute of Standards and Technology “will further support implementation of this framework” through engagement with industry entities to “develop technical standards for screening.”

404 page: the error sites of federal agencies
https://fedscoop.com/404-page-the-error-sites-of-federal-agencies/
Tue, 23 Apr 2024 20:55:39 +0000
Technology doesn’t always work in expected ways. Some agencies are using a creative touch to soften an error message.

            Infusing a hint of humor or a dash of “whimsy” in government websites, including error messages, could humanize a federal agency to visitors. At least that’s how the National Park Service approaches its digital offerings, including its 404 page. 

            “Even a utilitarian feature, such as a 404 page, can be fun — and potentially temper any disappointment at having followed a link that is no longer active,” an NPS spokesperson said in an email to FedScoop. “Similar to our voice and tone on other digital platforms, including social media, our main goal is to always communicate important information that helps visitors stay safe and have the best possible experience.”

404 pages are what appear when a server cannot locate a website or resource at a specific URL. Hitting a 404 could happen for a number of reasons: a spelling error in the URL, a page that no longer exists, or a page the server moved without setting up a redirect. As a result of the error, many different entities with websites, such as state and local governments, have had a stroke of creative genius to make users aware of an issue while also having a bit of fun — which rings true for some federal agencies as well.
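
For readers who want to see the mechanics, the snippet below is a minimal, hypothetical Python sketch (not any agency's actual implementation) of how a web server returns a 404 status code along with a custom, friendlier error page when a requested path doesn't exist:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical site with a single real page; every other path triggers a 404.
PAGES = {"/": "<h1>Welcome!</h1>"}

class FriendlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in PAGES:
            body, status = PAGES[self.path].encode(), 200
        else:
            # Return the proper 404 status, but soften it with a custom page,
            # much like the agency examples described in this article.
            body = b"<h1>404: This trail leads nowhere.</h1><p>Try the sitemap.</p>"
            status = 404
        self.send_response(status)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), FriendlyHandler).serve_forever()
```

Browsing to any path other than "/" on localhost:8000 would show the custom message while still signaling the error correctly to browsers and search crawlers.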

While 404 pages could seem like a silly or boring part of the federal government’s use of technology, there has been a significant push in the Biden administration, specifically out of the Office of Management and Budget, to enhance the user experience of federal agencies’ online presence — with a focus on accessibility.

            NPS’s spokesperson said the agency strives to make its website “as user-friendly as possible” and “have processes in place” to make sure that the links are working. 

            Currently, the park service’s site has a revolving 404 page that showcases several different nature-themed images, with puns or quotes alongside information on how to get back on the right track for whatever online adventure a visitor seeks. 

            NPS said that it doesn’t have any plans to update its error page, “but we’re always working to better understand our users and to improve the user experience of NPS.gov and all our digital products.”

            So, until further notice, visitors can still see an artistic rendering of a bear — complete with a relevant pun — if they get a little turned around on NPS’s site.

            NPS isn’t alone in walking a line of informing the public about website miscommunications and simultaneously showcasing a bit of humor. The Federal Bureau of Prisons, for one, told FedScoop in an email that it “seeks to optimize the user experience in performance, access and comprehension.”

            FBOP error page message

            “The design of the FBOP’s 404 page was meant to be both functional and informative; by combining imagery with text, we educate the user as to the nature of a 404 error beyond standard system language and provide explanations as to why the error occurred,” Benjamin O’Cone, a spokesperson for FBOP, said in an email to FedScoop. 

            Unlike other agencies, the FBOP’s 404 imagery is not totally relevant to the mission of the bureau. Instead, it offers something a bit more meta than the others — referring to the 404 page as a “door that leads to nowhere.”

            “While the Federal Bureau of Prisons (FBOP) seeks to ensure a fully responsive and evolving website, we recognize that there may be occasions where search engine indexing is outdated and may contain links to expired pages,” O’Cone said.

            Similarly, NASA has a specific area of its 404 page that shares information about its updated, or “improved,” site, with an option to look at a sitemap and submit feedback. “Rockets aren’t the only thing we launch,” the agency muses.


            This also comes with an equally creative 404 page, stating that the “cosmic object you were looking for has disappeared beyond the horizon,” against the backdrop of outer space. 

            Other websites, like the National Institute of Standards and Technology’s site, may not have artistic renderings or out-of-this-world visuals, but NIST instead shares a joke centered around the agency’s area of interest. 

            As NIST releases significant frameworks and updated guidance for different areas of federal technology use and deployment, it only makes sense that the agency refers to its error page as a request that isn’t standard.

            While this collection of websites represents just a handful that add a creative touch to error messages, many government entities lack the same information and resources that others have. 


            For example, see the Department of Energy, which simply states that “the requested page could not be found” and offers no further clue as to what a user could be experiencing.

AI won’t replace cybersecurity workforce, agency leaders say
https://fedscoop.com/ai-cybersecurity-workforce-automation/
Mon, 01 Apr 2024 21:15:10 +0000
DOE, GSA cyber experts said automation will help the workforce, not replace it.

            For cybersecurity specialists working in the federal government, the flood of artificial intelligence tools in recent years has had a transformative effect on agencies’ work. 

            In these relatively nascent days, some federal cyber officials have said they believe that AI provides more of an advantage to defenders than attackers in cyberspace, while others warn that the pace of innovation looms as a threat to the country. 

            But from a workforce standpoint, agency cyber experts believe that the worst fears of AI replacing humans won’t be realized. 

            Speaking during an Advanced Technology Academic Research Center event last week on intelligent data and cyber resilience, federal IT leaders delivered a clear message to the cyber workforce: “Automation will not replace humans,” said Amy Hamilton, senior cybersecurity adviser for policy and programs at the Department of Energy. 

            “What it’s going to do is enable us and make it better. Every single time I see the stats on the cybersecurity workforce — trust me, there is more than enough work to go around. Don’t worry about your job going away from AI. AI is just going to be your personal assistant and help you even more.”

            Hamilton, who previously served as a cybersecurity policy analyst with the Office of Management and Budget, pointed to the 2021 breach of a water treatment plant in Oldsmar, Fla., as an example of the need for human response. An Oldsmar plant operator flagged the issue of dangerous levels of sodium hydroxide before they were released into the system. 

            “It happened that somebody was monitoring it, they noticed it, they prevented chemicals from” entering the system, Hamilton said. “We have to make sure that we’re putting all the checks and balances in place.”

            Though subsequent reporting questioned whether an outside hacker was actually responsible for the Oldsmar incident, Hamilton’s point about the importance of continuous monitoring remains.

            “One of the things about sites that are mostly based on operational technology is they are designed for failover to manual, and a lot of people are like ‘automate, automate,’” she said. “You can do that, but is that a lot of risk? By having humans monitoring these systems as well as what we’ve talked about with the importance of the automation, it’s going to come into play.”
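
As a rough illustration of the "checks and balances" pattern Hamilton describes, here is a minimal, hypothetical Python sketch (not DOE guidance or any real plant's control logic; the chemical thresholds are invented) in which the automation only flags an out-of-range sensor reading and leaves the response to a human operator:

```python
# Hypothetical safe operating range for a monitored chemical, in parts per million.
SAFE_RANGE_PPM = (50.0, 100.0)

def review_reading(ppm: float) -> str:
    """Flag anomalous readings for a human operator instead of acting autonomously."""
    low, high = SAFE_RANGE_PPM
    if low <= ppm <= high:
        return f"ok: {ppm} ppm within {low}-{high}"
    # The automation stops at raising an alarm; a person decides what happens next.
    return f"ALERT: {ppm} ppm outside {low}-{high}; operator review required"

if __name__ == "__main__":
    for reading in (75.0, 11100.0):  # the second value mimics an anomalous spike
        print(review_reading(reading))
```

The design choice is the point Hamilton makes: the automated check narrows attention, while the decision to intervene stays with a person monitoring the system.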

In DOE’s 16-page AI inventory, four use cases employ robotic process automation, while another from the Lawrence Livermore National Laboratory leverages automation and robotics for “accelerating hardware development and interpretation of sensor data to improve process reliability.”

            Alyssa Feola, a cybersecurity adviser at the General Services Administration, also expressed concern about removing humans from the cyber workforce. Leaving all system reviews to AI tools could lead to “really tainted stuff,” she said. 

            “We need these people to do this work,” Feola said. “We’re not going to automate people out of these jobs because it is going to take people doing the work, and I think that’s what’s really most important.”

            Working with AI in federal agencies is just one piece of the current technological evolution that the government and society more broadly are undergoing. These “new challenges” are a lot to process, Hamilton said, but there’s really only one path forward.

            “Now, we have to change the way that we’re thinking and as older people need to be much more open to the next generation and opening up these concepts, because technology is going to keep changing,” she said. “We have to change with it.”

From research to talent: Five AI takeaways from Biden’s budget
https://fedscoop.com/five-ai-takeaways-bidens-budget/
Tue, 12 Mar 2024 18:56:16 +0000
The National Science Foundation, Department of Energy and Department of Commerce would get some of the highest investments for artificial intelligence-related work under the latest budget released by the White House.

            President Joe Biden’s fiscal year 2025 budget announced Monday seeks billions in funding to support the administration’s artificial intelligence work, putting premiums on research, talent acquisition, and ensuring safety of the technology.

            The roughly $3 billion requested for AI investments largely reflects the priorities in Biden’s October executive order on the budding technology, which outlined a path forward to harness AI’s power while also creating standards for responsible use. The request would direct some of the biggest sums to agencies like the National Science Foundation, the Department of Energy and the Department of Commerce.

            In total, the Biden administration requested $75.1 billion for IT spending across civilian agencies in fiscal 2025, a small uptick from the $74.4 billion it asked for in 2024.

            The president’s budget comes a week after Congress avoided a shutdown by passing a package of six appropriations bills for the current fiscal year. Notably, those bills included cuts for agencies like NSF and Commerce’s National Institute of Standards and Technology, which were both given key tasks under Biden’s AI order.

            Here are five AI-related takeaways from the request:

            1: Research at NSF

            The budget includes more than $2 billion in funding for NSF’s research and development in AI and other emerging technology areas, including “advanced manufacturing, advanced wireless, biotechnologies, microelectronics and semiconductors, and quantum information science.” It also includes $30 million to fund a second year of the pilot for the National AI Research Resource, which is designed to improve access to resources needed to conduct AI research. The pilot, which began in January, was required under Biden’s order and bipartisan, bicameral legislation pending in Congress seeks to authorize the full-scale NAIRR.

            2: AI cybersecurity at DOE

The budget also includes $455 million “to extend the frontiers of AI for science and technology and to increase AI’s safety, security, and resilience” at DOE. The funding would support efforts “to build foundation models for energy security, national security, and climate resilience as well as tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical-infrastructure, and energy security threats or hazards,” according to the document. It would also support the training of researchers.

            3: AI guardrails at Commerce

            The budget seeks $65 million for Commerce “to safeguard, regulate, and promote AI, including protecting the American public against its societal risks.” Specifically, that funding would support the agency’s work under the AI executive order, such as NIST’s efforts to establish an AI Safety Institute. The recently passed fiscal year 2024 appropriations from Congress included up to $10 million to establish that institute.

            4: AI talent surge

            The request also seeks funding for the U.S. Digital Service, General Services Administration and Office of Personnel Management “to support the National AI Talent Surge across the Federal Government.” The budget estimated that funding to be $32 million, while the analytical perspectives released by the White House put it at $40 million. Those talent surge efforts were outlined in Biden’s executive order and have so far included establishing a task force to accelerate AI hiring, authorizing direct-hire authority for AI positions, and outlining incentives to maintain and attract AI talent in the federal government. 

            5: Supporting chief AI officers

            Finally, Biden’s request also provides funding for agencies to establish chief AI officers (CAIOs). According to an analytical perspectives document released by the White House, those investments would total $70 million. Agencies are required to designate a CAIO to promote the use of AI and manage its risks under Biden’s executive order. So far, many of those designees have been agency chief data, technology or information officials. Specifically, the budget mentioned support for CAIOs at the Departments of Treasury and Agriculture, in addition to funding a new AI policy office at the Department of Labor that would be led by its CAIO.

How risky is ChatGPT? Depends which federal agency you ask
https://fedscoop.com/how-risky-is-chatgpt-depends-which-federal-agency-you-ask/
Mon, 05 Feb 2024 17:20:57 +0000
A majority of civilian CFO Act agencies have come up with generative AI strategies, according to a FedScoop analysis.

            From exploratory pilots to temporary bans on the technology, most major federal agencies have now taken some kind of action on the use of tools like ChatGPT. 

            While many of these actions are still preliminary, growing focus on the technology signals that federal officials expect to not only govern but eventually use generative AI. 

            A majority of the civilian federal agencies that fall under the Chief Financial Officers Act have either created guidance, implemented a policy, or temporarily blocked the technology, according to a FedScoop analysis based on public records requests and inquiries to officials. The approaches vary, highlighting that different sectors of the federal government face unique risks — and unique opportunities — when it comes to generative AI. 

As of now, several agencies, including the Social Security Administration, the Department of Energy, and Veterans Affairs, have taken steps to block the technology on their systems. Some, including NASA, have established or are working on secure testing environments to evaluate generative AI systems. The Agriculture Department has even set up a board to review potential generative AI use cases within the agency.

            Some agencies, including the U.S. Agency for International Development, have discouraged employees from inputting private information into generative AI systems. Meanwhile, several agencies, including Energy and the Department of Homeland Security, are working on generative AI projects. 

            The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury did not respond to requests for comment, so their approach to the technology remains unclear. Other agencies, including the Small Business Administration, referenced their work on AI but did not specifically address FedScoop’s questions about guidance, while the Office of Personnel Management said it was still working on guidance. The Department of Labor didn’t respond to FedScoop’s questions about generative AI. FedScoop obtained details about the policies of Agriculture, USAID, and Interior through public records requests. 

            The Biden administration’s recent executive order on artificial intelligence discourages agencies from outright banning the technology. Instead, agencies are encouraged to limit access to the tools as necessary and create guidelines for various use cases. Federal agencies are also supposed to focus on developing “appropriate terms of service with vendors,” protecting data, and “deploying other measures to prevent misuse of Federal Government information in generative AI.”

Agency policies on generative AI differ

USAID
Policy or guidance: Neither banned nor approved, but employees discouraged from using private data in a memo sent in April.
Notes: Didn't respond to a request for comment. Document was obtained via FOIA.

Agriculture
Policy or guidance: Interim guidance distributed in October 2023 prohibits employee or contractor use in an official capacity and on government equipment. Established a review board for approving generative AI use cases.
Risk assessment: A March risk determination by the agency rated ChatGPT's risk as "high."
Notes: OpenAI disputed the relevance of a vulnerability cited in USDA's risk assessment, as FedScoop first reported.

Education
Policy or guidance: Distributed initial guidance to employees and contractors in October 2023. Developing comprehensive guidance and policy. Conditionally approved use of public generative AI tools.
Sandbox: Is working with vendors to establish an enterprise platform for generative AI.
Relationship with generative AI provider: Not at the time of inquiry.
Notes: Agency isn't aware of generative AI uses in the department and is establishing a review mechanism for future proposed uses.

Energy
Policy or guidance: Issued a temporary block of ChatGPT but said it's making exceptions based on needs.
Sandbox: Sandbox enabled.
Relationship with generative AI provider: Microsoft Azure and Google Cloud.

Health and Human Services
Policy or guidance: No specific vendor or technology is excluded, though subagencies, like the National Institutes of Health, prevent use of generative AI in certain circumstances.
Notes: "The Department is continually working on developing and testing a variety of secure technologies and methods, such as advanced algorithmic approaches, to carry out federal missions," Chief AI Officer Greg Singleton told FedScoop.

Homeland Security
Policy or guidance: For public, commercial tools, employees must seek approval and attend training. Four systems, ChatGPT, Bing Chat, Claude 2 and DALL-E 2, are conditionally approved.
Sandbox: Only for use with public information.
Relationship with generative AI provider: In conversations.
Notes: DHS is taking a separate approach to generative AI systems integrated directly into its IT assets, CIO and CAIO Eric Hysen told FedScoop.

Interior
Policy or guidance: Employees "may not disclose non-public data" in a generative AI system "unless or until" the system is authorized by the agency. Generative AI systems "are subject to the Department's prohibition on installing unauthorized software on agency devices."
Notes: Didn't respond to a request for comment. Document was obtained via FOIA.

Justice
Policy or guidance: The DOJ's existing IT policies cover artificial intelligence, but there is no separate guidance for AI. No use cases have been ruled out.
Sandbox: No plans to develop an environment for testing currently.
Relationship with generative AI provider: No formal agreements beyond existing contracts with companies that now offer generative AI.
Notes: DOJ spokesperson Wyn Hornbuckle said the department's recently established Emerging Technologies Board will ensure that DOJ "remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies."

State
Policy or guidance: Initial guidance doesn't automatically exclude use cases. No software type is outright forbidden, and generative AI tools can be used with unclassified information.
Sandbox: Currently developing a tailored sandbox.
Relationship with generative AI provider: Currently modifying terms of service with AI service providers to support State's mission and security standards.
Notes: A chapter in the Foreign Affairs Manual, as well as State's Enterprise AI strategy, applies to generative AI, according to the department.

Veterans Affairs
Policy or guidance: Developed internal guidance in July 2023 based on the agency's existing ban on using sensitive data on unapproved systems. ChatGPT and similar software are not available on the VA network.
Sandbox: Didn't directly address, but said the agency is pursuing low-risk pilots.
Relationship with generative AI provider: VA has contracts with cloud companies offering generative AI services.

Environmental Protection Agency
Policy or guidance: Released a memo in May 2023 saying personnel were prohibited from using generative AI tools while the agency reviewed "legal, information security and privacy concerns." Employees with "compelling" uses are directed to work with the information security officer on an exception.
Risk assessment: Conducting a risk assessment.
Sandbox: No testbed currently.
Relationship with generative AI provider: EPA is "considering several vendors and options in accordance with government acquisition policy," and is "also considering open-source options," a spokesperson said.
Notes: The agency intends to create a more formal policy in line with Biden's AI order.

General Services Administration
Policy or guidance: Publicly released a policy in June 2023 saying it blocked third-party generative AI tools on government devices. According to a spokesperson, employees and contractors can only use public large language models for "research or experimental purposes and non-sensitive uses involving data inputs already in the public domain or generalized queries. LLM responses may not be used in production workflows."
Sandbox: The agency has "developed a secured virtualized data analysis solution that can be used for generative AI systems," a spokesperson said.

NASA
Policy or guidance: A May 2023 policy says public generative AI tools are not cleared for widespread use on sensitive data. Large language models can't be used in production workflows.
Risk assessment: Cited security challenges and limited accuracy as risks.
Sandbox: Currently testing the technology in a secure environment.

National Science Foundation
Policy or guidance: Guidance for generative AI use in proposal reviews is expected soon; the agency also released guidance for the technology's use in merit review. A set of acceptable use cases is being developed.
Sandbox: "NSF is exploring options for safely implementing GAI technologies within NSF's data ecosystem," a spokesperson said.
Relationship with generative AI provider: No formal relationships.

Nuclear Regulatory Commission
Policy or guidance: In July 2023, the agency issued an internal policy statement to all employees on generative AI use.
Risk assessment: Conducted "some limited risk assessments of publicly available gen-AI tools" to develop the policy statement, a spokesperson said. NRC plans to continue working with government partners on risk management and will work on security and risk mitigation for internal implementation.
Sandbox: NRC is "talking about starting with testing use cases without enabling for the entire agency, and we would leverage our development and test environments as we develop solutions," a spokesperson said.
Relationship with generative AI provider: Has a Microsoft Azure AI license. NRC is also exploring the implementation of Microsoft Copilot when it's added to the Government Community Cloud.
Notes: "The NRC is in the early stages with generative AI. We see potential for these tools to be powerful time savers to help make our regulatory reviews more efficient," said Basia Sall, deputy director of the NRC's IT Services Development & Operations Division.

Office of Personnel Management
Policy or guidance: The agency is currently working on generative AI guidance.
Sandbox: "OPM will also conduct a review process with our team for testing, piloting, and adopting generative AI in our operations," a spokesperson said.

Small Business Administration
Policy or guidance: SBA didn't address whether it had a specific generative AI policy.
Notes: A spokesperson said the agency "follows strict internal and external communication practices to safeguard the privacy and personal data of small businesses."

Social Security Administration
Policy or guidance: Issued a temporary block on the technology on agency devices, according to a 2023 agency report.
Notes: Didn't respond to a request for comment.

            Sources: U.S. agency responses to FedScoop inquiries and public records.
            Note: Chart displays information obtained through records requests and responses from agencies. The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury didn’t respond to requests for comment. The Department of Labor didn’t respond to FedScoop’s questions about generative AI.

Democratic lawmakers propose legislation to study AI’s environmental impacts
https://fedscoop.com/democratic-lawmakers-propose-legislation-to-study-ais-environmental-impacts/
Thu, 01 Feb 2024 18:34:47 +0000
The legislation would give new responsibilities to the EPA, the DOE, and NIST.

A group of Democratic lawmakers has introduced legislation focused on studying artificial intelligence’s impact on the climate.

            The proposal, which is called the Artificial Intelligence Environmental Impacts Act of 2024, would have the National Institute of Standards and Technology create a methodology to evaluate the environmental consequences AI might create. Critically, the bill comes amid growing focus on AI’s energy consumption and compute requirements. 

            “There is a Dickensian quality to the use of AI when it comes to our environment: It can make our planet better, and it can make our planet worse,” Sen. Ed Markey, D-Mass., said in a statement. “Our AI Environmental Impacts Act would set clear standards and voluntary reporting guidelines to measure AI’s impact on our environment. The development of the next generation of AI tools cannot come at the expense of the health of our planet.” 

            The bill was introduced by Sens. Markey and Martin Heinrich, D-N.M., as well as Reps. Anna Eshoo, D-Calif., and Don Beyer, D-Va. The proposal has several components: The Environmental Protection Agency would conduct a study into the climate impact of AI, while NIST would create an AI Environmental Impact consortium and develop a voluntary reporting system for companies to disclose potential climate impacts of their models.

            The legislation would also have the Department of Energy, NIST, and the EPA submit a joint report within four years. Collectively, the agencies would also be expected to provide recommendations for future legislation. 

            The bill has the support of several climate groups, as well as the AI company Hugging Face. It also comes as Congress ramps up its effort to create new legislation focused on the emerging technology. Notably, Rep. Beyer told FedScoop last month that he expects that AI legislation could finally be approved in 2024. 

Software license purchases need better agency tracking, GAO says
https://fedscoop.com/federal-software-licenses-gao-report/
Mon, 29 Jan 2024 22:38:06 +0000
Agencies are missing out on cost savings on purchases of IT products and cyber-related investments, according to a new Government Accountability Office report.

            Federal agencies are missing out on cost savings and making too many duplicative purchases when it comes to IT and cyber-related investments, according to a new Government Accountability Office report.

            With an annual spend of more than $100 billion on IT products, the federal government is falling short on the consistent tracking of its software licenses, leading to missed opportunities for cost reductions, the GAO found. And though there are federal initiatives in place to “better position agencies to maximize cost savings when purchasing software licenses,” the GAO noted that “selected agencies have not fully determined over- or under-purchasing of their five most widely used software licenses.”

The GAO’s study looked at software licenses purchased by the 24 Chief Financial Officers Act agencies, finding that 10 vendors made up the majority of the most widely used licenses. For fiscal year 2021, Microsoft held by far the largest share of the amounts agencies paid to those vendors (31.3%), followed by Adobe (10.43%) and Salesforce (8.7%).

            While the GAO was able to identify and analyze vendors based on government spend, it was “unclear which products under those licenses are most widely used because of agencies’ inconsistent and incomplete data,” the report noted. “For example, multiple software products may be bundled into a single license with a vendor, and agencies may not have usage data for each product individually.”

            “Without better data, agencies also don’t know whether they have the right number of licenses for their needs,” the report continued.
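
As a simple illustration of the kind of inventory check GAO is describing, the hypothetical Python sketch below compares seats purchased against seats actually in use to flag possible over- or under-purchasing; the product names and counts are invented:

```python
# Hypothetical license inventory: seats purchased vs. active users per product.
purchased = {"Office Suite": 12000, "Cloud CRM": 800, "Design Tools": 5000}
active_users = {"Office Suite": 9100, "Cloud CRM": 790, "Design Tools": 5600}

for product, seats in sorted(purchased.items()):
    used = active_users.get(product, 0)
    if used < seats:
        print(f"{product}: {seats - used} unused seats (possible over-purchase)")
    elif used > seats:
        print(f"{product}: {used - seats} users beyond licensed seats (possible under-purchase)")
    else:
        print(f"{product}: right-sized")
```

In practice, as the report notes, many agencies cannot run even this kind of comparison because per-product usage data is incomplete or inconsistently tracked.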

            For their recommendations, the GAO focused on nine agencies based on the size of their IT budgets and then zeroed in on the five most widely used licenses within those agencies. The selected agencies were the Departments of Agriculture, Energy, Housing and Urban Development, Justice, State and Veterans Affairs, as well as the Office of Personnel Management, Social Security Administration and USAID.

            The recommendations centered most on better and more consistent inventory tracking to ensure that agencies didn’t double-dip on software license purchases and were in a better position to take advantage of cost-saving opportunities. There should be more concerted efforts to compare prices, the GAO stated.

            HUD did not say whether it agreed or disagreed with the GAO’s recommendations, while the other eight agencies said in responses that they did.

            Congress in 2023 attempted to rein in duplicative software across the government with the Strengthening Agency Management and Oversight of Software Assets Act, which aimed to consolidate federal software purchasing and give agencies greater ability to push back on restrictive software licensing. However, after passing the House in July, the bill never moved in the Senate.

Federal CDO Council selects FERC, DOE officials as new leaders
https://fedscoop.com/federal-cdo-council-selects-ferc-doe-officials-as-new-leaders/
Fri, 26 Jan 2024 17:53:53 +0000
Kirsten Dalboe will take over as chair, replacing the CFTC’s Ted Kaouk, while Robert King slides into the vice chair spot vacated by DOT’s Dan Morgan.

            The Federal Chief Data Officers Council has new leadership. Kirsten Dalboe, the Federal Energy Regulatory Commission’s chief data officer, and Robert King, the Department of Energy’s CDO, have been appointed to council chair and vice chair, respectively, the organization announced Thursday.

The CDO Council, which was established by the Foundations for Evidence-Based Policymaking Act of 2018, is scheduled to close up shop next January, which could make Dalboe and King the last leaders of the group tasked with improving federal government data operations and decision-making, unless new legislation is passed extending its charter.

            Before taking on the role of FERC’s first CDO, Dalboe served as the director of data operations in the Health and Human Services Office of the Inspector General, where she was the point person on the creation of the agency’s cloud-based Integrated Data Platform and Enterprise Dashboard.

            Prior to her time with the HHS watchdog, Dalboe was the Department of Homeland Security’s chief data architect and director of enterprise data management.

            King moved to his current role at DOE in July 2023 after two-plus years as the CDO and associate commissioner for the Social Security Administration’s Office of Analytics and Improvements. Before that, King spent a decade at DHS in systems modernization and information integration positions.

            Dalboe steps into the role of chair vacated by Ted Kaouk, who spent more than three and a half years in the job. Kaouk, who served as CDO at the Office of Personnel Management and the Department of Agriculture, transitioned in December to the Commodity Futures Trading Commission, where he serves as CDO and director of the agency’s Division of Data.

            King takes over for Dan Morgan, longtime CDO and assistant chief information officer for data services at the Department of Transportation. 
