Executive Order 14110 Archives | FedScoop https://fedscoop.com/tag/executive-order-14110/

Five takeaways from the AI executive order’s 180-day deadline https://fedscoop.com/five-takeaways-from-the-ai-executive-orders-180-day-deadline/ Tue, 30 Apr 2024 19:48:31 +0000 AI talent recruiting is surging, while DOE, USDA, DOL and other agencies issue new AI-related guidance.

Many federal agencies were up against the clock this weekend to complete requirements outlined in the October artificial intelligence executive order, ahead of a Monday announcement from the White House that all 180-day actions in the order had been completed. 

The order’s requirements range from the tech talent surge to guidance for various types of AI. Announcements tied to this deadline include guidance on generative AI tools for hiring, a safety and security board focused on AI, and new generative AI guidance for federal purchasers.

The White House credited federal agencies with completing the deadline’s requirements and also highlighted announcements tied to executive order requirements due at later dates. Additionally, the executive branch reported that “agencies also progressed on other work tasked by the E.O. over longer timeframes.”

Here are five takeaways from the White House’s 180-day announcement:

1. The AI talent surge’s progress report

    The AI and Tech Talent Task Force reported a 288% increase in AI job applications via a combination of agency hiring, the U.S. Digital Corps, the U.S. Digital Service and the Presidential Innovation Fellows program.

    Additionally, the task force offered 10 recommendations across the federal government for “further increasing AI capacity.”

    The task force recommends institutionalizing the U.S. Digital Corps and other technology recruitment programs, enhancing the user experience on USAJOBS by updating digital service capabilities, exploring a talent exchange with foreign partners that are also looking to invest in AI-related talent, and more.

    The report calls on Congress to grant agencies the ability to use flexible hiring authorities for the AI talent surge, while also offering pay incentives and support for rotational practices.

    Significantly, the task force reported that the Office of Personnel Management has “developed a legislative proposal” that aims to enhance compensation flexibilities. That proposal “has been transmitted to Congress.”

    2. New actions from the Department of Energy

      The DOE announced several AI-related actions at the deadline focused on both cybersecurity and environmental concerns, including a new website that showcases agency-developed AI tools and models.

      The agency’s Office of Critical and Emerging Technologies released a report addressing the potential AI has to “significantly enhance how we manage the [electric] grid” and how climate change’s effect on the environment “will require a substantial increase in the rate of modernization and decarbonization” of the grid. The report offers considerations for how large language models might assist compliance with federal permitting, how AI could enhance resilience and more. 

      DOE also announced a $13 million investment through its new VoltAIc initiative to build AI-powered tools that improve the siting and permitting of clean energy infrastructure. Significantly, the agency announced that it is establishing a working group to make recommendations by June on meeting the energy demands of AI and data center infrastructure.

      Additionally, the agency’s Cybersecurity, Energy Security and Emergency Response (CESER) unit worked with energy sector partners — with support from the Lawrence Livermore National Laboratory — to create an interim assessment to identify opportunities and potential risks regarding AI use within the sector.

      3. Department of Labor guidance on AI and tech-based hiring systems

        The DOL was six months early on meeting its requirement to publish guidance for contractors regarding non-discrimination in talent acquisition that involves AI and other technology-based hiring programs. 

        The guidance points to the use of AI systems as having the potential to perpetuate discrimination and unlawful bias. It requires federal contractors to cooperate with the Office of Federal Contract Compliance Programs (OFCCP) by providing requested information on their AI systems in order to prevent discrimination.

        Contractors are not insulated from the risk of violating their equal employment opportunity obligations if they use automated systems, the agency states in the guidance. OFCCP also noted AI-related obligations in compliance evaluations and complaint investigations, which determine whether a contractor is abiding by nondiscrimination requirements.

        While OFCCP reported that it does not endorse products or issue compliance certifications, it does encourage federal contractors to be transparent about AI use in the hiring process and in employment decisions, while safeguarding the private information of all involved parties.

        4. USDA’s framework for state, local, tribal and territorial (SLTT) public administrative use of AI

          The U.S. Department of Agriculture issued a framework for SLTTs to use AI to administer the agency’s Food and Nutrition Service (FNS) programs, which include school breakfast, summer food service, emergency food assistance and more. 

          The guidance states that FNS will work with SLTTs on risk management, and it lays out four categories of risk for AI usage within the service’s programs, ranging from low to high.

          USDA recommends keeping a “human in the loop” in AI implementations to mitigate risk. The framework also advises that staffers who provide human oversight for AI-enabled functions “should receive sufficient training” to assess AI models or functions for accurate outputs.

          The agency also outlines how other uses of the technology may be “rights-impacting” or “safety-impacting,” as designated by FNS.

          5. A framework for nucleic acid synthesis screening

            The Office of Science and Technology Policy, the National Science and Technology Council and the Fast Track Action Committee for Synthetic Nucleic Acid Procurement Screening released a framework to encourage synthetic nucleic acid providers to implement screening mechanisms to prevent the misuse of AI for “engineering dangerous biological materials.” 

            This guidance builds on a Department of Health and Human Services strategy document released in October 2023.

            OSTP said in a release that the National Institute of Standards and Technology “will further support implementation of this framework” through engagement with industry entities to “develop technical standards for screening.”

            State Department trims several uses from public AI inventory https://fedscoop.com/state-department-removes-several-ai-uses/ Tue, 09 Apr 2024 20:01:25 +0000 Deletions include a Facebook ad system used for collecting media clips and behavioral analytics for online surveys.

            The Department of State recently removed several items from its public artificial intelligence use case inventory, including a behavioral analytics system and tools to collect and analyze media clips.

            In total, the department removed nine items from its website — several of which appeared to be identical use cases listed under two different agencies — and changed the bureau for a handful of the remaining items. The State Department didn’t provide a response to FedScoop’s requests for comment on why those uses were removed or changed.

            The deletions came roughly a week after the Office of Management and Budget released draft guidance for 2024 inventories that says, among many other requirements, that agencies “must not remove retired or decommissioned use cases that were included in prior inventories, but instead mark them as no longer in use.” OMB has previously stated that agencies “are responsible for maintaining the accuracy of their inventories.”

            AI use case inventories — which are public, annual disclosures first required by a Trump-era executive order — have so far lacked consistency. Other agencies have also made changes to their inventories outside the annual schedule, including the Department of Transportation and the Department of Homeland Security. OMB’s recent draft guidance and memo on AI governance seek to enhance and expand what is reported in those disclosures.

            OMB declined to comment on the removals or whether it’s given agencies guidance on deleting items in their current inventories.

            Notably, the department removed a use case titled “forecasting,” a pilot that used statistical models to forecast outcomes and that the agency told FedScoop last year it had shuttered. The description for the use case stated that it had been “applied to COVID cases as well as violent events in relation to tweets.”

            Several of the other deleted State Department uses were related to media and digital content. 

            For example, the agency removed the disclosure of a “Facebook Ad Test Optimization System” that it said was used to collect media clips from around the world, a “Global Audience Segmentation Framework” it reported using to analyze “media clips reports” from embassy public affairs sections, and a “Machine-Learning Assisted Measurement and Evaluation of Public Outreach” that it said was used for “collecting, analyzing, and summarizing the global digital content footprint of the Department.” 

            State also removed its disclosure of “Behavioral Analytics for Online Surveys Test (Makor Analytics),” which the agency said was a pilot that “aims to provide additional information beyond self-reported data that reflects sentiment analysis in the country of interest.” That use case had been listed under the Bureau of Information Resource Management and the Under Secretary for Public Diplomacy and Public Affairs. Both references were removed.

            Two of the removed items had been listed under two agencies but had only one disclosure removed: an AI tool for “identifying similar terms and phrases based off a root word” and a use for “optical character recognition and natural language processing on Department cables.”

            Another removed use was for a “Verified Imagery Pilot Project” by the Bureau of Conflict and Stabilization Operations. That pilot tested “how the use of a technology service, Sealr, could verify the delivery of foreign assistance to conflict-affected areas where neither” the department nor its “implementing partner could go.”

            While the use case inventory was trimmed down, the department also appears to be adding uses of AI to its operations. State Chief Information Officer Kelly Fletcher recently announced that the department was launching an internal AI chatbot to help with things like translation after staff requested such a tool. 

            Rebecca Heilweil and Caroline Nihill contributed to this report.

            USAID seeking information about AI for global development playbook https://fedscoop.com/usaid-rfi-ai-for-global-development-playbook/ Mon, 29 Jan 2024 20:59:13 +0000 The global development agency is interested in how AI “can both accelerate and erode development progress,” an official tells FedScoop.

            USAID and the State Department are requesting information to assist the agencies in using artificial intelligence applications for sustainable development.

            USAID and State’s public notice, posted Friday in the Federal Register, requests information on the barriers and opportunities presented by AI, focusing specifically on responsible usage, AI policy and protections, and public engagement with AI governance and risks. A USAID official said in an interview with FedScoop that the agency is thinking about “equitable access” to tools that “may exacerbate gaps that we already see in the world.”

            This request for information is one step toward the agency’s sole requirement in President Joe Biden’s AI executive order: USAID has one year to “promote safe, responsible and rights-affirming development and deployment of AI abroad” through an AI in Global Development Playbook, according to the order’s text.

            The USAID official told FedScoop that the playbook is “really going to outline some principles, some guidelines and really best practices that are accounting for both the social, technical, economic, human rights and security conditions that are going to be impacted by artificial intelligence — specifically not just beyond the U.S. borders, but in countries that USAID works in, which I have to say aren’t always the countries that people are paying attention to.”

            Much of the agency’s use of AI has been in continuing the advancement of global development, including a current partnership with Duke University that is focused on authoritarianism and the closing of civic space that allows support organizations, members of the media and others to respond to “growing restrictions on democratic freedoms of association, assembly and expression.”

            “We’re equally focused on the potential of emerging technologies like artificial intelligence and how they can both accelerate and erode development progress,” the USAID official said. “What that means for us is just this balance of mitigation of risk and understanding that harm. This is largely something that’s important for us to learn and understand. … In some countries, you’re finding folks going all in, because they’re seeing that learning AI tools and learning how to build AI tools and how to use AI tools, in some ways, that is the way that they’re going to leapfrog in this global economy and in this rapidly changing economy.”

            The USAID official stated that the agency has been using AI “for years” and is trying to harness the technology with the agency’s mission in mind. Internally, USAID is looking to minimize time on tasks that do not directly correlate with “high-value tasks.” 

            “We’re at an agency that is quite literally trying to solve the world’s most pressing challenges,” the official said. “There will never be enough people, enough hours or enough money to do that. So these types of tools like artificial intelligence can help us be more targeted in our approach. If some tasks can be a bit more automated, that’s great, and certainly making sure that we mitigate the risk by putting human eyes on the final products to make sure that it has integrity, that the datasets we’re working on have integrity.”

            Nuclear Regulatory Commission CIO David Nelson set to retire https://fedscoop.com/nuclear-regulatory-commission-cio-david-nelson-set-to-retire/ Wed, 24 Jan 2024 23:34:34 +0000 Scott Flanders, the NRC’s deputy chief information officer, will serve as the acting CIO and acting chief AI officer until a permanent one is selected.

            The Nuclear Regulatory Commission’s chief information officer, David Nelson, will be retiring at the end of the week, according to an agency spokesperson. 

            In an email to FedScoop, the NRC spokesperson said Nelson will be leaving the agency effective Jan. 26. Taking his place as acting chief AI officer and CIO is Scott Flanders, the commission’s current deputy CIO. 

            Nelson was appointed as the regulatory agency’s CIO in 2016, leaving his previous position as CIO and director of the Office of Enterprise Information for the Centers for Medicare and Medicaid Services. 

            Nelson was recently appointed as the NRC’s CAIO, in light of a long-awaited executive order on AI from President Joe Biden. While the order did not include the NRC as an agency that will be required to eventually name a CAIO, the commission told FedScoop previously that it was “assessing whether and how it applies.”

            Additionally, the NRC spokesperson confirmed that Victor Hall, the deputy director of the Division of Systems Analysis in the Office of Nuclear Regulatory Research, serves as the responsible AI official under Executive Order 13960, issued by the Trump administration. The NRC was also exempted from that requirement as an independent regulatory agency.

            Degree requirements are hurting government’s AI recruitment efforts, House lawmakers and experts say https://fedscoop.com/degree-requirements-hurting-gov-ai-recruitment-efforts/ Thu, 18 Jan 2024 17:12:31 +0000 Rep. Mace tells FedScoop that newly trained and upskilled workers without a four-year degree are often “more qualified” for federal AI jobs.

            Federal employment standards for artificial intelligence-trained employees are burdensome and end up discouraging workers who are knowledgeable in the emerging tech from seeking such jobs, lawmakers and witnesses said during a House Cybersecurity, Information Technology and Government Innovation subcommittee hearing Wednesday. 

            AI-trained employees who have been upskilled and certified through intensive training programs rather than earning a degree from a four-year institution can be considered unqualified to work for the federal government, according to testimony from Timi Hadra, an IBM client partner and the company’s senior state executive for West Virginia. 

            Despite the call to action from the White House through the AI executive order, Hadra said that the government’s efforts so far to hire more talent from diverse educational backgrounds are “not enough.”

            Subcommittee Chair Nancy Mace, R-S.C., said in an interview with FedScoop after the hearing that Hadra’s answer was illuminating.

            “Hearing that testimony today and asking that question of IBM is certainly very helpful to understand what the real world and the reality is like, on the ground with tech companies that have these federal contracts,” Mace said. “If 20% of the workforce, or more, doesn’t have that four-year degree, it’s clearly hindering our ability to meet the demands that we have in the tech, cyber and innovation AI space.”

            Hadra noted that IBM has a six-month curriculum for its cybersecurity apprenticeship program that trains employees in these disciplines. She said that the workers are “ready to hit the ground running on those programs, and because they don’t meet those minimum qualifications, we are not able to put them on that contract.”

            Mace added that the more recently trained and upskilled employees could be “more qualified” than those who hold a degree because “they put that skillset into practice.” 

            “We have a shortage of 700,000 cybersecurity workers across the private and public sectors,” Mace said during the hearing. “We know that our traditional education system doesn’t produce nearly enough degreed graduates in the field to fill the need. We also know that that shortfall would be much worse if not for the appearance of nimble educational alternatives. That includes short-term ‘boot camp’ programs that issue non-degree credentials like certifications and badges.”

            NIST seeks public input on its AI executive order requirements https://fedscoop.com/nist-seeks-ai-executive-order-requirement-information/ Wed, 20 Dec 2023 21:23:23 +0000 The Department of Commerce’s National Institute of Standards and Technology is seeking information to aid its implementation of AI requirements in Biden’s recent executive order.

            The National Institute of Standards and Technology is looking for information to assist how it implements several requirements under President Joe Biden’s artificial intelligence executive order, including the development of evaluation capabilities and the creation of red-teaming test guidance.

            The Department of Commerce agency released a request for information for public inspection on the Federal Register on Tuesday. Comments, it said, must be received before Feb. 2, 2024. 

            “I want to invite the broader AI community to engage with our talented and dedicated team through this request for information to advance the measurement and practice of AI safety and trust,” Laurie E. Locascio, NIST’s director and under secretary of commerce for standards and technology, said in a written statement Tuesday. “It is essential that we gather all perspectives as we work to establish a strong and unbiased scientific understanding of AI, which has the potential to impact so many areas of our lives.”

            The request specifically relates to NIST’s requirements under the order to establish best practices for industry on AI development, create guidance for evaluating AI capabilities, produce a report on reducing the risks posed by AI-generated synthetic content, and make a plan for developing global consensus standards.

            Other requirements for the agency under the order — such as those on cybersecurity, privacy, and synthetic nucleic acid sequencing — “are being addressed separately from this RFI,” NIST said in a release.

            The request follows other ongoing AI work by the agency. Last month, NIST sent out a request seeking participants for a new AI consortium, which it said would be essential to its work under the executive order. 
