Rebecca Heilweil Archives | FedScoop
https://fedscoop.com/author/rebecca-heilweil/

Bipartisan Senate bill calls on Commerce to lead AI push with small businesses
https://fedscoop.com/bipartisan-senate-bills-calls-on-commerce-to-lead-ai-push-with-small-businesses/
Wed, 12 Jun 2024 19:05:29 +0000
Legislation from Sens. Cantwell and Moran tasks Commerce and SBA with the creation of AI training resources for small businesses in underserved communities.

A new bill from a bipartisan pair of senators aims to accelerate small business use of artificial intelligence, assigning new responsibilities to both the Commerce Department and the Small Business Administration to provide training in the technology. 

The legislation from Sens. Maria Cantwell, D-Wash., and Jerry Moran, R-Kan., titled the Small Business Artificial Intelligence Training and Toolkit Act, would have the Commerce secretary work with the administrator of the SBA on creating AI training resources for small businesses located in rural areas, Tribal communities, or other underserved regions. The training resources would be centered on artificial intelligence and emerging technologies, including quantum technologies, among other topics.

Those trainings would be provided via grants distributed by the SBA, as well as through gifting from the private sector. The Commerce Department would also submit reports to Congress about the state of the program. The legislation requires Commerce to update these trainings, too. 

“Small businesses are the foundation of the U.S. economy, making up 99 percent of all businesses,” Cantwell said in a statement. “They drive economic growth and innovation. It is essential that all American entrepreneurs — especially our small businesses — have access to AI training and reskilling in the 21st-century marketplace. This bill gives small businesses a boost with new tools to thrive as we step into this innovative era.”

The SBA has already taken some steps to encourage businesses to deploy the technology, though the agency’s ability to inventory its AI use cases has also attracted some scrutiny from Congress.

OpenAI official meets with the USAID administrator
https://fedscoop.com/openai-official-meets-with-the-usaid-administrator/
Tue, 11 Jun 2024 18:13:52 +0000
Samantha Power’s meeting with OpenAI’s Anna Makanju comes amid continued investments and interest from the international development agency in the technology.

USAID Administrator Samantha Power met this week with OpenAI’s head of global affairs, according to an agency press release, a move that comes as the international development organization continues to invest in artificial intelligence while also raising concerns about privacy, security, bias, and other risks associated with the technology.

The Monday meeting with OpenAI’s Anna Makanju focused on artificial intelligence’s impact on global development, the release stated. Topics included “advancing progress in key sectors like global health and food security, preventing the misuse of AI, and strengthening information integrity and resilience in USAID partner countries.” 

The announcement comes as several federal agencies, including NASA and the Department of Homeland Security, experiment with OpenAI’s technology. USAID is also prioritizing the exploration of artificial intelligence use cases and is in the midst of developing a playbook for AI in global development.

“Administrator Power and Vice President Makanju also discussed USAID’s commitment to localization, and the potential for generative AI and other AI tools to support burden reduction for USAID implementing partners – in particular, burdens that disproportionately impact local organizations,” the agency said.

Meanwhile, OpenAI appears to be continuing to look for ways to work with U.S. federal agencies. Makanju, for her part, has previously said that government use of OpenAI tools is a goal for the company. At a conference hosted by Semafor in April, she said she was “bullish” on government use of the technology because of its role in providing services to people.

IRS defends use of biometric verification for online FOIA filers
https://fedscoop.com/irs-defends-use-of-biometric-verification-for-online-foia-filers/
Mon, 10 Jun 2024 20:54:49 +0000
The tax agency directs users to file public records requests through ID.me, a tool that has sparked concerns in Congress and from privacy advocates.

A few years ago, the Internal Revenue Service announced that it had begun using the identity credential service ID.me for taxpayers to access various online tools. At some point between then and now, the IRS quietly began directing people filing public records requests through its online portal to register for the private biometric verification system.

Though Freedom of Information Act requests to the tax agency can still be filed through FOIA.gov, by mail, by fax, or even in person, the IRS’s decision to point online filers to ID.me — whose facial verification technology has, in the past, drawn scrutiny from Congress — has raised some advocates’ eyebrows.

Alex Howard, who directs the Digital Democracy Project and also serves on the FOIA Advisory Committee hosted out of the National Archives, said in an email to FedScoop that language on the IRS website seems to encourage ID.me use for faster service. It also doesn’t make significant references to FOIA.gov, a separate governmentwide portal that agencies are supposed to work with by law, he said. 

“While modernizing authentication systems for online portals is not inherently problematic, adding such a layer to exercising the right to request records under the FOIA is overreach at best and a violation of our fundamental human right to access information at worst, given the potential challenges doing so poses,” Howard said. 

The IRS defended its use of the service in responses to FedScoop questions, noting the other ways people can file FOIA requests and that the tool is only required of those seeking to interact with their public records electronically. The agency also said that ID.me follows National Institute of Standards and Technology guidelines for credential authentication services.

“The sole purpose of ID.me is to act as a Credential Service Provider that authenticates a user interested in using the IRS FOIA Portal to submit a FOIA request and receive responsive documents,” a spokesperson for the agency said. “The data collected by ID.me has nothing to do with the processing of a FOIA request.”

The IRS website currently directs people trying to access the agency’s online FOIA portal to use ID.me, which describes itself as a “digital passport” that “simplifies how individuals prove and share their identity online.” According to one IRS page, the “IRS Freedom of Information Act (FOIA) Public Access Portal now uses a sign-on system that requires identity verification.” Those hoping to access online FOIA portal accounts created before June 2023 also must register for ID.me, the site states. 

The ID.me login page directs users to the FOIA portal, stating that those who can’t verify their identity can try visiting the ID.me help page or pursue alternative options. From there, another page tells users to try “another method” for submitting a FOIA. 

The system requires users to upload a picture of their ID. They can then either take a selfie, which biometric facial verification software compares against the uploaded ID, or wait for a video appointment to confirm their identity.

The system also appears to prompt users to share their Social Security number and includes terms of service that discuss the handling of biometric data. Two FedScoop reporters tried registering with the system: one had their expired identification rejected and had to attempt again with a passport, while the other’s driver’s license could not be “read” the first time but was accepted during a second attempt in combination with the video selfie. Both FedScoop reporters later received a letter, by mail, notifying them that their personal information was used to access an IRS service using ID.me.

What an ID.me scan looks like when signing into the IRS’s FOIA portal.
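
ID.me has not publicly detailed how its matching pipeline works. Commercial face-verification systems, however, typically reduce the selfie and the ID photo to embedding vectors produced by a face-recognition model, then accept a match when the vectors’ similarity clears a threshold. The Python sketch below is a generic, hypothetical illustration of that pattern; the stub embedding function and the threshold are placeholders, not ID.me’s actual implementation.

```python
import numpy as np

def face_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-recognition model.

    A production system would run a trained neural network here; this
    stub just derives a deterministic unit vector from the pixels so
    the sketch is runnable.
    """
    vec = np.resize(image.astype(np.float64).ravel(), 128)
    vec -= vec.mean()  # center so unrelated images score near zero
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def faces_match(selfie: np.ndarray, id_photo: np.ndarray,
                threshold: float = 0.8) -> bool:
    # Cosine similarity of unit vectors reduces to a dot product.
    similarity = float(np.dot(face_embedding(selfie), face_embedding(id_photo)))
    return similarity >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    selfie = rng.random((64, 64))
    print(faces_match(selfie, selfie))                # identical images: True
    print(faces_match(selfie, rng.random((64, 64))))  # unrelated images: False
```

A real deployment would also handle liveness checks, retries, and the video-chat fallback described above; none of that is modeled here.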

The IRS spokesperson said that the collection of a Social Security number is related to the digital authentication process, not the processing of the FOIA request itself, and biometric information is not retained by the IRS. 

“The IRS requires ID.me to delete the selfie and biometric information within 24 hours for taxpayers who verify using the self-service process,” the spokesperson said, adding that “ID.me is also required to delete any video chat recording within 30 days for taxpayers who choose to verify using the video chat pathway.” 

An ID.me spokesperson said in an email to FedScoop that no state or local agency uses the system for identity verification or as authentication for FOIA portals.  

The FOIA portals for the Treasury Department and Social Security Administration do use ID.me, the company spokesperson noted, but both agencies appear to provide more information about alternative ways to submit requests online. ID.me referred additional questions about the IRS’s use of its service for the FOIA portal to the tax agency. Treasury did not respond to a request for comment by the time of publication.

The Social Security Administration offers both ID.me and Login.gov — another government-run ID service — as options to log into its FOIA portal, FOIAXpress Public Access Link. Like the IRS, the SSA said in response to FedScoop questions that mail, fax, email and FOIA.gov are alternatives for filing FOIAs. A Social Security number is not required for accessing FOIAXpress, though it appears to be required for signing into ID.me, which some users might be using to file FOIA requests.

“In the scenario where a customer uses their ID.me account to access FOIAXpress PAL, the customer selects this sign in option on the login page and is redirected to a webpage on ID.me’s website,” an agency spokesperson said. “If the customer creates an account in this session, ID.me retains info on the registration event in their records.”

They continued: “Upon successful account creation, the user is routed back to SSA’s website and allowed access to FOIAXpress PAL. SSA and ID.me retain info on the transaction in our respective records.”

“Submitting a Social Security Number to ID.me is related to the digital identity authentication process; generally it is not required for the FOIA process,” the IRS spokesperson added. 
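
The sign-in sequence the agencies describe follows a standard federated-login pattern, akin to an OAuth 2.0/OpenID Connect authorization-code flow: the portal hands the user off to the identity provider, which verifies them and redirects back with a one-time code that the portal exchanges for tokens. Below is a minimal sketch of that generic flow, assuming hypothetical endpoints and a placeholder client ID rather than ID.me’s documented API.

```python
"""Generic redirect-based sign-in sketch (authorization-code style).

All URLs, the client ID, and the redirect URI are illustrative
placeholders; they are not ID.me's actual endpoints.
"""
import secrets
from urllib.parse import urlencode

import requests
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

IDP_AUTHORIZE_URL = "https://idp.example.com/oauth/authorize"  # placeholder
IDP_TOKEN_URL = "https://idp.example.com/oauth/token"          # placeholder
CLIENT_ID = "foia-portal"                                      # placeholder
REDIRECT_URI = "https://portal.example.gov/callback"           # placeholder

@app.route("/login")
def login():
    # Step 1: send the user to the identity provider's sign-in page.
    state = secrets.token_urlsafe(16)
    session["oauth_state"] = state
    query = urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid",
        "state": state,
    })
    return redirect(f"{IDP_AUTHORIZE_URL}?{query}")

@app.route("/callback")
def callback():
    # Step 2: the provider redirects back with a one-time code, which
    # the portal exchanges for tokens identifying the verified user.
    if request.args.get("state") != session.pop("oauth_state", None):
        return "state mismatch", 400
    token_response = requests.post(IDP_TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": request.args.get("code", ""),
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    }, timeout=10)
    return "signed in" if token_response.ok else ("token exchange failed", 502)
```

In this pattern the portal itself never handles the selfie, document scan, or Social Security number; those stay with the identity provider, which is consistent with the agencies’ statements that the verification data is separate from FOIA processing.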

Albert Fox Cahn, a privacy-focused attorney who directs the Surveillance Technology Oversight Project, expressed concerns about the IRS’s use of ID.me. “This isn’t just creepy and discriminatory, it might break federal law,” he said in a statement to FedScoop. “Under FOIA, public records belong to the public, and no one should have to hand over their biometric data just to see the records they’re entitled to access.” 

The use of ID.me by the government has sparked concerns in the past. In 2022, some members of Congress accused the company of downplaying wait times and misleading people about the way its facial recognition technology worked. The company, meanwhile, has defended its practices, including its work on fighting fraud during the pandemic.

Matt Bracken contributed to this article.

This story was updated June 11, 2024, to update Alex Howard’s professional affiliation.

National lab official highlights role of government datasets in AI work
https://fedscoop.com/national-lab-official-highlights-role-of-government-datasets-in-ai-work/
Wed, 05 Jun 2024 17:53:45 +0000
Jennifer Gaudioso of Sandia’s Center for Computing Research touted the work Department of Energy labs have done to support AI advances.

The Department of Energy’s national labs have an especially critical role to play in the advancement of artificial intelligence systems and research into the technology, a top federal official said Tuesday during a Joint Economic Committee hearing on AI and economic growth.

Jennifer Gaudioso, director of Sandia National Laboratories’ Center for Computing Research, emphasized during her testimony the role that DOE’s national labs could have in both accelerating computing capacity and helping support advances in AI technology. She pointed to her own lab’s work in securing the U.S. nuclear arsenal — and the national labs’ historical role in promoting high-performance computing. 

“Doing AI at the frontier and at scale is crucial for maintaining competitiveness and solving complex global challenges,” Gaudioso said. “Breakthroughs in one area beget discoveries in others.”

Gaudioso also noted the importance of building AI systems based on more advanced data than the internet-based sources used to build systems like ChatGPT. That includes government datasets, she added.

“What I get really excited about is the transformative potential of training models on science data,” she said. “We can then do new manufacturing. We can make digital twins of the human body to take drug discovery from decades down to months. Maybe 100 days for the next vaccine.” 

The national labs’ current work on artificial intelligence includes AI and nuclear deterrence, national security, non-proliferation, and advanced science and technology, Gaudioso shared. She also referenced Frontiers in Artificial Intelligence for Science, Security and Technology (FASST), a DOE effort focused on using supercomputing for AI that was announced last month. 

Last November, FedScoop reported on how the Oak Ridge National Laboratory in Tennessee was preparing its supercomputing resources — including the world’s fastest supercomputer, Frontier — for AI work. 

Tuesday’s hearing comes amid the White House’s continued promotion of new AI-focused policies and as Congress mulls legislation focused on both regulating and incubating artificial intelligence.

Inside NASA’s deliberations over ChatGPT
https://fedscoop.com/inside-nasas-deliberations-over-chatgpt/
Wed, 22 May 2024 14:43:59 +0000
More than 300 pages of documents provide insight into how the space agency thought about generative AI, just as ChatGPT entered the public lexicon.

In the months after ChatGPT’s public release, leaders inside NASA debated the merits and flaws of generative AI tools, according to more than 300 pages of emails obtained by FedScoop, revealing both excitement and concerns within an agency known for its cautious approach to emergent technologies. 

NASA has so far taken a relatively proactive approach to generative AI, which the agency is considering for tasks like summarization and code-writing. Staff are currently working with the OpenAI tools built into Microsoft’s Azure service to analyze use cases. NASA is also weighing generative AI capabilities from its other cloud providers — and it’s in discussions with Google Cloud on plans to test Gemini, the competitor AI tool formerly known as Bard. 
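
NASA’s internal tooling isn’t public, but a request to OpenAI models hosted on Azure generally takes the shape below. This is a minimal summarization sketch using the `openai` Python package’s Azure client; the endpoint, deployment name, and prompt are placeholders, not the agency’s actual configuration.

```python
"""Minimal Azure OpenAI summarization sketch (placeholders throughout)."""
import os

from openai import AzureOpenAI  # pip install "openai>=1.0"

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarize(text: str) -> str:
    # "model" names the Azure *deployment*, not the underlying model ID.
    response = client.chat.completions.create(
        model="my-gpt-deployment",  # placeholder deployment name
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Paste a long document here..."))
```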

Though NASA policy prohibits the use of sensitive data on generative AI systems, that won’t be the case forever. Jennifer Dooren, the deputy news chief of NASA, told FedScoop that the agency is now working with “leading vendors to approve generative AI systems” for use on sensitive data and anticipates those capabilities will be available soon. While the agency’s most recent AI inventory only includes one explicit reference to OpenAI technology, an updated list with more references to generative AI could be released publicly as soon as October. 

In the first weeks of 2023, and as ChatGPT entered the public lexicon, the agency’s internal discussions surrounding generative AI appeared to focus on two core values: researching and investing in technological advances and encouraging extreme caution on safety. Those conversations also show how the agency had to factor in myriad authorities and research interests to coordinate its use. 

“NASA was like anyone else during the time that ChatGPT was rolled out: trying to understand services like these, their capabilities and competencies, and their limitations, like any of us tried to do,” said Namrata Goswami, an independent space policy expert who reviewed the emails, which were obtained via a public records request. 

She continued: “NASA did not seem to have a prior understanding of generative AI, as well as how these may be different from a platform like Google Search. NASA also had limited knowledge of the tools and source structure of AI. Neither did it have the safety, security, and protocols in place to take advantage of generative AI. Instead, like any other institution [or] individual, its policy appeared to be reactive.” 

NASA’s response

Emails show early enthusiasm and demand internally for OpenAI technology — and confusion about how and when agency staffers could use it. In one January 2023 email, Brandon Ruffridge, from the Office of the Chief Information Officer at NASA’s Glenn Research Center, expressed frustration that without access to the tool, interns would have to spend time on “less important tasks” and that engineers and scientists’ research would be held back. In another email that month, Martin Garcia Jr., an enterprise data science operations lead in the OCIO at the Johnson Space Center, wrote that there was extensive interest in getting access to the tech.

By mid-February, Ed McLarney, the agency’s AI lead, had sent a message noting that, at least informally, he’d been telling people that ChatGPT had not been approved for IT use and that NASA data should only be used on NASA-approved systems. He also raised the idea of sending a workforce-wide message, which ended up going out in May. In those opening weeks, the emails seem to show growing pressure on the agency to establish permissions for the tool. 

“We have demand and user interest through the roof for this. If we slow roll it, we run [a] high risk of our customers going around us, doing it themselves in [an] unauthorized, non-secure manner, and having to clean up the mess later,” McLarney warned in a March email to other staff focused on the technology. Another email, from David Kelldorf, chief technology officer of the Johnson Space Center, noted that “many are chomping at the bits to try it out.”

But while some members of the space agency expressed optimism, others urged caution about the technology’s potential pitfalls. In one email, Martin Steele, a member of the data stewardship and strategy team at NASA’s Information, Data, and Analytics Services division, warned against assuming that ChatGPT had “intelligence” and stressed the importance of “The Human Element.” In a separate email, Steven Crawford, senior program executive for scientific data and computing with the agency’s Science Mission Directorate, expressed concerns about the tool’s potential to spread misinformation. (Crawford later told FedScoop that he’s now satisfied by NASA’s guardrails and has joined some generative AI efforts at the agency). 

Email from Steven Crawford, April 10, 2023.

In those first weeks and months of 2023, there were also tensions surrounding security and existing IT procedures. Karen Fallon, the director of Information, Data, and Analytics Services for NASA’s Chief Information Office operations, cautioned in March that enthusiasm for the technology shouldn’t trump agency leaders’ need to follow existing IT practices. (When asked for comment, NASA called Fallon’s concerns “valid and relevant.”)

Email from Karen Fallon, March 16, 2023.

In another instance, before NASA’s official policy was publicized in May, an AI researcher at the Goddard Space Flight Center asked if it would be acceptable for their team to use their own GPT instances with code that was already in the public domain. In response, McLarney explained that researchers should not use NASA emails for personal OpenAI accounts, be conscious about data and code leaks, and make sure both the data and code were public and non-sensitive. 

NASA later told FedScoop that the conversation presented “a preview of pre-decisional, pending CIO guidance” and that it aligned with NASA IT policy — though they noted that NASA doesn’t encourage employees to spend their own funds on IT services for space agency work. 

Email from Martin Garcia, Jr., April 7, 2023.

“As NASA continues to work to onboard generative AI systems it is working through those concerns and is mitigating risks appropriately,” Dooren, the agency’s deputy news chief, said. 

Of course, NASA’s debate comes as other federal agencies and companies continue to evaluate generative AI. Organizations are still learning how to approach the technology and its impact on daily work, said Sean Costigan, managing director of resilience strategy at the cybersecurity company Red Sift. NASA is no exception, he argued, and must consider potential risks, including misinformation, data privacy and security, and reduced human oversight. 

“It is critical that NASA maintains vigilance when adopting AI in space or on earth — wherever it may be — after all, the mission depends on humans understanding and accounting for risk,” he told FedScoop. “There should be no rush to adopt new technologies without fully understanding the opportunities and risks.” 

Greg Falco, a systems engineering professor at Cornell University who has focused on space infrastructure, noted that NASA tends to play catchup on new computing technologies and can fall behind the startup ecosystem. Generative AI wouldn’t necessarily be used for the most high-stakes aspects of the space agency’s work, but could help improve efficiency, he added.

NASA generative AI campaign.

“NASA is and was always successful due to [its] extremely cautious nature and extensive risk management practices. Especially these days, NASA is very risk [averse] when it comes to truly emergent computing capabilities,” he said. “However, they will not be solved anytime soon. There is a cost/benefit scale that needs to be tilted towards the benefits given the transformative change that will come in the next [three-to-five] years with Gen AI efficiency.”

He continued: “If NASA and other similar [government] agencies fail to hop on the generative AI train, they will quickly be outpaced not just by industry but by [nation-state] competitors. China has made fantastic government supported advancements in this domain which we see publicly through their [government] funded academic publications.”

Meanwhile, NASA continues to work on its broader AI policy. The space agency published an initial framework for ethical AI in 2021 that was meant to be a “conversation-starter,” but emails obtained by FedScoop show that the initial framework received criticism — and agency leaders were told to hold off. The agency has since paused co-development on practitioners’ guidance on AI to focus instead on federal AI work, but plans to return to that work “in the road ahead,” according to Dooren.

The space agency also drafted an AI policy in 2023, but ultimately decided to delay it to wait for federal directives. NASA now plans to refine and publish the policy this year. 

New Commerce strategy document points to the difficult science of AI safety
https://fedscoop.com/new-commerce-strategy-document-points-to-the-difficult-science-of-ai-safety/
Tue, 21 May 2024 16:04:36 +0000
The Biden administration seeks international coordination on critical AI safety challenges.

The Department of Commerce on Tuesday released a new strategic vision on artificial intelligence and unveiled more detailed plans about its new AI Safety Institute. 

The document, which focuses on developing a common understanding of and practices to support AI security, comes as the Biden administration seeks to build international consensus on AI safety issues. 

AI researchers continue to debate and study the potential risks of the technology, which include bias and discrimination concerns, privacy and safety vulnerabilities, and more far-reaching fears about so-called general artificial intelligence. In that vein, the strategy points to myriad definitions, metrics, and verification methodologies for AI safety issues. In particular, the document discusses developing ways of detecting synthetic content, model security best practices, and other safeguards.

It also highlights steps that the AI Safety Institute, which is housed within Commerce’s National Institute of Standards and Technology, might take to help promote and evaluate more advanced models, including red-teaming and A/B testing. Commerce expects the labs of NIST — which is still facing ongoing funding challenges — to conduct much of this work. 
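
The document doesn’t prescribe a specific protocol, but the A/B testing it mentions amounts to sending identical prompts to two models and tallying which responses a judge prefers. Here is a toy sketch under those assumptions; the models and judge are trivial placeholders, and real evaluations would use vetted benchmarks and human or model-based judging.

```python
"""Toy A/B evaluation harness: same prompts, two models, one judge."""
import random
from typing import Callable

Model = Callable[[str], str]            # prompt -> response
Judge = Callable[[str, str, str], int]  # (prompt, a, b) -> -1 A wins, 0 tie, 1 B wins

def ab_test(model_a: Model, model_b: Model,
            prompts: list[str], judge: Judge) -> dict[str, int]:
    tally = {"A": 0, "B": 0, "tie": 0}
    for prompt in prompts:
        verdict = judge(prompt, model_a(prompt), model_b(prompt))
        tally["A" if verdict < 0 else "B" if verdict > 0 else "tie"] += 1
    return tally

if __name__ == "__main__":
    # Placeholder models and a random judge, for demonstration only.
    echo = lambda p: f"Model A answers: {p}"
    shout = lambda p: f"MODEL B ANSWERS: {p.upper()}"
    coin_flip_judge = lambda p, a, b: random.choice([-1, 0, 1])
    print(ab_test(echo, shout,
                  ["What is red-teaming?", "Define AI safety."],
                  coin_flip_judge))
```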

“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety,” Commerce Secretary Gina Raimondo said in a statement. “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

The AI Safety Institute is also looking at ways to support the work of AI safety evaluations within the broader community, including through publishing guidelines for developers and deployers and creating evaluation protocols that could be used by, for instance, third-party independent evaluators. Eventually, the institute hopes to create a “community” of evaluators and lead an international network on AI safety. 

The release of the strategy is only the latest step taken by the Commerce Department, which is leading much of the Biden administration’s work on emerging technology. 

Earlier this year, the AI Safety Institute announced the creation of a consortium to help meet goals in the Biden administration’s executive order on the technology. In April, the Commerce Department added five new people to the AI Safety Institute’s executive leadership team.

That same month, Raimondo signed a memorandum of understanding with the United Kingdom focused on artificial intelligence. This past Monday, the UK’s technology secretary said its AI Safety Institute would open an outpost in the Bay Area, its first overseas office. 

DHS official: AI could exacerbate chemical and biological threats
https://fedscoop.com/dhs-official-ai-could-exacerbate-chemical-and-biological-threats/
Mon, 20 May 2024 10:00:00 +0000
The assistant secretary for DHS’s Countering Weapons of Mass Destruction office warned in an interview that AI could supercharge biological research — and invent new pathogens.

A Department of Homeland Security team dedicated to deterring the use of weapons of mass destruction is now studying how artificial intelligence could exacerbate these kinds of threats. In the wake of a report announced last month, one of the top officials with that office is pointing to a series of potential strategies to confront the ways AI tools could be deployed — even inadvertently — to synthesize dangerous chemical and biological materials.  

In an interview with FedScoop, Mary Ellen Callahan, the assistant secretary for the DHS Countering Weapons of Mass Destruction (CWMD) office, outlined how the U.S. government could deal with this kind of challenge, including looking at intellectual property and copyright enforcement and encouraging journals with large stores of biological and chemical research to introduce more stringent access requirements. The effort needs to be whole-of-government and international, she argued. 

“Both the [DHS] secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem,” she said. “We’re going to look to see if we can leverage the currently existing models.”

The interview comes after DHS submitted a report to the president looking at the intersection of Chemical, Biological, Radiological, and Nuclear (CBRN) threats and artificial intelligence. The president’s advisers have recommended making that report public, Callahan said, though only a fact sheet is available right now. AI labs were consulted, along with representatives of the Energy Department, think tanks, and model evaluators. The DOE is also working on a separate, classified report specifically on AI and nuclear threats. “The effort to produce the report regarding nuclear threats and AI is ongoing,” a spokesperson for the agency told FedScoop. 

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: Can you start by explaining what the threat actually is, here? 

Assistant Secretary Mary Ellen Callahan: Artificial intelligence and generative artificial intelligence is the processing of a lot of different data to try to find novel or new content. Let’s talk about biology, specifically: It is using artificial intelligence to update, enhance, and improve research. … We really want to maximize artificial intelligence for good for research while minimizing the malign actors’ ability to leverage artificial intelligence for bad. 

FS: Is the idea that someone could use something like OpenAI to just come up with something really bad instead of something really good?

MEC: We don’t want to give people ideas. But what we want to do is allow the really novel research — the important biological and chemical research breakthroughs — to happen, while still providing hurdles for bad guys to try to get their hands on, say, known recipes for pathogens [and] to make sure that we are leveraging the promise of AI while minimizing the peril. 

FS: Are you making a distinction between chemical, biological, and nuclear? And is there a reason why one would be more relevant than another in terms of AI threats?

MEC: The Countering Weapons of Mass Destruction office here at DHS has been around for about five and a half years. It is intended to be the prevention and detection of all weapons of mass destruction threats. That is usually summarized as chemical, biological, radiological, and nuclear (CBRN). It’s all on the prevention and detection side. We’re really focused on how we deter people before they get to actually triggering something. … The executive order asked us to talk about CBRN threats. We do in the report that is before the president right now generally talk about CBRN threats, and the fact sheet that is out publicly does talk about that. 

We focus primarily on chemical and biological threats for two reasons: One is the access to chemical equations and bio-recipes is higher and it’s more advanced. Both the bio and the chemical [information] are pretty available in the common parlance and the common internet where they could be indexed by artificial intelligence models or frontier models.

… With regard to radiological and nuclear, the research associated with that is often on closed networks and maybe classified systems. The Department of Energy was asked to do a parallel report on nuclear threats specifically. Therefore, we’ve ceded that specific question about radiological or nuclear threats to the classified report the Department of Energy is working on right now.

FS: One of the points that’s made in the public fact sheet is the concern about companies taking heterogeneous approaches in terms of evaluation and red-teaming. Can you talk a little bit more about that?

MEC: All the frontier models have made voluntary commitments to the president from last year. Those [are] promises [like] safety and security, including focusing on high-risk threats, like CBRN. They all want to do a good job. They’re not quite sure exactly how to do that job. 

… We have to develop guidelines and procedures in collaboration with the U.S. government, the private sector, and academia to make sure that we understand how we try to approach these highly sensitive, high-risk areas of information. That we create a culture of responsibility for the AI developers — those voluntary commitments are the first step in that. … But [we need] to make sure that all the folks that are within the ecosystem are all looking at ways to deter bad actors from leveraging either new information or newly visible information that was distilled as a mosaic coming out of generative AI-identifying elements. So it’s really got a look at the whole universe on how to respond to this.

FS: Another thing that was both interesting and worrisome to me was the concern that’s highlighted about limitations in regulation and enforcement and where holes might be in terms of AI.

MEC: I am more sanguine now than I was when I started looking at that. So hopefully, that will give you some comfort. Really, we’re looking at a variety of different laws and factors. … We want to look at existing laws to see if there can be impacts taken, like, for example, export controls, intellectual property, tech transfer, foreign investments. [We want to] look at things that already exist that we could already leverage to go and try to make it be successful and useful.

Some of the authorities are spread throughout the federal government, but that actually could make it stronger because then you have an ability to attack these issues in a couple of different ways, like the intellectual property of misuse [or] if somebody is using something that is copyrighted in order to leverage and create a technical or biological threat. 

The international coordination piece is very important. There’s a really significant interest in leaning in together and working on this whole-of-community effort to establish their appropriate guidelines and really to look to provide additional restraints on models, but also to amplify that culture of responsibility. 

We could look at updating regulatory requirements as the opportunity presents, but we’re not leading with regulations for a couple of reasons: Both the secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem. 

FS: I’m curious about how you see the risks with open-source AI versus things that are not open source. I know that’s a big discussion in the AI community. 

MEC: There are pros and cons to open-source AI. From a CBRN perspective, understanding some of the weights may be helpful, but they also may reveal more information. … There’s a lot of information that’s on the internet and it’s going to be very hard to protect that existing content right now from AI. 

There are also a lot of high-end bio and chem databases that are behind firewalls, that are not on the internet, that are subscription-based, that are really very valuable for biologists. One of the things we’re recommending doing [for] data that isn’t on the internet — or that isn’t readily available to use for models —  is to actually have a higher standard, a higher customer standard, like a know-your-customer procedure. That benefits the promise of AI for good while detracting from bad actors and trying to get access to it. 

FS: Have you had conversations with some of the academic organizations and what are those conversations like? Are they open to this?

MEC: We spoke to a lot of academic organizations, a lot of think tanks, and all the major models. I don’t want to answer the question specifically about high-end databases, but I can say that across the board, people were very supportive of having appropriate controls around sensitive data. 

FS: How do we deal with companies that would not want to help with this or countries that would not want to help with this — like what’s the strategy there? 

MEC: That’s the whole idea. Everyone has to work collaboratively on this whole-of-community effort. Right now, there is a real appetite for that. All of this is early, but I think that people understand that [this is] the year and the time to try to build a governance framework in which to think about these issues.

FS: I’m curious if you would call this like a present threat or something that we should be worried about for the future, whether this is something we’re thinking about, like, this could happen tomorrow, or this could happen in a few years from now?

MEC: We tried to write the report to talk about present risk and near-term future risk. We can look at the speed and the rapidity in which AI models are developing and we can extrapolate kind of what the impact is. I want to highlight a couple of things with regard to the present-day risk to the near future. Right now, they say ChatGPT is like having an undergraduate biology student on your shoulder. There’s some discussion, as these models developed, that it would be like a graduate student on your shoulder. 

I also want to note that we’re talking about CBRN harms that are created by AI, but there also could be unintentional harm. We very much want to put in what I’m calling hurdles, or obstacles for people, who want to do harm, malign actors. But we also have to recognize that there could be unintentional harm that’s created by well-intending actors.

The other thing that we want to do with this whole-of-community effort with these guidelines and procedures that we’re encouraging to be created between international government, private sector, and academia, is to safeguard the digital to physical frontier. Right now, there’s a possibility that as I said, you could have an undergraduate student on your shoulder helping you to search to try to create a new chemical compound, but that — right now —  is mostly on the computer and then the screen and is not yet able to do it in real life. 

We’re really trying to make sure that the border between digital and physical remains as strong as it can be. That’s probably the … three-to-five-year risk: something happens and is capable of being translated into real life. It’s still going to be hard though, hopefully. 

Labor Department releases principles on AI and workers, with pledges from Microsoft, Indeed
https://fedscoop.com/labor-department-releases-principles-on-ai-and-workers-with-pledges-from-microsoft-indeed/
Fri, 17 May 2024 14:45:41 +0000
The White House says it “welcomes additional commitments” from tech companies on the principles.

The Biden administration this week released a list of principles meant to govern how workers interact with artificial intelligence. The move comes in response to last year’s AI executive order and will be followed by a new list of best practices expected to be published by the Labor Department. 

The principles focus on values like ensuring responsible use of workers’ data, supporting workers who might need to be upskilled because of artificial intelligence, and committing to transparency when deploying AI. The principles appear to be voluntary and follow another set of non-binding commitments focused on artificial intelligence announced last July that included pledges from companies like OpenAI and Anthropic.

“Workers must be at the heart of our nation’s approach to AI technology development and use,” acting Labor Secretary Julie Su said in a statement. “These principles announced [Thursday] reflect the Biden-Harris administration’s belief that, in addition to complying with existing laws, artificial intelligence should also enhance the quality of work and life for all workers. As employers and developers implement these principles, we are determined to create a future where technology serves the needs of people above all.”

Microsoft and Indeed, the online job-search platform, have agreed to these principles, according to a press release shared by the White House. The administration seemed to be courting further support for the principles in a post, noting that it “welcomes additional commitments from other technology companies.” 

Notably, the White House recently hosted an event with senior officials from the Labor Department focused on the technology’s impact on workers, according to an IBM executive’s post on LinkedIn.

Neither the White House nor the Department of Labor responded to requests for comment. 

DeRusha stepping down from federal CISO role
https://fedscoop.com/chris-derusha-leaving-federal-ciso-omb-oncd/
Tue, 14 May 2024 19:48:50 +0000
He’s also leaving ONCD, where he’s served as deputy national cyber director.

Chris DeRusha is exiting his role as federal chief information security officer after more than three years on the job, the Office of Management and Budget confirmed Tuesday.

DeRusha, who was appointed to the federal CISO position in January 2021, played a critical role in the development of the White House’s artificial intelligence executive order, in addition to the Biden administration’s 2021 executive order on cybersecurity and the corresponding national cybersecurity strategy and implementation plan

“Since day one of the Biden Administration, Chris has been instrumental in strengthening our nation’s cybersecurity, protecting America’s critical infrastructure, and improving the digital defenses of the Federal government,” Clare Martorana, federal chief information officer, said in a statement. “I wish him the best, and know he will continue to serve as a leading voice within the cybersecurity community.”  

As the federal CISO, DeRusha oversaw the 25-member council of his chief information security officer peers and spearheaded the protection of federal networks, while also managing agencywide implementation of multifactor authentication and supporting the coordination of the nation’s broader cybersecurity as the deputy national cyber director. 

DeRusha will also leave behind that role, the Office of the National Cyber Director confirmed.

“From the beginning of the Biden-Harris Administration, and even before, Chris DeRusha has been a steady, guiding leader,” National Cyber Director Harry Coker Jr. said in a statement. “As Deputy National Cyber Director with ONCD — while continuing his excellent work as Federal CISO — he has been a trusted and valued partner. 

“Chris’s keen insights, experience, and judgment have been integral to the work we’ve done and what we will continue to do to strengthen our Nation’s cyber infrastructure. I’m grateful for his commitment to the American people and to the Biden-Harris Administration. All of us at ONCD wish him the very best in his next chapter,” Coker added.

Speaking during Scoop News Group’s CyberTalks event last November, DeRusha touted the White House’s coalition-building efforts and “meaningful cooperation” as a means to reaching its overarching cybersecurity goals.  

“We cannot achieve any meaningful progress on managing cyber risk as one nation,” DeRusha said. “And this administration is definitely committed to working with our like-minded partners on shared goals.”

A month earlier, during the Google Public Sector Forum, DeRusha said that after “decades of investments in addressing legacy modernization challenges,” the Biden administration was poised to address “massive” long-term challenges on everything from AI strategy to combating ransomware. 

“We’ve taken on pretty much every big challenge that we’ve been talking about for a couple of decades,” DeRusha said. “And we’re taking a swing and making” progress.

Prior to his current stint with the federal government, DeRusha served as CISO for the Biden presidential campaign and stayed on with the transition team’s technology strategy and delivery unit. DeRusha had previously worked as the CISO for the state of Michigan.

OMB did not reveal DeRusha’s last day or where he is headed next. 

Federal News Network first reported the news of DeRusha’s departure.

NASA has a new chief AI officer
https://fedscoop.com/nasa-has-a-new-chief-ai-officer/
Mon, 13 May 2024 18:54:25 +0000
Several CDOs have now taken on the role.

NASA has named David Salvagnini, the space agency’s chief data officer, as its chief artificial intelligence officer, fulfilling a requirement laid out in recent White House guidance and President Joe Biden’s executive order on AI.

In a press release, NASA said that Salvagnini will help lead the agency’s work on developing AI technology, as well as its collaborations with academic institutions and other experts. Salvagnini will replace Kate Calvin, the agency’s chief scientist and former responsible AI official, in leading NASA’s efforts on the technology. 

“Artificial intelligence has been safely used at NASA for decades, and as this technology expands, it can accelerate the pace of discovery,” NASA Administrator Bill Nelson said in a statement. “It’s important that we remain at the forefront of advancement and responsible use. In this new role, David will lead NASA’s efforts to guide our agency’s responsible use of AI in the cosmos and on Earth to benefit all humanity.”  

NASA makes use of myriad forms of artificial intelligence, according to the agency’s AI inventory.

NASA’s announcement comes after several agencies have already appointed individuals to the chief AI officer role, including the National Science Foundation, the General Services Administration, and the Department of Veterans Affairs. Several others have also opted to name their chief data officers as their chief AI officers. 
