generative AI Archives | FedScoop
https://fedscoop.com/tag/generative-ai/

Senate Democrat pushes for expansion to copyright act to include generative AI research
https://fedscoop.com/senate-democrat-pushes-for-expansion-to-copyright-act-to-include-generative-ai-research/ | Tue, 28 May 2024

In a letter to the Library of Congress, Sen. Mark Warner, D-Va., proposed an expansion to an exemption for generative AI “good-faith security research.”

An exemption under the Digital Millennium Copyright Act should be expanded to include generative artificial intelligence research focused specifically on embedded biases in AI systems and models, a top Senate Democrat argued in a new letter to the Library of Congress.

In the letter, shared exclusively with FedScoop, Sen. Mark Warner, D-Va., urged the LOC’s Copyright Office to expand an existing “good-faith security research exemption” to include research that exists outside of traditional security concerns, such as bias, arguing that it would be the best path for ensuring a “robust security ecosystem” for tools such as generative AI.

The letter from Warner, co-chair of the Senate Cybersecurity Caucus, is in response to a petition from Jonathan Weiss, founder of the IT consulting firm Chinnu Inc., that asked the LOC to establish a new exemption to address security research on generative AI models and systems. 

A spokesperson for Warner said in an email that an expansion to the exemption rather than an entirely new exemption “is the best way to extend the existing protections that have enabled a robust cybersecurity research ecosystem to the emerging issues surrounding safe AI.”

Warner’s letter mirrors a Department of Justice response to the same petition last month. The Computer Crime and Intellectual Property Section of the DOJ’s Criminal Division wrote that “good faith research on potentially harmful outputs of AI and similar algorithmic systems should be similarly exempted from the DMCA’s circumvention provisions.”

Said Warner: “It is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse and misoperation may lead to an increased risk of physical or psychological harm.”

The Virginia Democrat, who has introduced bipartisan legislation on artificial intelligence security and emerging tech standards, pointed to the National Institute of Standards and Technology’s AI Risk Management Framework to acknowledge that AI’s risks “differ from traditional software risks in key ways,” opening the door for not only security vulnerabilities but also dangerous and biased outputs. 

Fraud and non-consensual image generation are among the deceptive uses of generative AI that Warner cited as reasons for consumer protections, such as watermarks and content credentials. Additionally, the lawmaker asked the LOC to ensure that the potential expanded exemption “does not immunize” research that would intentionally undermine protective measures.

“Absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption,” Warner said. 

The senator also asked the LOC to include security and safety vulnerabilities, especially involving bias and additional harmful outputs, in its expanded good-faith security research definition.

In response to Warner’s letter, Weiss said in an email to FedScoop that he doesn’t “care whether the existing exemption is expanded to include research on AI bias/harmful output, or whether an entirely new exemption is created. Our main concern is to secure protections for good faith research on these emerging intelligent systems, whose inner workings even the brightest minds in the world cannot currently explain.”

The Weiss petition and letters from DOJ and Warner were prompted by the LOC Copyright Office’s ninth triennial rulemaking proceeding, which accepts public input for new exemptions to the DMCA.

Inside NASA’s deliberations over ChatGPT
https://fedscoop.com/inside-nasas-deliberations-over-chatgpt/ | Wed, 22 May 2024

More than 300 pages of documents provide insight into how the space agency thought about generative AI, just as ChatGPT entered the public lexicon.

In the months after ChatGPT’s public release, leaders inside NASA debated the merits and flaws of generative AI tools, according to more than 300 pages of emails obtained by FedScoop, revealing both excitement and concerns within an agency known for its cautious approach to emergent technologies. 

NASA has so far taken a relatively proactive approach to generative AI, which the agency is considering for tasks like summarization and code-writing. Staff are currently working with the OpenAI tools built into Microsoft’s Azure service to analyze use cases. NASA is also weighing generative AI capabilities from its other cloud providers — and it’s in discussions with Google Cloud on plans to test Gemini, the competitor AI tool formerly known as Bard. 
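
For a sense of what that kind of Azure-hosted experimentation involves in practice, here is a minimal sketch of a summarization call using the openai Python package’s Azure client. The endpoint, API version, deployment name, and prompts are placeholder assumptions for illustration, not details of NASA’s actual configuration.

```python
# Minimal sketch: summarization via an Azure-hosted OpenAI chat model.
# The endpoint, API version, and deployment name are placeholders,
# not NASA's actual setup.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

def summarize(text: str) -> str:
    """Ask the deployed chat model for a concise summary of `text`."""
    response = client.chat.completions.create(
        model="<your-deployment-name>",  # Azure routes by deployment name
        messages=[
            {"role": "system", "content": "You summarize documents concisely."},
            {"role": "user", "content": f"Summarize the following:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content
```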

Though NASA policy prohibits the use of sensitive data on generative AI systems, that won’t be the case forever. Jennifer Dooren, the deputy news chief of NASA, told FedScoop that the agency is now working with “leading vendors to approve generative AI systems” for use on sensitive data and anticipates those capabilities will be available soon. While the agency’s most recent AI inventory only includes one explicit reference to OpenAI technology, an updated list with more references to generative AI could be released publicly as soon as October. 

In the first weeks of 2023, and as ChatGPT entered the public lexicon, the agency’s internal discussions surrounding generative AI appeared to focus on two core values: researching and investing in technological advances and encouraging extreme caution on safety. Those conversations also show how the agency had to factor in myriad authorities and research interests to coordinate its use. 

“NASA was like anyone else during the time that ChatGPT was rolled out: trying to understand services like these, their capabilities and competencies, and their limitations, like any of us tried to do,” said Namrata Goswami, an independent space policy expert who reviewed the emails, which were obtained via a public records request. 

She continued: “NASA did not seem to have a prior understanding of generative AI, as well as how these may be different from a platform like Google Search. NASA also had limited knowledge of the tools and source structure of AI. Neither did it have the safety, security, and protocols in place to take advantage of generative AI. Instead, like any other institution [or] individual, its policy appeared to be reactive.” 

NASA’s response

Emails show early enthusiasm and demand internally for OpenAI technology — and confusion about how and when agency staffers could use it. In one January 2023 email, Brandon Ruffridge, from the Office of the Chief Information Officer at NASA’s Glenn Research Center, expressed frustration that without access to the tool, interns would have to spend time on “less important tasks” and that engineers and scientists’ research would be held back. In another email that month, Martin Garcia Jr., an enterprise data science operations lead in the OCIO at the Johnson Space Center, wrote that there was extensive interest in getting access to the tech.

By mid-February, Ed McLarney, the agency’s AI lead, had sent a message noting that, at least informally, he’d been telling people that ChatGPT had not been approved for IT use and that NASA data should only be used on NASA-approved systems. He also raised the idea of sending a workforce-wide message, which ended up going out in May. In those opening weeks, the emails seem to show growing pressure on the agency to establish permissions for the tool. 

“We have demand and user interest through the roof for this. If we slow roll it, we run [a] high risk of our customers going around us, doing it themselves in [an] unauthorized, non-secure manner, and having to clean up the mess later,” McLarney warned in a March email to other staff focused on the technology. Another email, from David Kelldorf, chief technology officer of the Johnson Space Center, noted that “many are chomping at the bits to try it out.”

But while some members of the space agency expressed optimism, others urged caution about the technology’s potential pitfalls. In one email, Martin Steele, a member of the data stewardship and strategy team at NASA’s Information, Data, and Analytics Services division, warned against assuming that ChatGPT had “intelligence” and stressed the importance of “The Human Element.” In a separate email, Steven Crawford, senior program executive for scientific data and computing with the agency’s Science Mission Directorate, expressed concerns about the tool’s potential to spread misinformation. (Crawford later told FedScoop that he’s now satisfied with NASA’s guardrails and has joined some generative AI efforts at the agency.)

[Screenshot: Email from Steven Crawford, April 10, 2023]

In those first weeks and months of 2023, there were also tensions surrounding security and existing IT procedures. Karen Fallon, the director of Information, Data, and Analytics Services for NASA’s Chief Information Office operations, cautioned in March that enthusiasm for the technology shouldn’t trump agency leaders’ need to follow existing IT practices. (When asked for comment, NASA called Fallon’s concerns “valid and relevant.”)

[Screenshot: Email from Karen Fallon, March 16, 2023]

In another instance, before NASA’s official policy was publicized in May, an AI researcher at the Goddard Space Flight Center asked if it would be acceptable for their team to use their own GPT instances with code that was already in the public domain. In response, McLarney explained that researchers should not use NASA emails for personal OpenAI accounts, be conscious about data and code leaks, and make sure both the data and code were public and non-sensitive. 

NASA later told FedScoop that the conversation presented “a preview of pre-decisional, pending CIO guidance” and that it aligned with NASA IT policy — though they noted that NASA doesn’t encourage employees to spend their own funds on IT services for space agency work. 

[Screenshot: Email from Martin Garcia, Jr., April 7, 2023]

“As NASA continues to work to onboard generative AI systems, it is working through those concerns and is mitigating risks appropriately,” Dooren, the agency’s deputy news chief, said.

Of course, NASA’s debate comes as other federal agencies and companies continue to evaluate generative AI. Organizations are still learning how to approach the technology and its impact on daily work, said Sean Costigan, managing director of resilience strategy at the cybersecurity company Red Sift. NASA is no exception, he argued, and must consider potential risks, including misinformation, data privacy and security, and reduced human oversight. 

“It is critical that NASA maintains vigilance when adopting AI in space or on Earth — wherever it may be — after all, the mission depends on humans understanding and accounting for risk,” he told FedScoop. “There should be no rush to adopt new technologies without fully understanding the opportunities and risks.”

Greg Falco, a systems engineering professor at Cornell University who has focused on space infrastructure, noted that NASA tends to play catchup on new computing technologies and can fall behind the startup ecosystem. Generative AI wouldn’t necessarily be used for the most high-stakes aspects of the space agency’s work, but could help improve efficiency, he added.

[Image: NASA generative AI campaign]

“NASA is and was always successful due to [its] extremely cautious nature and extensive risk management practices. Especially these days, NASA is very risk [averse] when it comes to truly emergent computing capabilities,” he said. “However, they will not be solved anytime soon. There is a cost/benefit scale that needs to be tilted towards the benefits given the transformative change that will come in the next [three-to-five] years with Gen AI efficiency.”

He continued: “If NASA and other similar [government] agencies fail to hop on the generative AI train, they will quickly be outpaced not just by industry but by [nation-state] competitors. China has made fantastic government supported advancements in this domain which we see publicly through their [government] funded academic publications.”

Meanwhile, NASA continues to work on its broader AI policy. The space agency published an initial framework for ethical AI in 2021 that was meant to be a “conversation-starter,” but emails obtained by FedScoop show that the initial framework received criticism — and agency leaders were told to hold off. The agency has since paused co-development of practitioners’ guidance on AI to focus instead on federal AI work, but plans to return to that work “in the road ahead,” according to Dooren.

The space agency also drafted an AI policy in 2023, but ultimately decided to delay it to wait for federal directives. NASA now plans to refine and publish the policy this year. 

Nuclear Regulatory Commission staff recommends AI framework, identifies potential use cases
https://fedscoop.com/nrc-ai-framework-needed-identified-potential-use-cases/ | Fri, 10 May 2024

An artificial intelligence team within the NRC released a report outlining recommendations for the agency to leverage the technology.

Nuclear Regulatory Commission staffers identified 36 potential artificial intelligence use cases — including some involving generative AI — as part of a series of recommendations to the commissioners and an agency-wide enterprise strategy detailed in a report released Thursday.

In the report, NRC staff recommended an AI framework for the agency to follow, which outlines approaches for AI governance, hiring new talent, upskilling existing workers, maturing the commission’s data management program and allocating resources to support AI integration into IT infrastructure. 

Additionally, NRC staff recommended that the agency invest in “foundational tools” by acquiring gen AI-based services and integrating AI into the cognitive search technology of the NRC’s document access and management system.

“To effectively implement AI solutions, the NRC will need to develop a framework to deploy AI at the agency,” the report states. “As part of this effort, the NRC will continue to strengthen its many partnerships to stay current with the evolving state of AI. To achieve the promise of AI, leadership engagement will be essential.”

The report pushed for a collaborative approach to furthering the NRC’s use of the technology, pointing to the Chief AI Officers Council, the Responsible AI Officers Council, and other individual agency partnerships as being “essential to the agency’s response to the rapidly changing AI landscape.”

The NRC’s AI team — designated to lead this review by the agency’s executive director for operations — reported working closely with internal data scientists and subject matter experts to consider possible AI uses. Staff reviewed 61 AI use cases and identified 36 that align with tools that have AI capabilities, while the other 25 could “be addressed using non-AI solutions.”

Per the report, the nuclear industry currently uses AI to change its approach to some nonregulated activities and has expressed interest in using AI for NRC-regulated activities. The report adds that the NRC is investing in AI research to identify where AI could build foundational knowledge across the agency, while still meeting its mission.

Staff reported that the broad approach to AI research is “preparing the agency to use AI to increase staff knowledge and experience for future regulatory reviews and oversight.”

The NRC’s congressional budget justification for fiscal year 2025 set aside more than $4 million in AI-related funding.

Correction: This story was updated May 13, 2024, to indicate that the nuclear industry, not the NRC, is using AI to alter its approach on some nonregulated activities.

NIST launches GenAI evaluation program, releases draft publications on AI risks and standards
https://fedscoop.com/nist-launches-genai-evaluation-program-releases-draft-ai-publications/ | Mon, 29 Apr 2024

The actions were among several announced by the Department of Commerce at the roughly six-month mark after Biden’s executive order on artificial intelligence.

The National Institute of Standards and Technology announced a new program to evaluate generative AI and released several draft documents on the use of the technology Monday, as the government hit a milestone on President Joe Biden’s AI executive order.

The Department of Commerce’s NIST was among multiple agencies on Monday that announced actions they’ve taken that correspond with the October order at the 180-day mark since its issuance. The actions were largely focused on mitigating the risks of AI and included several actions specifically focused on generative AI.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Commerce Secretary Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

Among the four documents released by NIST on Monday was a draft version of a publication aimed at helping identify generative AI risks and strategy for using the technology. That document will serve as a companion to its already-published AI risk management framework, as outlined in the order, and was developed with input from a public working group with more than 2,500 members, according to a release from the agency.

The agency also released a draft of a companion resource to its Secure Software Development Framework that outlines software development practices for generative AI tools and dual-use foundation models. The EO defined dual-use foundation models as those that are “trained on broad data,” are “applicable across a wide range of contexts,” and “exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters,” among other things. 

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology, said in a statement.

NIST also released draft documents on reducing risks of synthetic content — that which was AI-created or altered — and a plan for developing global AI standards. All four documents have a comment period that ends June 2, according to the Commerce release.

Notably, the agency also announced its “NIST GenAI” program for evaluating generative AI technologies. According to the release, that will “help inform the work of the U.S. AI Safety Institute at NIST.” Registration for a pilot of those evaluations opens in May.

The program will evaluate generative AI with a series of “challenge problems” that will test the capabilities of the tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,” the release said. “One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”

The release and focus on generative AI comes as other agencies similarly took action Monday on federal use of such tools. The Office of Personnel Management released its guidance for federal workers’ use of generative AI tools and the General Services Administration released a resource guide for federal acquisition of generative AI tools. 

GSA releases generative AI resource guide for federal purchasers
https://fedscoop.com/general-services-administration-releases-generative-ai-resource-guide-for-federal-purchasers/ | Mon, 29 Apr 2024

The agency’s generative AI and specialized computing infrastructure acquisition guide fulfills a requirement in the October AI executive order.

The General Services Administration on Monday released a resource guide for federal purchasers looking to buy generative artificial intelligence solutions and related computing infrastructure, completing a requirement in the White House’s October AI executive order.

The GSA’s Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide details how contracting officers can approach gen AI procurement decisions through suggested questions and considerations, per an agency press release. 

“Generative AI technology will continue to evolve and we know that this resource guide should continue to evolve with it,” Laura Stanton, assistant commissioner in the GSA’s Office of Information Technology Category, said in the release. “Contracting officers will play a critical role by working closely with program and IT staff to find, source, acquire and make secure the right generative AI solutions for agencies’ needs.”

Along with acquisition recommendations, the guide also includes examples of generative AI in government, recommendations for how government entities may use things like sandboxes or testbeds before committing to a large-scale purchase, instructions on how agencies may define issues they are looking to solve, and more. 

The agency said in the release that the resource guide “will be updated as technologies develop.”

GSA Administrator Robin Carnahan said in the press release that the guide offers AI use cases, common challenges and information to support the public sector’s exploration of the “growing AI marketplace,” adding that the guide “starts to leverage the power of AI to better deliver” for the public.

“This guide is a key part of our commitment to equipping the federal community to responsibly and effectively deploy generative AI technologies to benefit the American people,” Carnahan said.

OPM issues generative AI guidance, competency model for AI roles required by Biden order
https://fedscoop.com/opm-issued-generative-ai-guidance-ai-competency-model/ | Mon, 29 Apr 2024

The guidance was among several actions required by the federal workforce agency within 180 days of President Joe Biden’s executive order on the technology.

Guidance on generative AI and a competency model for AI roles are among the latest actions that the Office of Personnel Management has completed under President Joe Biden’s executive order on the technology, an agency spokesperson said.

In a statement provided to FedScoop ahead of the Monday announcement, OPM disclosed it would issue guidance on use of generative AI tools for the federal workforce; a competency model and skills-based hiring guidance for AI positions to help agencies find people with the skills needed for those roles; and an AI competency model specifically for civil engineering.

All of those actions were among those the agency was required to complete at the 180-day mark of the October executive order, which would have been over the weekend. The spokesperson also noted that the agency established an interagency working group for AI, as required by the order. 

OPM was given multiple actions under the sweeping order, most of which were aimed at helping agencies attract and retain a federal workforce prepared to address AI. That role is important as the government is working to rapidly hire for 100 AI positions by this summer. The latest actions from OPM give federal agencies a better roadmap for hiring workers in those positions.

They also add to OPM’s existing work under the order, which has included authorizing direct hire authority for AI-related positions and outlining incentives for attracting and retaining AI workers in the federal government. 

Notably, OPM’s action on the responsible use of generative AI comes as agencies across the government have been developing their own unique approaches to those tools for their workforces. Those policies have ranged from banning the use of certain third-party tools to allowing use across the workforce with guidelines. 

The OPM guidance, which was posted publicly Monday, outlines risks and benefits of the technology along with best practices for implementing it in work. 

Though it ultimately directs employees to consult their agency’s policy, the guidance provides examples of uses and specific considerations for those uses, such as summarizing notes and transcripts, drafting content, and using generative tools for software and code development. 

“GenAI has the potential to improve the way the federal workforce delivers results for the public,” the guidance says. “Federal employees can leverage GenAI to enhance creativity, efficiency, and productivity. Federal agencies and employees are encouraged to consider how best to use these tools to fulfill their missions.”

Under the order, OPM was required to create that guidance in consultation with the Office of Management and Budget. 

In addition to the competency models and guidance, the OPM spokesperson also disclosed that the agency issued an AI classification policy and talent acquisition guidance. While those actions support the rest of OPM’s work, they weren’t required by Biden’s executive order but rather the 2020 AI in Government Act. The spokesperson described those actions as addressing “position classification, job evaluation, qualifications, and assessments for AI positions.”

OPM is seeking feedback on that policy and guidance in a 30-day comment period ending May 29. 

This story was updated April 29, 2024, with additional information and links from OPM released Monday.

Generative AI could raise questions for federal records laws
https://fedscoop.com/generative-ai-could-raise-questions-for-federal-records-laws/ | Mon, 22 Apr 2024

A clause in a DHS agreement with OpenAI opens the door to some debate on transparency issues.

The Department of Homeland Security has been eager to experiment with generative artificial intelligence, raising questions about what aspects of interactions with those tools might be subject to public records laws. 

In March, the agency announced several initiatives that aim to use the technology, including a pilot project that the Federal Emergency Management Agency will deploy to address hazard mitigation planning, and a training project involving U.S. Citizenship and Immigration Services staff. Last November, the agency released a memo meant to guide the agency’s use of the technology. A month later, Eric Hysen, the department’s chief information officer and chief AI officer, told FedScoop that there’s been “good interest” in using generative AI within the agency. 

But the agency’s provisional approval of a few generative AI products — which include ChatGPT, Bing Chat, Claude 2, DALL-E2, and Grammarly, per a privacy impact assessment — calls for closer examination with regard to federal transparency. Specifically, an amendment to OpenAI’s terms of service uploaded to the DHS website established that outputs from the model are considered federal records, along with referencing freedom of information laws.

“DHS processes all requests for records in accordance with the law and the Attorney General’s guidelines to ensure maximum transparency while protecting FOIA’s specified protected interests,” a DHS spokesperson told FedScoop in response to several questions related to DHS and FOIA. DHS tracks its FOIAs in a public log. OpenAI did not respond to a request for comment. 

“Agency acknowledges that use of Company’s Site and Services may require management of Federal records. Agency and user-generated content may meet the definition of Federal records as determined by the agency,” reads the agreement. “For clarity, any Federal Records-related obligations are Agency’s, not Company’s. Company will work with Agency in good faith to ensure that Company’s record management and data storage processes meet or exceed the thresholds required for Agency’s compliance with applicable records management laws and regulations.” 

Generative AI may introduce new questions related to the Freedom of Information Act, according to Enid Zhou, senior counsel at the Electronic Privacy Information Center, a digital rights group. She pointed to nuances related to “agency and user-generated content,” since the DHS-OpenAI clause doesn’t make clear whether inputs or user prompts are records, or also the outputs produced by the AI system. Zhou also pointed to record management and data storage as a potential issue. 

“The mention of ‘Company’s record management and data storage processes’ could raise an issue of whether an agency has the capacity to access and search for these records when fulfilling a FOIA request,” she said in an email to FedScoop. “It’s one thing for OpenAI to work with the agency to ensure that they are complying with federal records management obligations but it’s another when FOIA officers cannot or will not search these records management systems for responsive records.”

She added that agencies could also try shielding certain outputs of generative AI systems by citing an exemption related to deliberative process privilege. “Knowing how agencies are incorporating generative AI in their work, and whether or not they’re making decisions based off of these outputs, is critical for government oversight,” she said. “Agencies already abuse the deliberative process privilege to shield information that’s in the public interest, and I wouldn’t be surprised if some generative AI material falls within this category.”

Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation, argued that generative AI outputs should be subject to FOIA and that agencies need a plan to “document and archive its use so that agencies can continue to comply properly with their FOIA responsibilities.”

“When FOIA officers conduct a search and review of records responsive to a FOIA request, there generally need to be notes on how the request was processed, including, for example, the files and databases the officer searched for records,” Lipton said. “If AI is being used in some of these processes, then this is important to cover in the processing notes, because requesters are entitled to a search and review conducted with integrity.”

How NIH’s National Library of Medicine is testing AI to match patients to clinical trials
https://fedscoop.com/how-nihs-national-library-of-medicine-is-testing-ai-to-match-patients-to-clinical-trials/ | Mon, 15 Apr 2024

A team at the National Institutes of Health’s National Library of Medicine is using large language models and AI to help researchers find candidates for clinical trials.

Few organizations in the world do more to turn biomedical and behavioral research into better health than the National Institutes of Health, its 27 institutes and centers and more than 18,000 employees.

One of those institutes is the National Library of Medicine (NLM). Considered the NIH’s data hub, NLM’s 200-plus databases and systems serve billions of user sessions every day. From PubMed, the premier biomedical literature database, to resources like Genome and ClinicalTrials.gov, NLM supports a diverse range of users, including researchers, clinicians, information professionals and the general public.

[Photo: Dianne Babski, Director, User Services and Collection Division, NLM]

With so many users coming to its sites looking for a variety of information, NLM is always looking for new ways to enhance its products and services, according to Dianne Babski, Director of the User Services and Collection Division. NLM has been harnessing emerging technologies for many years but was quick to see how generative AI and large language models (LLMs) could potentially make its vast information resources more accessible to improve discovery.

Focus on innovation

“We’ve jumped into the GenAI arena,” Babski said. “Luckily, we work in a very innovative institute, so staff were eager to play with these tools when they became accessible.” Through the Science and Technology Research Infrastructure for Discovery, Experimentation, and Sustainability (STRIDES) initiative, NIH researchers have access to leading cloud services and environments.

For her part, Babski is leading a six-month pilot project across NLM focused on 10 generative AI use cases. The use cases are divided into five categories: product efficiency and usage, customer experience, data and code automation, workflow bias reduction, and research discovery.

[Chart: National Library of Medicine GenAI initiatives (NLM)]

The participating cloud service providers gave NIH access to a “firewalled, safe environment to play in; we’re not in an open web environment,” Babski explained. As part of this pilot program, NLM is also providing feedback on the user interface it has been creating for one of the providers’ government enterprise systems.

Reducing recruitment challenges in clinical trials

One use case with potentially significant implications focuses on the work in ClinicalTrials.gov. Researchers, clinicians and patients use this NLM database to search for information about clinical research studies worldwide.

While clinical trials are pivotal for advancing medical knowledge and improving patient care, one of the most significant challenges in conducting them is patient recruitment. Identifying suitable candidates who meet specific study criteria is a time-consuming and resource-intensive process for researchers and clinicians, which can hamper the progress of medical research and delay the development of potentially lifesaving treatments.

Recognizing the need to streamline clinical trial matching, NLM created a prototype called TrialGPT. Using an innovative LLM framework, TrialGPT is designed to predict three elements of patient eligibility for clinical trials based on several criteria. It does so by processing information from patient notes to generate detailed explanations of eligibility, which are then aggregated to recommend appropriate clinical trials for patients.
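
The description above amounts to a three-stage loop: criterion-level judgments with explanations, aggregation of those judgments, and trial ranking. Here is a minimal sketch of that flow; the llm() stub, JSON response format, and scoring scheme are illustrative assumptions for this example, not NLM’s actual implementation.

```python
# Sketch of a TrialGPT-style matching flow as described above: an LLM judges
# each eligibility criterion against a patient note, judgments are aggregated
# into a trial-level score, and trials are ranked. The llm() stub, JSON format,
# and scoring weights are illustrative assumptions, not NLM's implementation.
import json

def llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer so the sketch runs."""
    return '{"eligible": "unknown", "explanation": "stub response"}'

def judge_criterion(patient_note: str, criterion: str) -> dict:
    """Ask the model whether the patient meets one eligibility criterion."""
    prompt = (
        f"Patient note:\n{patient_note}\n\n"
        f"Eligibility criterion: {criterion}\n"
        'Answer as JSON: {"eligible": "yes"|"no"|"unknown", "explanation": "..."}'
    )
    return json.loads(llm(prompt))

def score_trial(patient_note: str, criteria: list[str]) -> float:
    """Aggregate criterion-level judgments into one trial-level score."""
    weights = {"yes": 1.0, "unknown": 0.5, "no": 0.0}
    judgments = [judge_criterion(patient_note, c) for c in criteria]
    if any(j["eligible"] == "no" for j in judgments):
        return 0.0  # one clearly failed criterion excludes the trial
    return sum(weights[j["eligible"]] for j in judgments) / len(judgments)

def rank_trials(patient_note: str, trials: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Rank candidate trials by aggregated eligibility score, best first."""
    scored = [(tid, score_trial(patient_note, crits)) for tid, crits in trials.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```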

Early results have demonstrated TrialGPT can accurately explain patient-criterion relevance and effectively rank and exclude candidates from clinical trials. However, two challenges were also noted, according to an agency brief: the model’s lack of intrinsic medical knowledge and its limited capacity for medical reasoning.

To address these challenges, the NLM project team plans to augment LLMs with specialized medical knowledge bases and domain-specific tools.

Babski said implementing TrialGPT has the potential to deliver a more efficient and accurate method for matching patients to trials. “While currently only available as a research prototype, we see its potential as a great resource for clinicians to help find patient participants for these different types of trials,” she said.

Lessons learned

As NLM continues to pioneer and experiment with AI-driven use cases like TrialGPT, Babski said several vital recommendations and lessons have emerged. “One of the biggest things I’ve taken away from this is that it’s way more work and complicated than you think it’s going to be,” she said.

For instance, there is a steep learning curve for people to get comfortable with these new tools. But at the same time, that process also allows participants to develop new technical skills, such as running Python code and working in notebook environments.

Effective collaboration and interdisciplinary teamwork are also essential. According to Babski, the pilot program has been successful because NLM was able to not only assemble a “dream team” of domain experts, data scientists, and engineers but also establish a community across NIH—currently more than 500 people strong—that is energized and motivated to share their work and support one another. “Everyone has an interesting use case, and they are rolling up their sleeves and trying to figure out how to work with GenAI to solve real work problems,” she said.

Babski also follows a checklist of goals to be applied to any Generative AI pilot:

  • Experiment and develop best practices for LLMs in a safe (behind the firewall) “playground” environment.
  • Create a proof of concept that applies to the agency’s work.
  • Measure results to ensure utility and safety (e.g., NIST guidelines).
  • Develop workforce skills in generative AI.

For other agencies and organizations looking to explore the potential of AI technologies, Babski shared that it’s essential to embrace a culture of adaptability. “You have to be OK with pivoting halfway through,” she said. “We were trying to do data visualization work, and we just realized that this isn’t the right environment for what we were attempting, so we pivoted the use case.”

Ultimately, NLM’s use cases, including TrialGPT, highlight the transformative impact of GenAI and cloud-based platforms on healthcare innovation. By leveraging these technologies, NLM is likely to improve future healthcare delivery and patient outcomes globally.

Editor’s note: This piece was written by Scoop News Group’s content strategy division.

State Department is launching an internal chatbot
https://fedscoop.com/state-department-is-launching-an-internal-chatbot/ | Tue, 02 Apr 2024

The agency’s CIO said the rollout of a generative AI chatbot is in response to staffer requests for streamlining processes like translation services.

The Department of State is rolling out a chatbot for internal use, in a move that the agency’s top IT official said is largely in response to employee requests for help in streamlining processes such as translating. 

Kelly Fletcher, State’s chief information officer, said during a speech Tuesday at Palo Alto Networks’ Public Sector Ignite event that the creation of a generative AI chatbot is something that the agency’s workforce is asking for as publicly available tools like ChatGPT become more popular.

“The thing I hear most that people want is, they want a chatbot,” Fletcher said. “We’re gonna let people experiment, we’re gonna see what they use it for and then we are gonna move to building things that are more custom fit for State.”

Fletcher provided examples of how the gen AI tool could help with translation needs, including the loading of a 30-page document written in Russian into a model and asking for a summary in English, and inputting public information from other countries — such as regional news — into systems and receiving an English summary.
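
For a document of that length, one common pattern is map-reduce summarization: summarize manageable chunks in English, then summarize the partial summaries. A minimal sketch of that pattern follows, assuming a generic chat() helper that stands in for whatever approved chat-completion service the agency deploys; the chunk size and prompts are illustrative, not details of State’s pilot.

```python
# Sketch of map-reduce summarization for a long foreign-language document:
# summarize each chunk in English, then combine the partial summaries.
# chat() is a placeholder for an approved chat-completion client; the chunk
# size and prompts are illustrative assumptions, not State's actual setup.

def chat(prompt: str) -> str:
    """Placeholder for a chat-completion call on an approved, secured service."""
    raise NotImplementedError("connect an approved model endpoint here")

def summarize_in_english(text: str, chunk_chars: int = 8000) -> str:
    """Summarize a long document (in any language) into one English summary."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [
        chat(f"Summarize the following text in English:\n\n{chunk}")
        for chunk in chunks
    ]
    if len(partials) == 1:
        return partials[0]
    return chat(
        "Combine these partial summaries into one concise English summary:\n\n"
        + "\n\n".join(partials)
    )
```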

In addition to improving user experience and increasing productivity “dramatically,” Fletcher said that having this tool — a pilot effort led by the Office of Management Strategy and Solutions Center for Analytics and the Bureau of Information Resource Management — could enhance cybersecurity, since employees would be using the chatbot on the agency’s secured network.

“We can load public information into publicly available chatbots, get a summary, get some hints, get started. But I want to do that with data that’s unclassified but specific to the State Department, where it wouldn’t be appropriate for it to get hoovered up into the world,” Fletcher said. “What I don’t want is State Department data leaving the State Department environment.”

Separately, in its AI use case inventory, the agency noted that the Bureau of Information Resource Management is “planning to incorporate” a virtual agent or chatbot — provided by ServiceNow as part of its platform as a service — into existing applications to offer users support and connect users with data requests.

The agency declined to comment further on Fletcher’s speech.

This story was updated April 4, 2024, after State corrected which bureau in its AI use case inventory is planning to incorporate a virtual agent or chatbot. The update also included newly provided information regarding the bureaus piloting the new chatbot.

Eight trends that are redefining government at ‘warp speed’
https://fedscoop.com/eight-trends-that-are-redefining-government-at-warp-speed/ | Tue, 26 Mar 2024

Deloitte's William Eggers shares the eight seismic trends redefining governance in 2024 and beyond.

Government leaders today find themselves grappling with an epochal technological upheaval. As artificial intelligence unfurls its wings, a fervent dialogue ensues on how government agencies might wield this technological juggernaut to streamline operations and confront the thorniest challenges of our era.

Surveying the global landscape of governmental evolution, we see reason for optimism. We’ve identified more than 200 cases worldwide that offer proof of radical transformation, where government agencies have achieved quantum leaps, delivering upwards of 10X improvements across areas ranging from operational efficiency to customer experience to mission outcomes.

Here are eight seismic trends redefining governance in 2024 and beyond:

  • Government at Warp Speed: Government leaders worldwide are seeing ever-greater benefits from increased operational speed. By introducing new technologies – such as AI and machine learning – along with reimagined processes that break down silos, governments can deliver services much faster.
  • Unleashing Untapped Government Productivity: Advances in generative AI can usher in a new era of increased productivity in the public sector and diminish the adverse effects of today’s talent and workforce gaps. To test and scale powerful AI technologies and applications, government leaders can build solid foundations of data and digital capabilities to identify work streams that are well suited for automation.
  • Agile Government: In an era of rapid change, government leaders are abandoning traditional processes and moving toward flexible approaches to policymaking, funding, technology development, and decision making. Whether it’s streamlining permitting and procurement processes, introducing flexible resourcing, or breaking down obsolete bureaucratic barriers, instilling a culture that prioritizes outcomes over rigid processes will enhance government agility.
  • Radical Improvement in Customer Experience: Customer Experience (CX) serves as a primary touchpoint between government and its constituents. Boosting CX has the power to increase public trust in government. Targeted investments in digital public infrastructure – like digital identity, digital payments, and data exchange platforms – can anticipate people’s needs and enhance their experiences with government services.
  • Achieving Innovation at Scale: Addressing modern challenges demands innovation at a scale that government cannot achieve alone. As a result, governments are adjusting incentives for stakeholders to foster a network of problem solvers that span private sector industries, academia, and every level of the public sector.
  • Cross-Boundary Mission Effectiveness: Some of today’s most pressing problems transcend agency boundaries and require effective cross-agency collaboration. By embracing technology infrastructure like cloud-based data analytics and artificial intelligence, government agencies can compile diverse expertise and resources for a more holistic approach to complex issues.
  • Government’s Resilience Imperative: Building resilience against various threats – including geopolitical shocks, climate change, supply chain snarls, and cyberattacks – is central to the continuity of effective government. By enhancing the capacity to navigate these disruptions while ensuring community safety, governments can actively combat disruptions and challenges to daily operations.
  • Fair and Equitable Government: Agencies will continually evaluate and evolve to serve constituents equitably. By focusing on three primary spheres of influence within government organizations – the workforce, vendor ecosystems, and communities – government leaders can advance equity within and outside of their agencies.

As we navigate the complexities of our time, embracing these trends will be paramount in building a government that is not only responsive, but also proactive in addressing the needs of the individuals and families it serves. By harnessing the power of technology, prioritizing collaboration, and striving for innovation, agencies can overcome adversity and thrive in 2024.

To hear more about these trends, listen to William Eggers on the Daily Scoop Podcast discuss Deloitte’s Top Trends in Government 2024 report.
