OpenAI official meets with the USAID administrator
https://fedscoop.com/openai-official-meets-with-the-usaid-administrator/
Tue, 11 Jun 2024
Samantha Power’s meeting with OpenAI’s Anna Makanju comes amid continued investments and interest from the international development agency in the technology.

USAID Administrator Samantha Power met this week with OpenAI’s head of global affairs, according to an agency press release, a move that comes as the international development organization continues to invest in artificial intelligence while also raising concerns about the technology’s privacy, security, bias, and risks.

The Monday meeting with OpenAI’s Anna Makanju focused on artificial intelligence’s impact on global development, the release stated. Topics included “advancing progress in key sectors like global health and food security, preventing the misuse of AI, and strengthening information integrity and resilience in USAID partner countries.” 

The announcement comes as several federal agencies, including NASA and the Department of Homeland Security, experiment with OpenAI’s technology. USAID is also prioritizing looking at artificial intelligence use cases and is in the midst of developing a playbook for AI in global development. 

“Administrator Power and Vice President Makanju also discussed USAID’s commitment to localization, and the potential for generative AI and other AI tools to support burden reduction for USAID implementing partners – in particular, burdens that disproportionately impact local organizations,” the agency said.

Meanwhile, OpenAI appears to be continuing to look for ways to work with U.S. federal agencies. Makanju, for her part, has previously said that government use of OpenAI tools is a goal for the company. At a conference hosted by Semafor in April, she said she was “bullish” on government use of the technology because of its role in providing services to people. 

Inside NASA’s deliberations over ChatGPT
https://fedscoop.com/inside-nasas-deliberations-over-chatgpt/
Wed, 22 May 2024
More than 300 pages of documents provide insight into how the space agency thought about generative AI, just as ChatGPT entered the public lexicon.

In the months after ChatGPT’s public release, leaders inside NASA debated the merits and flaws of generative AI tools, according to more than 300 pages of emails obtained by FedScoop, revealing both excitement and concerns within an agency known for its cautious approach to emergent technologies. 

NASA has so far taken a relatively proactive approach to generative AI, which the agency is considering for tasks like summarization and code-writing. Staff are currently working with the OpenAI tools built into Microsoft’s Azure service to analyze use cases. NASA is also weighing generative AI capabilities from its other cloud providers — and it’s in discussions with Google Cloud on plans to test Gemini, the competitor AI tool formerly known as Bard. 

Though NASA policy prohibits the use of sensitive data on generative AI systems, that won’t be the case forever. Jennifer Dooren, the deputy news chief of NASA, told FedScoop that the agency is now working with “leading vendors to approve generative AI systems” for use on sensitive data and anticipates those capabilities will be available soon. While the agency’s most recent AI inventory only includes one explicit reference to OpenAI technology, an updated list with more references to generative AI could be released publicly as soon as October. 

In the first weeks of 2023, and as ChatGPT entered the public lexicon, the agency’s internal discussions surrounding generative AI appeared to focus on two core values: researching and investing in technological advances and encouraging extreme caution on safety. Those conversations also show how the agency had to factor in myriad authorities and research interests to coordinate its use. 

“NASA was like anyone else during the time that ChatGPT was rolled out: trying to understand services like these, their capabilities and competencies, and their limitations, like any of us tried to do,” said Namrata Goswami, an independent space policy expert who reviewed the emails, which were obtained via a public records request. 

She continued: “NASA did not seem to have a prior understanding of generative AI, as well as how these may be different from a platform like Google Search. NASA also had limited knowledge of the tools and source structure of AI. Neither did it have the safety, security, and protocols in place to take advantage of generative AI. Instead, like any other institution [or] individual, its policy appeared to be reactive.” 

NASA’s response

Emails show early enthusiasm and demand internally for OpenAI technology — and confusion about how and when agency staffers could use it. In one January 2023 email, Brandon Ruffridge, from the Office of the Chief Information Officer at NASA’s Glenn Research Center, expressed frustration that without access to the tool, interns would have to spend time on “less important tasks” and that engineers and scientists’ research would be held back. In another email that month, Martin Garcia Jr., an enterprise data science operations lead in the OCIO at the Johnson Space Center, wrote that there was extensive interest in getting access to the tech.

By mid-February, Ed McLarney, the agency’s AI lead, had sent a message noting that, at least informally, he’d been telling people that ChatGPT had not been approved for IT use and that NASA data should only be used on NASA-approved systems. He also raised the idea of sending a workforce-wide message, which ended up going out in May. In those opening weeks, the emails seem to show growing pressure on the agency to establish permissions for the tool. 

“We have demand and user interest through the roof for this. If we slow roll it, we run [a] high risk of our customers going around us, doing it themselves in [an] unauthorized, non-secure manner, and having to clean up the mess later,” McLarney warned in a March email to other staff focused on the technology. Another email, from David Kelldorf, chief technology officer of the Johnson Space Center, noted that “many are chomping at the bits to try it out.”

But while some members of the space agency expressed optimism, others urged caution about the technology’s potential pitfalls. In one email, Martin Steele, a member of the data stewardship and strategy team at NASA’s Information, Data, and Analytics Services division, warned against assuming that ChatGPT had “intelligence” and stressed the importance of “The Human Element.” In a separate email, Steven Crawford, senior program executive for scientific data and computing with the agency’s Science Mission Directorate, expressed concerns about the tool’s potential to spread misinformation. (Crawford later told FedScoop that he’s now satisfied by NASA’s guardrails and has joined some generative AI efforts at the agency). 

[Image: Email from Steven Crawford, April 10, 2023.]

In those first weeks and months of 2023, there were also tensions surrounding security and existing IT procedures. Karen Fallon, the director of Information, Data, and Analytics Services for NASA’s Chief Information Office operations, cautioned in March that enthusiasm for the technology shouldn’t trump agency leaders’ need to follow existing IT practices. (When asked for comment, NASA called Fallon’s concerns “valid and relevant.”)

[Image: Email from Karen Fallon, March 16, 2023.]

In another instance, before NASA’s official policy was publicized in May, an AI researcher at the Goddard Space Flight Center asked if it would be acceptable for their team to use their own GPT instances with code that was already in the public domain. In response, McLarney explained that researchers should not use NASA emails for personal OpenAI accounts, be conscious about data and code leaks, and make sure both the data and code were public and non-sensitive. 

NASA later told FedScoop that the conversation presented “a preview of pre-decisional, pending CIO guidance” and that it aligned with NASA IT policy — though they noted that NASA doesn’t encourage employees to spend their own funds on IT services for space agency work. 

[Image: Email from Martin Garcia, Jr., April 7, 2023.]

“As NASA continues to work to onboard generative AI systems it is working through those concerns and is mitigating risks appropriately,” Dooren, the agency’s deputy news chief, said. 

Of course, NASA’s debate comes as other federal agencies and companies continue to evaluate generative AI. Organizations are still learning how to approach the technology and its impact on daily work, said Sean Costigan, managing director of resilience strategy at the cybersecurity company Red Sift. NASA is no exception, he argued, and must consider potential risks, including misinformation, data privacy and security, and reduced human oversight. 

“It is critical that NASA maintains vigilance when adopting AI in space or on earth — wherever it may be — after all, the mission depends on humans understanding and accounting for risk,” he told FedScoop. “There should be no rush to adopt new technologies without fully understanding the opportunities and risks.” 

Greg Falco, a systems engineering professor at Cornell University who has focused on space infrastructure, noted that NASA tends to play catchup on new computing technologies and can fall behind the startup ecosystem. Generative AI wouldn’t necessarily be used for the most high-stakes aspects of the space agency’s work, but could help improve efficiency, he added.

[Image: NASA generative AI campaign.]

“NASA is and was always successful due to [its] extremely cautious nature and extensive risk management practices. Especially these days, NASA is very risk [averse] when it comes to truly emergent computing capabilities,” he said. “However, they will not be solved anytime soon. There is a cost/benefit scale that needs to be tilted towards the benefits given the transformative change that will come in the next [three-to-five] years with Gen AI efficiency.”

He continued: “If NASA and other similar [government] agencies fail to hop on the generative AI train, they will quickly be outpaced not just by industry but by [nation-state] competitors. China has made fantastic government supported advancements in this domain which we see publicly through their [government] funded academic publications.”

Meanwhile, NASA continues to work on its broader AI policy. The space agency published an initial framework for ethical AI in 2021 that was meant to be a “conversation-starter,” but emails obtained by FedScoop show that the initial framework received criticism — and agency leaders were told to hold off.  The agency has since paused co-development on practitioners’ guidance on AI to focus instead on federal AI work, but plans to return to that work “in the road ahead,” according to Dooren.

The space agency also drafted an AI policy in 2023, but ultimately decided to delay it to wait for federal directives. NASA now plans to refine and publish the policy this year. 

Generative AI could raise questions for federal records laws
https://fedscoop.com/generative-ai-could-raise-questions-for-federal-records-laws/
Mon, 22 Apr 2024
A clause in a DHS agreement with OpenAI opens the door to some debate on transparency issues.

The Department of Homeland Security has been eager to experiment with generative artificial intelligence, raising questions about what aspects of interactions with those tools might be subject to public records laws. 

In March, the agency announced several initiatives that aim to use the technology, including a pilot project that the Federal Emergency Management Agency will deploy to address hazard mitigation planning, and a training project involving U.S. Citizenship and Immigration Services staff. Last November, the agency released a memo meant to guide the agency’s use of the technology. A month later, Eric Hysen, the department’s chief information officer and chief AI officer, told FedScoop that there’s been “good interest” in using generative AI within the agency. 

But the agency’s provisional approval of a few generative AI products — which include ChatGPT, Bing Chat, Claude 2, DALL-E2, and Grammarly, per a privacy impact assessment — calls for closer examination with regard to federal transparency. Specifically, an amendment to OpenAI’s terms of service uploaded to the DHS website establishes that agency and user-generated content on the service may meet the definition of federal records, and it references freedom of information laws. 

“DHS processes all requests for records in accordance with the law and the Attorney General’s guidelines to ensure maximum transparency while protecting FOIA’s specified protected interests,” a DHS spokesperson told FedScoop in response to several questions related to DHS and FOIA. DHS tracks its FOIAs in a public log. OpenAI did not respond to a request for comment. 

“Agency acknowledges that use of Company’s Site and Services may require management of Federal records. Agency and user-generated content may meet the definition of Federal records as determined by the agency,” reads the agreement. “For clarity, any Federal Records-related obligations are Agency’s, not Company’s. Company will work with Agency in good faith to ensure that Company’s record management and data storage processes meet or exceed the thresholds required for Agency’s compliance with applicable records management laws and regulations.” 

Generative AI may introduce new questions related to the Freedom of Information Act, according to Enid Zhou, senior counsel at the Electronic Privacy Information Center, a digital rights group. She pointed to nuances related to “agency and user-generated content,” since the DHS-OpenAI clause doesn’t make clear whether the records in question are user prompts, the outputs produced by the AI system, or both. Zhou also pointed to record management and data storage as a potential issue. 

“The mention of ‘Company’s record management and data storage processes’ could raise an issue of whether an agency has the capacity to access and search for these records when fulfilling a FOIA request,” she said in an email to FedScoop. “It’s one thing for OpenAI to work with the agency to ensure that they are complying with federal records management obligations but it’s another when FOIA officers cannot or will not search these records management systems for responsive records.”

She added that agencies could also try shielding certain outputs of generative AI systems by citing an exemption related to deliberative process privilege. “Knowing how agencies are incorporating generative AI in their work, and whether or not they’re making decisions based off of these outputs, is critical for government oversight,” she said. “Agencies already abuse the deliberative process privilege to shield information that’s in the public interest, and I wouldn’t be surprised if some generative AI material falls within this category.”

Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation, argued that generative AI outputs should be subject to FOIA and that agencies need a plan to “document and archive its use so that agencies can continue to comply properly with their FOIA responsibilities.”

“When FOIA officers conduct a search and review of records responsive to a FOIA request, there generally need to be notes on how the request was processed, including, for example, the files and databases the officer searched for records,” Lipton said. “If AI is being used in some of these processes, then this is important to cover in the processing notes, because requesters are entitled to a search and review conducted with integrity.”

Commerce adds five members to AI Safety Institute leadership
https://fedscoop.com/commerce-adds-to-ai-safety-institute-leadership/
Wed, 17 Apr 2024
The new AI Safety Institute executive leadership team members include researchers and current administration officials.

The Department of Commerce has added five people to the AI Safety Institute’s leadership team, including current administration officials, a former OpenAI manager, and academics from Stanford and the University of Southern California.

In a statement announcing the hires Tuesday, Commerce Secretary Gina Raimondo called the new leaders “the best in their fields.” They join the institute’s director, Elizabeth Kelly, and chief technology officer, Elham Tabassi, who were named in February. The new leaders are:

  • Paul Christiano, founder of the nonprofit Alignment Research Center, who formerly ran OpenAI’s language model alignment team, will be head of AI safety;
  • Mara Quintero Campbell, who was most recently the deputy chief operating officer of Commerce’s Economic Development Administration, will be the acting chief operating officer and chief of staff;
  • Adam Russell, director of the AI division of USC’s Information Sciences Institute, will be chief vision officer;
  • Rob Reich, a professor of political science and associate director of the Institute for Human-Centered AI at Stanford, will be a senior advisor; and
  • Mark Latonero, who was most recently deputy director of the National AI Initiative Office in the White House Office of Science and Technology Policy, will be head of international engagement.

The AI Safety Institute, which is housed in the National Institute of Standards and Technology, is tasked with advancing the safety of the technology through research, evaluation, and the development of guidelines for those assessments. That work includes actions outlined for NIST in President Joe Biden’s executive order on AI, such as developing guidance, red-teaming and watermarking synthetic content. 

In February, the AI Safety Institute launched a consortium, which will contribute to the agency’s work carrying out the executive order actions. That consortium is made up of more than 200 stakeholders, including academic institutions, unions, nonprofits, and other organizations. Earlier this month, the department also announced a partnership with the U.K. to have the two countries’ AI safety bodies work together.

“I am very pleased to welcome these talented experts to the U.S. AI Safety Institute leadership team to help establish the measurement science that will support the development of AI that is safe and trustworthy,” said Laurie Locascio, NIST’s director and undersecretary of commerce for standards and technology. “They each bring unique experiences that will help the institute build a solid foundation for AI safety going into the future.”

CDC’s generative AI pilots include school closure tracking, website updates
https://fedscoop.com/cdc-generative-ai-pilots-school-closure-tracking-website-updates/
Fri, 05 Apr 2024
The Centers for Disease Control and Prevention is testing out use cases for generative AI and sharing its approach with other federal partners as it plans to develop an agencywide AI strategy.

An artificial intelligence service deployed within the Centers for Disease Control and Prevention is being put to the test for things like modernizing its websites and capturing information on school closures, the agency’s top data official said. 

The tool — Microsoft’s Azure OpenAI service, configured for CDC use within the agency’s cloud infrastructure — has both a chatbot component for employees to use and the ability for more technical staff to develop applications that connect to the service via an application programming interface (API), Alan Sim, CDC’s chief data officer, said in an interview with FedScoop. 
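
For readers unfamiliar with the setup Sim describes, the API path might look roughly like the sketch below, which assumes the standard `openai` Python client; the endpoint, deployment name, and prompt are placeholders for illustration, not CDC’s actual configuration.

```python
import os

from openai import AzureOpenAI

# Hypothetical client for an Azure OpenAI deployment hosted inside an
# agency's own cloud boundary; the endpoint and key are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://agency-instance.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    # In Azure OpenAI, "model" names the agency's deployment, not a raw model.
    model="agency-gpt-deployment",
    messages=[{"role": "user", "content": "Summarize this draft guidance in three bullets: ..."}],
)
print(response.choices[0].message.content)
```

Keeping the deployment inside the agency’s cloud boundary means prompts stay within infrastructure the agency already controls, which is the safety property Sim highlights.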

“The idea here is that we can allow for our CDC staff to practice innovation and gen AI safely, within CDC boundaries, rather than going out to third-party sites,” Sim said. 

In total, CDC has 15 pilots using the agency’s generative AI capabilities, primarily through the Azure OpenAI service, a spokesperson said.

Exploring generative AI uses comes as the CDC, like agencies throughout the federal government, looks to create its own approach to artificial intelligence. Roughly a year ago, CDC leadership got together to develop an AI roadmap, Sim said, and since then, it’s prioritized goals like working on the chatbot and developing guidance that it’s shared with others in the federal government.

Now, the agency is planning to develop an AI strategy that Sim said he’s hopeful will be released in late spring to early summer. That strategy will aim to set “high-level principles” for how the CDC wants to use AI to support the public, Sim said. 

“We’re still learning, but we’re trying our best to be innovative, responsive, and obviously sharing as we learn with our partners,” he said.

Piloted uses

The CDC’s pilots are varied in terms of application and topic, including HIV, polio containment, communications, analyzing public comments, and survey design. So far, there’s been positive feedback from the pilots that generative AI has “significantly enhanced data analysis, efficiency, and productivity,” Sim said.

In one of the more operational pilots, for example, communications staff is using AI to assist with updates to the CDC’s websites across the agency.

That process tends to be “tedious” and “manual,” Sim said. To help make it easier, the Office of Communications is using an application connected to the Azure OpenAI API, which was created by a data scientist at the agency.

“This has allowed staff to begin summarizing, leveraging … the benefits of generative AI to help speed up the work,” Sim said. 

CDC is also looking to AI for tracking school closures, which it did during the COVID-19 pandemic to watch for potential outbreaks. 

That tracking — which included monitoring thousands of school district websites and various types of closures, from weather to disease outbreaks — was done manually. And although the funding for those efforts stopped in December 2022, Sim said, there’s “a recognition that it’s still important from a public health perspective to keep track of school closure information.” 

As a result, CDC developed an AI prototype to collect information via social media about closures at roughly 45,000 school districts and schools. That prototype is still being evaluated for effectiveness and for whether it’s something that can be scaled, but it’s something CDC is looking into, Sim said.
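
One plausible way to build such a prototype is to treat the model as a zero-shot classifier over collected posts. The sketch below is illustrative only, reusing the kind of Azure OpenAI client shown earlier; the labels, prompt, and deployment name are assumptions, not CDC’s actual pipeline.

```python
# Illustrative zero-shot labeling of a social media post. "client" is an
# AzureOpenAI client like the one constructed in the earlier sketch.
CLOSURE_LABELS = ["weather", "illness or outbreak", "facilities", "other", "not a closure"]

def classify_closure_post(client, post_text: str) -> str:
    prompt = (
        "Does the following post announce a school or district closure? "
        f"Reply with exactly one label from this list: {CLOSURE_LABELS}.\n\n"
        f"Post: {post_text}"
    )
    response = client.chat.completions.create(
        model="agency-gpt-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling for consistent tracking
    )
    return response.choices[0].message.content.strip()
```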

While the CDC isn’t using agency data with the generative AI service, training against relevant datasets could happen in the future, Sim said. “We haven’t gotten there yet, but that’s part of our roadmap is to sort of mature and learn from these initial pilots, and then just build upon that work,” he said. 

Generative AI guidance

In addition to working toward potential uses, CDC has also developed guidance for generative AI. That document “gets into some of the details” of leveraging generative AI tools responsibly, safely and equitably, Sim said. 

It’s also something the agency is sharing. Sim said CDC presented that guidance at the Chief Artificial Intelligence Officers Council and he’s shared the guidance with “many federal agencies.”

“We are just trying to do our part,” he said. “We are not necessarily experts, but we are sharing the progress that we’ve made.” 

Throughout the federal government, agencies have been creating their own generative AI policies for their employees that detail things like whether third-party tools are prohibited, what information shouldn’t be used in queries, and processes for approving potential uses of the technology. A recent Office of Management and Budget memo further directs agencies to “assess potential beneficial uses” of generative AI and establish safeguards. 

CDC declined to share a copy of its guidance.

Even though deploying an AI tool within CDC’s cloud infrastructure provides more security, Sim said there are always concerns. One of the reasons the agency is focused on machine-learning operations is so it can explore and provide guidance on best practices on things like ensuring developers are being transparent, being able to detect “model drift,” and certifying that a model isn’t amplifying bias.

Ultimately, CDC wants to take a proactive approach to AI and machine learning so the agency is prepared for the next outbreak response and to empower state, local, tribal and territorial partners to leverage their data to gain efficiencies where it’s possible, Sim said.

“Any efficiencies that we can gain through these types of innovations, we’re always trying to support and encourage,” Sim said. 

Microsoft makes Azure OpenAI service available in government cloud platform
https://fedscoop.com/openai-service-available-government-cloud/
Tue, 06 Feb 2024
The service went live on Azure Government Tuesday while the company pursues FedRAMP authorization for high-impact data.

Federal agencies that use Microsoft’s Azure Government service now have access to its Azure OpenAI Service through the cloud platform, permitting use of the tech giant’s AI tools in a more regulated environment.

Candice Ling, senior vice president of Microsoft’s federal government business, announced the launch in a Tuesday blog post, highlighting the data safety measures of the service and its potential uses for productivity and innovation. 

“Azure OpenAI in Azure Government enables agencies with stringent security and compliance requirements to utilize this industry-leading generative AI service at the unclassified level,” Ling’s post said.

The announcement comes as the federal government is increasingly experimenting with and adopting AI technologies. Agencies have reported hundreds of use cases for the technology while also crafting their own internal policies and guidance for use of generative AI tools.

Ling also announced that the company is submitting Azure OpenAI for federal cloud services authorizations that, if approved, would allow higher-impact data to be used with the system. 

Microsoft is submitting the service for authorization for FedRAMP’s “high” baseline, which is reserved for cloud systems using high-impact, sensitive, unclassified data like health care, financial or law enforcement information. It will also submit the system for authorization for the Department of Defense’s Impact Levels 4 and 5, Ling said. Those data classification levels for DOD include controlled unclassified information, non-controlled unclassified information and non-public, unclassified national security system data.

In an interview with FedScoop, a Microsoft executive said the availability of the technology in Azure Government is going to bring government customers capabilities expected from GPT-4 — the fourth version of OpenAI’s large language models — in “a more highly regulated environment.”

The executive said the company received feedback from government customers who were experimenting with smaller models and open source models but wanted to be able to use the technology on more sensitive workloads.

Over 100 agencies have already deployed the technology in the commercial environment, the executive said, “and the majority of those customers are asking for the same capability in Azure Government.” 

Ling underscored data security measures for Azure OpenAI in the blog, calling it “a fundamental aspect” of the service. 

“This includes ensuring that prompts and proprietary data aren’t used to further train the model,” Ling wrote. “While Azure OpenAI Service can use in-house data as allowed by the agency, inputs and outcomes are not made available to Microsoft or others using the service.”

That means embeddings and training data aren’t available to other customers, nor are they used to train other models or used to improve the company’s or third-party services. 

According to Ling’s blog, the technology is already being used for a tool being developed by the National Institutes of Health’s National Library of Medicine. In collaboration with the National Cancer Institute, the agency is working on a large language model-based tool, called TrialGPT, that will match patients with clinical trials.

National Science Foundation rolls out NAIRR pilot with industry, agency support
https://fedscoop.com/nsf-launches-nairr-pilot/
Wed, 24 Jan 2024
The pilot brings together research resources from multiple federal and industry partners and will serve as a “proof of concept” for the full-scale project, according to NSF.

The National Science Foundation launched a pilot for the National Artificial Intelligence Research Resource on Wednesday, giving U.S.-based researchers and educators unique access to a variety of tools, data, and support to explore the technology.

The pilot for the resource, referred to as the NAIRR, is composed of contributions from 11 federal agencies and 25 private sector partners, including Microsoft, Amazon Web Services, NVIDIA, Intel, and IBM. Those contributions range from use of the Department of Energy’s Summit supercomputer to datasets from NASA and the National Oceanic and Atmospheric Administration to access for models from OpenAI, Anthropic, and Meta.

“A National AI Research Resource, simply put, has the potential to change the trajectory of our country’s approach to AI,” NSF Director Sethuraman Panchanathan told reporters on a call ahead of the launch. “It will lead the way for a healthy, trustworthy U.S. AI ecosystem.”

The idea for a NAIRR has been under discussion for some time as a way to provide researchers with the resources needed to carry out their work on AI, including advanced computing, data, software, and AI models. Supporters say a NAIRR is needed because the computational resources that AI demands aren’t often attainable for prospective academic researchers.

Katie Antypas, director of NSF’s Office of Advanced Cyberinfrastructure, underscored that need on the call with reporters, saying “the pilot is the first step to bridging this gap and will provide access to the research and education community across our country — all 50 states and territories.”

The launch comes ahead of a requirement in President Joe Biden’s Oct. 30 AI executive order for NSF to establish a pilot project for the resource within 90 days. According to an NSF release and accompanying call with reporters, the two-year pilot will serve as a “proof of concept” for the full-scale resource. 

Creating a pilot that would run parallel to a full buildout was among the options the NAIRR Task Force, which was co-chaired by NSF and the Office of Science and Technology Policy, presented in its implementation framework for the resource roughly a year ago. 

The pilot is divided into four focus areas: “NAIRR Open,” which will provide access to resources for AI research on the pilot’s portal; “NAIRR Secure,” an AI privacy- and security-focused component co-led by DOE and the National Institutes of Health; “NAIRR Software,” which will facilitate and explore the interoperable use of pilot resources; and “NAIRR Classroom,” which focuses on education, training, user support, and outreach.

Antypas said anticipated uses of the pilot might include a researcher seeking access to large models to investigate validation and verification or an educator from a community college, rural, or minority-serving institution who’s able to obtain AI resources for the students in their classroom.

When asked how resources are being vetted for the NAIRR, Antypas said there will be a process for datasets that become part of the resource. “We are going to be standing up an external ethics advisory committee to be providing independent advice on, you know, what are those standards? How do we develop those with a pilot?” Antypas said.

Quality of datasets came into focus recently after a Stanford report flagged the existence of child sexual abuse material on a popular AI research dataset known as LAION-5B. FedScoop previously reported that NSF doesn’t know if or how many researchers had used that dataset — it doesn’t track this aspect of principal investigators’ work — but highlighted the need for a NAIRR to provide researchers with trusted resources.

Among the support from industry, Microsoft is contributing $20 million in compute credits for its cloud computing platform Azure, in addition to access to its models, and NVIDIA is contributing $30 million in support, including $24 million in computing access on its DGX platform.

Some contributions are tied to specific uses. OpenAI, for example, will contribute “up to $1 million in credits for model access for research related to AI safety, evaluations, and societal impacts, and up to $250,000 in model access and/or ChatGPT accounts to support applied research and coursework at Historically Black Colleges and Universities and Minority Serving Institutions,” according to information provided by NSF. Anthropic, meanwhile, is providing 10 researchers working on climate change-related projects with API access to its Claude model.
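
As a concrete picture of what “API access” means in practice, a call to Claude through Anthropic’s published Python SDK looks roughly like the sketch below; the model name and question are illustrative assumptions, not details of the NAIRR arrangement.

```python
import os

import anthropic

# Hypothetical request from a climate researcher granted Claude API access;
# the model name and prompt are placeholders, not NAIRR specifics.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-2.1",  # illustrative model name
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize common approaches to downscaling climate model output."}
    ],
)
print(message.content[0].text)
```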

The list of partners could grow as time goes on. Tess deBlanc-Knowles, special assistant to the director for AI in the Office of the Director at NSF, noted on the call with reporters that the pilot came together on “a really ambitious timeline” and said “it’s important to note that this is just the beginning.”

deBlanc-Knowles said NSF hopes to bring on more partners and add more resources after the launch “so that we can serve more researchers, educators, and more places, and start to really make progress towards that bigger vision of the NAIRR of democratizing AI.”

USDA determined ChatGPT’s risk was ‘high,’ set up board to review generative AI use, documents show
https://fedscoop.com/usda-determined-chatgpt-risk-high-established-board/
Wed, 20 Dec 2023
OpenAI pushed back on a vulnerability cited in USDA’s March risk determination.

As OpenAI’s ChatGPT tool broke into the mainstream earlier this year, the U.S. Department of Agriculture determined that the generative artificial intelligence tool posed too high a risk to use on its network and prohibited its use, according to documents obtained by FedScoop. 

In October, seven months after that risk determination was made, department leaders distributed interim guidance that extended that prohibition more broadly to employee and contractor use of third-party generative AI tools in their official capacities and on government equipment. The agency also established a board that’s creating a process to review proposed uses of the technology going forward, according to documents obtained through a Freedom of Information Act request and the department’s response to FedScoop.

Information about USDA’s approach comes as agencies across the federal government are grappling with creating policies for generative AI tools within their agencies and coming to different conclusions about how to handle the nascent and rapidly growing technology. 

The Department of Homeland Security, for example, recently made public its conditional approval of generative AI tools for use in the department, including ChatGPT, Bing Chat, Claude 2 and DALL-E2. Meanwhile, NASA leaders told employees in May that the tools weren’t cleared for widespread use with “sensitive NASA data,” though they permitted use on personal accounts “following acceptable use policies.”

An Agriculture Department spokesperson told FedScoop in an emailed statement that the agency’s interim guidance, along with the White House’s AI executive order, “will help ensure that USDA, like other agencies across the federal government, is using this emerging, important technology safely, securely, and responsibly, while also delivering better results for the people who rely on its programs and services.”

According to the March 16 risk determination obtained by FedScoop, the department found that “ChatGPT displays multiple concerning indicators and vulnerabilities that will pose a risk if used in the USDA enterprise network infrastructure” and ultimately labeled that risk as “high.”

Specifically, the risk determination referenced a vulnerability documented in the National Vulnerability Database involving a WordPress plugin that appears to use ChatGPT. The determination said the vulnerability “describes a missing authorization check that allows users the ability to access data or perform actions that should be prohibited.” It also pointed to “insufficient safeguards.”

“While OpenAI alleges having safeguards in place to mitigate these risks, use cases demonstrate that malicious users can get around those safeguards by posing questions or requests differently to obtain the same results,” the risk determination said. “Use of ChatGPT poses a risk of security breaches or incidents associated with data entered [into] the tool by users, to include controlled unclassified information (CUI), proprietary government data, regulated Food and Agriculture (FA) sector data, and personal confidential data.”

In response to a FedScoop inquiry about the USDA’s determination, a spokesperson for OpenAI said the company was not affiliated with the WordPress plugin it cited. The spokesperson also pointed to DHS’s recent assessment that conditionally approved generative tools and noted the launch of ChatGPT Enterprise, which has additional security and privacy controls.

“We appreciate the U.S. government’s dedication to using AI safely and effectively to improve services for the public. We would be happy to discuss the safe use of our products to support the USDA’s work,” the spokesperson said. 

Under USDA’s interim guidance, which was distributed internally Oct. 16, the Generative AI Review Board includes representation from USDA’s chief data officer and the chief technology officer, in addition to representatives for cybersecurity, the general counsel’s office, and two mission areas. 

Since President Joe Biden’s executive order, the department’s CDO and responsible AI official, Chris Alvares, has been elevated to serve as its chief AI officer, and he also serves on the board in that capacity, the spokesperson said. That comes as agencies are starting to name CAIOs in light of a new position created under Biden’s order and subsequent White House guidance.

The board will meet monthly, the document said, and implement a process for reviewing proposed generative AI projects within 90 days, which would be roughly mid-January. It also stipulated that “any use cases currently in development or in use at the time of this memo should be paused until reviewed by the” Generative AI Review Board, and noted specifically that using AI language translation services is prohibited.

Submitting personal identifiable or non-public information to public generative AI tools is “a prohibited release of protected information” that employees must report, the document said. The spokesperson said there haven’t been any known instances where USDA personal identifiable information has been submitted to a generative AI tool, and “USDA has not received any reports of inappropriate GenAI output.”

Rebecca Heilweil contributed to this article.

This story was updated to correct the spelling of Chris Alvares’s name.

Microsoft rolls out generative AI roadmap for government services
https://fedscoop.com/microsoft-rolls-out-generative-ai-roadmap-for-government-services/
Tue, 31 Oct 2023
New AI services that Microsoft will roll out in the coming months include Azure OpenAI generative services for government, classified cloud workloads, intelligent recap of meetings, and open-source LLMs in Azure Government.

Microsoft on Tuesday will announce a slew of new cutting-edge artificial intelligence tools and capabilities through its Azure OpenAI Government and Microsoft 365 Government services, including classified cloud workloads and intelligent recap of meetings, as well as generative AI tools like content generation and summarization, code generation, and semantic search using its FedRAMP-approved systems.

“Government customers have signaled a strong, strong demand for the latest AI tools, especially for what we call our [Microsoft 365] co-pilot,” Candice Ling, vice president of Microsoft Federal, told FedScoop before the announcement. 

“By announcing the roadmap, we’re giving the agencies a heads up on how they can be prepared to adopt the capabilities that they want so much,” she added. “At the same time for those who haven’t done so, migrating to the cloud is a key first step to building and also looking at data governance, so that we can fully take advantage of the AI capabilities.”

Some of the key AI services that Microsoft will roll out in the coming months include: Azure OpenAI generative AI services for government, including GPT-3.5 Turbo and GPT-4 models; Azure OpenAI service for classified workloads; Teams Premium with intelligent recap in Microsoft 365 Government; Microsoft 365 Copilot update for government; and Open Source LLMs in Azure Government.

In a blog post shared exclusively with FedScoop that will publish Tuesday, Microsoft noted the higher levels of security and compliance required by government agencies when handling sensitive data. “To enable these agencies to fully realize the potential of AI, over the coming months Microsoft will begin rolling out new AI capabilities and infrastructure solutions across both our Azure commercial and Azure Government environments,” the blog post stated.

The new Azure OpenAI Service in Azure Government will enable the latest generative AI capabilities, including GPT-3.5 Turbo and GPT-4 models, for customers requiring higher levels of compliance and isolation. The product will be available in the first quarter of 2024.
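
For developers, moving from commercial Azure to Azure Government is mostly a configuration change: the resource is reached through a government-cloud endpoint rather than a commercial one. A minimal sketch, assuming the standard `openai` Python client and a placeholder Azure Government-style hostname:

```python
import os

from openai import AzureOpenAI

# Hypothetical Azure Government configuration; only the endpoint differs
# from a commercial Azure OpenAI resource. The hostname is a placeholder.
client = AzureOpenAI(
    azure_endpoint="https://agency-resource.openai.azure.us/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
```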

Microsoft this summer will preview Azure OpenAI Services in its “air-gapped classified clouds to select national security customers.” The generative AI platform will be brought to its isolated classified cloud environment, enabling national security leaders and operators to use critical AI capabilities to analyze highly sensitive data anytime and anywhere.

The tech giant’s Teams Premium service with intelligent recap of meetings is expected to roll out to government users during the spring of 2024. Intelligent recap uses AI to help users summarize meeting content and focus on key elements through AI-generated meeting notes and tasks.

“So every agency, their needs are going to be different. But the theme that we’re hearing across the board is how we can transform the way they can deliver services to citizens that could really drive critical outcomes,” Ling told FedScoop. 

Ling added that consumers don’t have to be advanced programmers or data scientists to use the systems. “It’s anyone being able to ask the question about your data and being able to process information quite quickly. So anyone can do that now. And that can transform how the agencies work, right?”

Microsoft 365 Copilot for government is also expected to roll out during the summer of 2024, giving access to a “transformational AI assistant in GCC, bringing generative AI to our comprehensive productivity suite for a host of government users,” according to the blog post.

The Seattle-based company will announce on Tuesday that it has enabled access to open source AI model Llama-2 via the Azure Machine Learning catalog in Azure Government. The company recognizes that “some mission requirements benefit from smaller generative AI models” in addition to its own OpenAI models.

Microsoft’s AI rollout builds upon the June launch of its Azure OpenAI Service for the government to allow federal agencies to use powerful language models to run within the company’s cloud service for U.S. government agencies, Azure Government.

Microsoft in July also received FedRAMP high authorization, giving federal agencies that manage some of the government’s most sensitive data access to powerful language models, including ChatGPT.

Sen. Schumer’s first AI insight forum focuses on 2024 election, federal regulators
https://fedscoop.com/sen-schumers-first-ai-insight-forum-focuses-on-2024-election-federal-regulators/
Fri, 15 Sep 2023
More than 65 senators and top tech CEOs debated openness and transparency for AI systems at the first meeting, among other key issues.

Two-thirds of the Senate along with top tech CEOs and labor and civil rights leaders gathered Wednesday on Capitol Hill to discuss the major AI issues affecting the world and to start sharing preliminary ideas on how the federal government could help solve them.

Senate Majority Leader Chuck Schumer’s first closed-door AI insight forum focused on issues including national security, privacy, high-risk applications, bias, and the implications of AI for the workforce, gathering those bullish on AI as well as skeptics and critics of the technology.

“The things we discussed were open AI, and the pros and cons of that, then health care — the amazing potential that AI could have in health care,” Schumer told reporters after the first of his nine planned AI insight forums.

“We talked about election law and the need to do something fairly immediate, before the election. We talked about the displacement of workers, both the training of workers into the new AI jobs but also what we do about displaced workers who might lose their jobs or have diminished jobs,” Schumer added. “We talked about who the regulators should be – lots of different decisions and questions about that. We talked about the need for immigration. We talked about transparency.” 

The AI insight forum included tech industry leaders like Google CEO Sundar Pichai; Tesla, X and SpaceX CEO Elon Musk; NVIDIA President Jensen Huang; Meta founder and CEO Mark Zuckerberg; technologist and Google alum Eric Schmidt; OpenAI CEO Sam Altman; and Microsoft CEO Satya Nadella, along with representatives from labor and civil rights advocacy groups.

Schumer said that tackling fake or deceptive AI-generated content, which can lead to widespread misinformation and disinformation, was the most time-sensitive problem to solve given the upcoming 2024 presidential election.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” said Schumer. 

“We talked about watermarking … that one has a quicker timetable maybe than some of the others and it’s very important to do,” Schumer added.

The top Democrat in the Senate said there was much discussion during the meeting about the creation of a new AI agency and that there was also debate about how to use some of the existing federal agencies to regulate AI.

South Dakota Sen. Mike Rounds, Schumer’s Republican counterpart in leading the bipartisan AI forums, said: “We’ve got to have the ability to provide good information to regulators. And it doesn’t mean that every single agency has to have all of the top-end, high-quality of professionals but we need that group of professionals who can be shared across the different agencies when it comes to AI.”

Although there were no significant voluntary commitments made during the first AI insight forum, tech leaders who participated in the forum said there was much debate around how open and transparent AI developers and those using AI in the federal government will be required to be.

“I think the main debates during the forum were around openness and transparency for AI systems based on where it is and where it will go in the future,” Clément Delangue, CEO of Hugging Face, an AI startup focused on open-source, for-profit machine learning platforms, told FedScoop after the forum.

“We emphasize the importance of openness and transparency because we believe open systems are the way to distribute power and distribute value. We think it’s important for the U.S. to create tens of millions of jobs in AI,” Delangue added. “To do that, you need open systems, because like companies, especially small companies, they can’t start from scratch. They need to work based on the science and the models and the datasets that are available for them. Open systems also kind of like create more inclusiveness for everyone to be at the table, participate.”

Rounds said the forums were a way for the U.S. to urgently take leadership on AI regulations and policy-making alongside its existing dominance in development of AI products and tools. 

“We need to be the leaders in the international community. And we have the opportunity, we’re there now. We don’t want to lose that,” Rounds told reporters. “And that means that we become the place where we create but we also share in many cases with the rest of the world, that maintains our leadership that came across very strong today as well.”

Some participants told FedScoop that there was much more agreement than disagreement in the room regarding AI challenges and policymaking.

“I think this was a framing conversation regarding the AI problems, because the following sessions we’ll get into, I think, more detail and try to work out proposals,” Eric Fanning, president and CEO of Aerospace Industries Association, told FedScoop during an interview after the forum.

“This was a chance to sort of see where people are more aligned, or maybe less aligned. But I think, on the big issues, there’ll be a lot of alignment,” said Fanning. “It just was illuminating the different ways, the different perspectives that were brought to the table and the debates. There’s a lot of work to be done. It’s not going to be an easy thing. Because there’s lots of different philosophies on open versus closed, for example.”
