ChatGPT Archives | FedScoop https://fedscoop.com/tag/chatgpt/

Inside NASA’s deliberations over ChatGPT https://fedscoop.com/inside-nasas-deliberations-over-chatgpt/ Wed, 22 May 2024 14:43:59 +0000 https://fedscoop.com/?p=78445 More than 300 pages of documents provide insight into how the space agency thought about generative AI, just as ChatGPT entered the public lexicon.

In the months after ChatGPT’s public release, leaders inside NASA debated the merits and flaws of generative AI tools, according to more than 300 pages of emails obtained by FedScoop, revealing both excitement and concerns within an agency known for its cautious approach to emergent technologies. 

NASA has so far taken a relatively proactive approach to generative AI, which the agency is considering for tasks like summarization and code-writing. Staff are currently working with the OpenAI tools built into Microsoft’s Azure service to analyze use cases. NASA is also weighing generative AI capabilities from its other cloud providers — and it’s in discussions with Google Cloud on plans to test Gemini, the competitor AI tool formerly known as Bard. 
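For context on what experimenting with OpenAI tools through Azure typically involves, here is a minimal sketch in Python of a summarization call of the kind described above. It assumes the openai package's Azure client; the endpoint, key, API version and deployment name are hypothetical placeholders, not details from NASA's environment.

```python
# Minimal sketch of a summarization call against an Azure-hosted OpenAI model.
# The endpoint, key, API version and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-agency.openai.azure.com/",  # hypothetical
    api_key="AZURE_OPENAI_KEY",          # pulled from a secrets store in practice
    api_version="2024-02-01",
)

document = "Non-sensitive, publicly releasable text to summarize goes here."

response = client.chat.completions.create(
    model="gpt-35-turbo",  # the Azure *deployment* name, not the raw model name
    messages=[
        {"role": "system", "content": "Summarize the user's document in three short bullet points."},
        {"role": "user", "content": document},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```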

Though NASA policy prohibits the use of sensitive data on generative AI systems, that won’t be the case forever. Jennifer Dooren, the deputy news chief of NASA, told FedScoop that the agency is now working with “leading vendors to approve generative AI systems” for use on sensitive data and anticipates those capabilities will be available soon. While the agency’s most recent AI inventory only includes one explicit reference to OpenAI technology, an updated list with more references to generative AI could be released publicly as soon as October. 

In the first weeks of 2023, and as ChatGPT entered the public lexicon, the agency’s internal discussions surrounding generative AI appeared to focus on two core values: researching and investing in technological advances and encouraging extreme caution on safety. Those conversations also show how the agency had to factor in myriad authorities and research interests to coordinate its use. 

“NASA was like anyone else during the time that ChatGPT was rolled out: trying to understand services like these, their capabilities and competencies, and their limitations, like any of us tried to do,” said Namrata Goswami, an independent space policy expert who reviewed the emails, which were obtained via a public records request. 

She continued: “NASA did not seem to have a prior understanding of generative AI, as well as how these may be different from a platform like Google Search. NASA also had limited knowledge of the tools and source structure of AI. Neither did it have the safety, security, and protocols in place to take advantage of generative AI. Instead, like any other institution [or] individual, its policy appeared to be reactive.” 

NASA’s response

Emails show early enthusiasm and demand internally for OpenAI technology — and confusion about how and when agency staffers could use it. In one January 2023 email, Brandon Ruffridge, from the Office of the Chief Information Officer at NASA’s Glenn Research Center, expressed frustration that without access to the tool, interns would have to spend time on “less important tasks” and that engineers and scientists’ research would be held back. In another email that month, Martin Garcia Jr., an enterprise data science operations lead in the OCIO at the Johnson Space Center, wrote that there was extensive interest in getting access to the tech.

By mid-February, Ed McLarney, the agency’s AI lead, had sent a message noting that, at least informally, he’d been telling people that ChatGPT had not been approved for IT use and that NASA data should only be used on NASA-approved systems. He also raised the idea of sending a workforce-wide message, which ended up going out in May. In those opening weeks, the emails seem to show growing pressure on the agency to establish permissions for the tool. 

“We have demand and user interest through the roof for this. If we slow roll it, we run [a] high risk of our customers going around us, doing it themselves in [an] unauthorized, non-secure manner, and having to clean up the mess later,” McLarney warned in a March email to other staff focused on the technology. Another email, from David Kelldorf, chief technology officer of the Johnson Space Center, noted that “many are chomping at the bits to try it out.”

But while some members of the space agency expressed optimism, others urged caution about the technology’s potential pitfalls. In one email, Martin Steele, a member of the data stewardship and strategy team at NASA’s Information, Data, and Analytics Services division, warned against assuming that ChatGPT had “intelligence” and stressed the importance of “The Human Element.” In a separate email, Steven Crawford, senior program executive for scientific data and computing with the agency’s Science Mission Directorate, expressed concerns about the tool’s potential to spread misinformation. (Crawford later told FedScoop that he’s now satisfied by NASA’s guardrails and has joined some generative AI efforts at the agency). 

Email from Steven Crawford, April 10, 2023.

In those first weeks and months of 2023, there were also tensions surrounding security and existing IT procedures. Karen Fallon, the director of Information, Data, and Analytics Services for NASA’s Chief Information Office operations, cautioned in March that enthusiasm for the technology shouldn’t trump agency leaders’ need to follow existing IT practices. (When asked for comment, NASA called Fallon’s concerns “valid and relevant.”)

Email from Karen Fallon, March 16, 2023.

In another instance, before NASA’s official policy was publicized in May, an AI researcher at the Goddard Space Flight Center asked if it would be acceptable for their team to use their own GPT instances with code that was already in the public domain. In response, McLarney explained that researchers should not use NASA emails for personal OpenAI accounts, be conscious about data and code leaks, and make sure both the data and code were public and non-sensitive. 

NASA later told FedScoop that the conversation presented “a preview of pre-decisional, pending CIO guidance” and that it aligned with NASA IT policy — though the agency noted that it doesn’t encourage employees to spend their own funds on IT services for space agency work. 

Email from Martin Garcia, Jr., April 7, 2023.

“As NASA continues to work to onboard generative AI systems it is working through those concerns and is mitigating risks appropriately,” Dooren, the agency’s deputy news chief, said. 

Of course, NASA’s debate comes as other federal agencies and companies continue to evaluate generative AI. Organizations are still learning how to approach the technology and its impact on daily work, said Sean Costigan, managing director of resilience strategy at the cybersecurity company Red Sift. NASA is no exception, he argued, and must consider potential risks, including misinformation, data privacy and security, and reduced human oversight. 

“It is critical that NASA maintains vigilance when adopting AI in space or on earth — wherever it may be — after all, the mission depends on humans understanding and accounting for risk,” he told FedScoop. “There should be no rush to adopt new technologies without fully understanding the opportunities and risks.” 

Greg Falco, a systems engineering professor at Cornell University who has focused on space infrastructure, noted that NASA tends to play catchup on new computing technologies and can fall behind the startup ecosystem. Generative AI wouldn’t necessarily be used for the most high-stakes aspects of the space agency’s work, but could help improve efficiency, he added.

NASA generative AI campaign.

“NASA is and was always successful due to [its] extremely cautious nature and extensive risk management practices. Especially these days, NASA is very risk [averse] when it comes to truly emergent computing capabilities,” he said. “However, they will not be solved anytime soon. There is a cost/benefit scale that needs to be tilted towards the benefits given the transformative change that will come in the next [three-to-five] years with Gen AI efficiency.”

He continued: “If NASA and other similar [government] agencies fail to hop on the generative AI train, they will quickly be outpaced not just by industry but by [nation-state] competitors. China has made fantastic government supported advancements in this domain which we see publicly through their [government] funded academic publications.”

Meanwhile, NASA continues to work on its broader AI policy. The space agency published an initial framework for ethical AI in 2021 that was meant to be a “conversation-starter,” but emails obtained by FedScoop show that the initial framework received criticism — and agency leaders were told to hold off.  The agency has since paused co-development on practitioners’ guidance on AI to focus instead on federal AI work, but plans to return to that work “in the road ahead,” according to Dooren.

The space agency also drafted an AI policy in 2023, but ultimately decided to delay it to wait for federal directives. NASA now plans to refine and publish the policy this year. 

State Department encouraging workers to use ChatGPT https://fedscoop.com/state-department-encouraging-workers-to-use-chatgpt/ Fri, 19 Apr 2024 18:10:15 +0000 https://fedscoop.com/?p=77397 The agency just launched an internal chatbot as the Biden administration pushes AI.

The State Department is encouraging its workforce to use generative AI tools, having launched a new internal chatbot to a thousand users this week. The move comes as the agency leans heavily on chatbots and other artificial intelligence-based tools amid the Biden administration’s push for departments to look for use cases for the technology. 

“Of our workforce, there are a lot of people who haven’t been playing with ChatGPT,” State Chief Information Officer Kelly Fletcher said Thursday at AIScoop’s AITalks event in Washington, D.C. “We’re encouraging them to do so, but they need training.”

The internal chatbot, which FedScoop previously reported on, is an example of how the agency is weighing how generative AI might help with tasks like summarization and translation. It comes in response to staff demand. 

Beyond the chatbot, the State Department is using artificial intelligence for other purposes, including declassifying documents, said Matthew Graviss, the agency’s chief data and artificial intelligence officer. The department is also using open-source models to help create a digital research assistant for certain mandated reports, though he didn’t name those documents.  

The department is also using public tools with public information to help synthesize information for ambassadors, Graviss said. “You don’t need FedRAMP this and FISMA that to do that kind of stuff,” he added. “Public tools work.” 

Earlier this month, FedScoop reported that the Department of State had removed several references to artificial intelligence use cases in its executive order-required inventory. 

Other agencies, meanwhile, have taken a variety of approaches to generative AI, with some more cautious about exploring the technology. Others are setting up sandboxes to explore generative AI tools, working, for instance, with versions of OpenAI tools available on Azure for Government. 

Congressional offices experimenting with generative AI, though widespread adoption appears limited https://fedscoop.com/congressional-offices-experimenting-with-generative-ai-though-widespread-adoption-appears-limited/ Tue, 26 Mar 2024 14:48:07 +0000 https://fedscoop.com/?p=76816 A handful of lawmakers indicated they’re using AI in their offices, in response to a FedScoop inquiry to House and Senate AI caucus members.

As generative artificial intelligence tools have made their way into public use, a few offices on Capitol Hill have also begun to experiment with them. Widespread use, however, appears to be limited. 

FedScoop inquiries to every member of the House and Senate AI caucuses yielded over a dozen responses from lawmakers’ offices about whether they are using generative AI tools, as well as if they have their own AI policies. Seven offices indicated or had previously stated that staff were using generative AI tools, five said they were not currently using the technology, and three provided a response but didn’t address whether their offices were currently using it. 

The varied responses from lawmakers and evolving policies for use in each chamber paint a picture of a legislative body exploring how to potentially use the technology while remaining cautious about its outputs. The exploration of generative AI by lawmakers and staff also comes as Congress attempts to create guardrails for the rapidly growing technology.

“I have recommended to my staff that you have to think about how you use ChatGPT and other tools to enhance productivity,” Rep. Ami Bera, D-Calif., told FedScoop in an interview, pointing to responding to constituent letters as an example of an area where the process could be streamlined.

But Bera also noted that while he has accessed ChatGPT, he doesn’t often use it. “I’d rather do the human interaction,” he said.

Meanwhile, Sen. Gary Peters, D-Mich., has policies for generative AI use in both his office and the majority office of the Homeland Security and Governmental Affairs Committee, which he chairs. 

“The policy permits the use of generative AI, and provides strong parameters to ensure the accuracy of any information compiled using generative AI, protect the privacy and confidentiality of constituents, ensure sensitive information is not shared outside of secure Senate channels, and guarantee that human judgment is not supplanted,” a Peters aide told FedScoop.

And some lawmakers noted they’ve explored the technology themselves.

Rep. Scott Franklin, R-Fla., told FedScoop that when ChatGPT first became public, he asked the service to write a floor speech on the topic of the day as a Republican member of Congress from Florida. Once the machine responded, Franklin said he joked with his communications staff that “y’all are in big trouble.”

While Franklin did not directly comment on AI use within his office during an interview with FedScoop, he did say that he’ll play with ChatGPT and doesn’t want to be “left behind” where the technology is concerned. 

House and Senate policies

As interest in the technology has grown, both House and Senate administrative arms have developed policies for generative tools. And while generative AI use is permitted in both chambers, each has its own restrictions.

The House Chief Administrative Officer’s House Digital Services purchased 40 ChatGPT Plus licenses last April to begin experimenting with the technology, and in June the CAO restricted ChatGPT use in the House to the ChatGPT Plus version only, while outlining guardrails. Axios first reported that restriction, and FedScoop independently confirmed it with a House aide. 

There are also indications that work on that policy is continuing. At a January hearing, House Deputy Chief Administrative Officer John Clocker shared that the office is developing a new policy for AI with the Committee on House Administration and said the CAO plans on creating guidance and training for House staff.

In a statement to FedScoop, the Committee on House Administration acknowledged that offices are experimenting with AI tools — ChatGPT Plus, specifically — for research and evaluation, and noted some offices are developing “tip sheets to help guide their use.”

“This is a practice we encourage. CAO is able to work with interested offices to craft tip sheets using lessons learned from earlier pilots,” the committee said in a statement. 

The committee has also continued to focus on institutional policies for AI governance, the statement said. “Towards that end, last month we updated our 2024 User’s Guide to include mention of data governance and this month we held internal discussions on AI guardrails which included national AI experts and House Officials.”

On the Senate side, the Sergeant at Arms’ Chief Information Officer issued a notice to offices last year allowing the use of ChatGPT, Microsoft Bing Chat, and Google Bard and outlining guidance for their use. The POPVOX Foundation was the first to share that document in a blog post, and FedScoop independently confirmed with a Senate aide that the policy was received in September. The document also indicated that the Sergeant at Arms CIO determined that those three tools had a “moderate level of risk if controls are followed.”

Congressional support agencies, including the Library of Congress, the Government Accountability Office and the Government Publishing Office, have also recently shared how they’re exploring AI to improve their work and services in testimony before lawmakers. Those uses could eventually include tools that support the work of congressional staff as well.

Aubrey Wilson, director of government innovation at the nonprofit POPVOX Foundation, who has written about AI use in the legislative branch, said the exploration of the technology is “really innovative for Congress.”

“Even though it might seem small, for these institutions that traditionally move slowly, the fact that you’re even seeing certain offices that have productively and proactively set these internal policies and are exploring these use cases,” Wilson said. “That is something to celebrate.”

Individual approaches

Of the offices that told FedScoop they do use the technology, most indicated that generative tools were used to assist with things like research and workflow, and a few, including Peters’ office, noted that they had their own policies to ensure the technology was being used appropriately. 

At the January Committee on House Administration hearing, Clocker, of the CAO, recommended that offices adopt their own internal policies adjusted to their preferences and risk tolerance. POPVOX has also published a guide for congressional offices establishing their own policies for generative AI tools.

The office of Rep. Glenn Ivey, D-Md., for example, received approval from the House for its AI use and encouraged staff to use the account to assist in drafting materials. But the office has also stressed that staff should use the account for work only, fact-check the outputs, and be transparent with supervisors about their use of AI, according to information provided by Ivey’s office. 

“Overall, it is a tool we have used to improve workflow and efficiencies, but it is not a prominent and redefining aspect of our operations,” said Ramón Korionoff, Ivey’s communications director.

Senate AI Caucus co-chair Martin Heinrich, D-N.M., also has a policy that provides guidance for responsible use of AI in his office. According to a Heinrich spokesperson, those policies “uphold a high standard of integrity rooted in the fundamental principle that his constituents ultimately benefit from the work of people.”

Even if they don’t have their own policies yet, other offices are looking into guidelines. Staff for one House Republican, for example, noted they were exploring best practices for AI for their office.

Two House lawmakers indicated they were keeping in line with CAO guidance when asked about a policy. Rep. Ro Khanna, D-Calif., said in a statement that his “office follows the guidance of the CAO and uses ChatGPT Plus for basic research and evaluation tasks.” 

Rep. Kevin Mullin, D-Calif., on the other hand, isn’t using generative AI tools in his office but said it “will continue to follow the CAO’s guidance.”

“While Rep Mullin is interested in continuing to learn about the various applications of AI and find bipartisan policy solutions to issues that may arise from this technology, our staff is not using or experimenting with generative AI tools at this time,” his office shared with FedScoop in a written statement.

That guidance has been met with some criticism, however. Rep. Ted Lieu, D-Calif., initially pushed back on those guardrails after they were announced, arguing the decision about what to use should be left up to individual offices. He also noted, at the time, that his staff were free to use the tools without restrictions. 

Sen. Todd Young, R-Ind., has also previously indicated he and his staff use the technology. A spokesperson for Young pointed FedScoop to a statement the senator made last year noting that he regularly uses AI and encourages his staff to use it as well, though he said staff are ultimately responsible for the end product.

Parodies and potential uses

Some uses of generative tools have made their way into hearings and remarks, though those uses are generally tongue-in-cheek or meant to underscore the capabilities of the technology.

Sen. Chris Coons, D-Del., for example, began his remarks at a July hearing with an AI-generated parody of “New York, New York;” Sen. Richard Blumenthal, D-Conn., played an AI-generated audio clip at a May hearing that mimicked the sound of his own voice; Rep. Nancy Mace, R-S.C., delivered remarks at a March 2023 hearing written by ChatGPT; and Rep. Jake Auchincloss, D-Mass., delivered a speech on the House floor in January 2023 written by ChatGPT.

Rep. Don Beyer, D-Va., said anecdotally in an interview that he’s heard of others using it to draft press releases or speeches, though it’s not something his office uses. “This is no criticism of GPT4, but when you are looking at an enormous amount of written material, and you’re averaging it all out, you’re going to get something pretty average,” Beyer said.

Other lawmakers seemed interested in uses of the technology but haven’t yet experimented with it in their offices. 

Rep. Adriano Espaillat, D-N.Y., for example, said in an interview that while his office isn’t using AI right now, he and his staff are exploring how it could be used.

“We are looking at potential use of AI for fact-finding, for the verification of any data that we may have available to us, fact-checking matters that are important for us in terms of background information for debate,” Espaillat said, adding “but we’re not there yet.”

POPVOX Foundation’s Wilson, a former congressional staffer, said one of her takeaways from her time working in Congress was “how absolutely underwater” staff is with keeping up with information, from corresponding with federal agencies to letters from constituents. She said that generative AI could help congressional staff sort through information and data faster, which could inform data-driven policymaking.

“In a situation where Congress is not willing to give itself more people to help with the increased workflow, the idea that it’s innovatively allowing the people who are in Congress to explore use of better tools is one way that I think congressional capacity can really be aided,” Wilson said. 

Rebecca Heilweil contributed to this story.

Export-Import Bank taking open-minded approach on the use of generative AI tools https://fedscoop.com/export-import-bank-permissive-on-generative-ai/ Fri, 01 Mar 2024 22:51:12 +0000 https://fedscoop.com/?p=76283 Addressing employee generative AI use is largely an evolution of the agency’s existing policies for general internet searches, said Ex-Im's Howard Spira.

The Export-Import Bank of the United States is among the agencies opting for a more permissive approach to generative AI tools, providing employees the same kind of access the independent agency grants to the internet at large, according to its top IT official.

“We do not block AI any more than we block general internet access,” Howard Spira, chief information officer of Ex-Im, said during a Thursday panel discussion hosted by the Advanced Technology Academic Research Center (ATARC).

Spira said the agency is approaching generative tools with discussions about accountability and best practices, such as not inputting private information into tools like ChatGPT or other public large language models. “But frankly, that is just an evolution of policies that we’ve had with respect to just even search queries on the general internet,” Spira said.

He emphasized the importance of context in AI usage, noting that the agency — whose mission is facilitating U.S. exports — makes the kinds of decisions that it believes constitute “a relatively low-risk environment” for AI. Most of the work the agency is doing with AI involves “embedded AI” within its existing environments, such as those for cyber and infrastructure monitoring.

“We’re also actually encouraging our staff to play with this,” Spira said.

His comments come as agencies across the federal government have grappled with how to address the use of generative AI tools by employees and contractors. Those policies have so far varied by agency depending on their individual needs and mission, according to FedScoop reporting.

While some agencies have taken a permissive approach like Ex-Im, others are approaching the tools with more caution.

Jennifer Diamantis, special counsel to the chief artificial intelligence officer in the Securities and Exchange Commission’s Office of Information Technology Strategy and Innovation, said during the panel that the SEC isn’t jumping into third-party generative AI tools yet, citing unknowns and risks. 

There is, however, a lot of exploration, learning, safe testing and making sure guardrails are followed, Diamantis said. She added that while the agency is exploring the technical side, there is also an opportunity right now to explore the process, policy and compliance side of things to make sure they’re ready to manage risks if and when they do move forward with the technology. 

Diamantis, who noted she wasn’t speaking for the commission or commissioners, encouraged people to use this time to focus not just on the technology, “but also, what do you need in terms of governance? What do you need in terms of updating your lifecycle process? What do you need in terms of upskilling, training for staff?”

In addition to exploration, the SEC is also educating its staff on AI. Diamantis said those efforts have included trainings — such as a recent one on responsible AI — and having outside speakers, as well as establishing an AI community of practice and a user group.

Spira similarly noted that Ex-Im has working groups addressing AI and is including discussions about the technology in its continuous strategy process. This year, that process for its IT portfolio included having “the portfolio owners identify potential use cases that they were interested in exploring” and the identification of embedded use cases, he said.

Tony Holmes, another panelist and Pluralsight’s director of public sector presales solution consulting for North America, underscored the importance of broad training on AI to build a workforce that isn’t afraid of the technology. 

“I know when I talk to people in my organization, when I talk to people at agencies, there are a lot of people that just haven’t touched it because they’re like, ‘we’re not sure about it and we’re a little bit scared of it,’” Holmes said. Exposure, he added, can help those people “understand it’s not scary” and “can be very productive.”

MITRE researched air traffic language AI tool for FAA, documents show https://fedscoop.com/mitre-air-traffic-conversation-ai-tool-faa-dot/ Wed, 14 Feb 2024 22:34:02 +0000 https://fedscoop.com/?p=76048 The Department of Transportation has been relatively mum about its work on AI.

MITRE, a public interest research nonprofit that receives federal funding, proposed a system for transcribing and studying conversations between pilots and air traffic controllers, according to documents obtained by FedScoop through a public records request. 

A presentation dated August 2023 and titled “Advanced Capabilities for Capturing Controller-Pilot dialogue” shows that MITRE engaged in a serious effort to study how natural language processing could be used to help the Federal Aviation Administration, and, in particular, to help with “understanding the safety-related and routine operations of the National Airspace System.”

MITRE, which supported the project through its Center for Advanced Aviation System Development, told FedScoop that the prototype is currently being transitioned to the FAA for “potential operational implementation.” Otherwise, it’s not clear what the current status of the tool is, as the agency’s artificial intelligence use case inventory was last updated in July 2023, according to a DOT page. The FAA did not respond to a request for comment and instead directed FedScoop to the Department of Transportation. 

“Communications between pilots and air traffic controllers are a crucial source of information and context for operations across the national airspace,” Greg Tennille, MITRE’s managing director for transportation safety, said in a statement to FedScoop in response to questions about the documents. “Collecting voice data accurately, efficiently and effectively can provide important insights into the national airspace and unearth trends and potential hazards.” 

The August 2023 presentation describes several ways that natural language processing, a type of AI meant to focus on interpreting and understanding speech and text, could be fine-tuned to understand conversations between air traffic controllers and pilots. The project reported on the performance of different strategies and models in terms of accuracy and provided recommendations. At the end, it also describes a brief exploration of how ChatGPT might be able to help with comprehension of Air Traffic Control sub-dialogues, noting that “the results were surprisingly good.”  
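The presentation itself does not include code, but the kind of experiment it describes, in which a chat model is handed a stretch of controller-pilot dialogue and asked to make sense of it, can be sketched in a few lines. The transcript, prompt and model name below are illustrative assumptions and not MITRE's actual pipeline.

```python
# Illustrative sketch only: asking a chat model to segment and label an
# air-traffic-control exchange. Transcript, prompt and model are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = """\
Tower: Delta 123, wind 270 at 10, runway 27 cleared to land.
Delta 123: Cleared to land runway 27, Delta 123.
Tower: Cessna 45X, hold short of runway 27.
Cessna 45X: Holding short runway 27, Cessna 45X."""

prompt = (
    "Split the transcript into sub-dialogues (one per aircraft). For each "
    "sub-dialogue, list the aircraft, the instruction issued, and whether "
    "the pilot's readback matches the instruction."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model; the name here is an assumption
    messages=[
        {"role": "system", "content": prompt},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```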

The presentation reveals how the often-overwhelmed aviation agency might try to take advantage of artificial intelligence and comes as the Biden administration continues to push federal agencies to look for ways to deploy the technology. 

At the same time, it also outlines potential interest in ChatGPT. While the Department of Transportation said it doesn’t have a relationship with OpenAI, other documents show that officials within the agency are interested in generative AI.  

The Department of Transportation and ChatGPT

The reference to ChatGPT in the project, though it appears to be provisional and not a core part of the research, is more evidence of how the Department of Transportation might use generative AI tools in the future. FedScoop previously reported, for example, that the DOT’s Pipeline and Hazardous Materials Safety Administration had disclosed a use case — described as “planned” and “not in production” — involving “ChatGPT to support the rulemaking processes.”

PHMSA, which said it’s continuing to study the short- and long-term risks and benefits associated with generative AI, has said it does not plan on using the technology for rulemaking. The agency also said that it has an agreement with Incentive Technology Group, worth several hundred thousand dollars, to explore generative AI pilots. 

PHMSA said that the project did not involve ChatGPT, but instead involved “Azure OpenAI Generative LLM version 3.5.” (OpenAI explains on its website that GPT-3.5 models can be used to understand and generate natural language or code, but PHMSA did not explain whether the reference to ChatGPT in the AI use case disclosure was a mistake or a distinct project from its work with Incentive Technology Group.)

Notably, while other agencies are beginning to develop policies for generative AI, the Department of Transportation has not responded to questions from FedScoop about what policies or guidance it might have surrounding the technology. 

Emails obtained by FedScoop through public records requests show that Chief Data Officer Dan Morgan had on hand a “generative AI” guidance document attributed to the government of New Zealand. An email last summer to the Department of Transportation’s AI Task Force from Matt Cuddy, operations research analyst at the DOT’s Volpe National Transportation Systems Center, shows that the agency had made large language models a topic of interest.

A publicly available document from 2019 said that through the task force, the DOT had made transportation-related AI “an agency research & development priority.” 

Last year, FedScoop reported that the Department of Transportation had disclosed the use of ChatGPT for code-writing assistance in its inventory, but then removed the entry and said it was made in error. The department has not responded to questions about how that error actually occurred. Emails obtained by FedScoop show that the incident attracted attention from Conrad Stosz, the artificial intelligence director in the Office of the Federal Chief Information Officer. 

In regard to this story, the Department of Transportation told FedScoop again that the FAA ChatGPT entry was made in error and that the “FAA does not use Chat GPT in any of its systems, including air traffic systems.” It also said that the use case was unrelated to the MITRE FAA project. 

USAID warned employees not to share private data on ChatGPT, memo shows https://fedscoop.com/usaid-warned-employees-not-to-share-private-data-on-chatgpt/ Thu, 04 Jan 2024 17:38:00 +0000 https://fedscoop.com/?p=75463 As of April, the international development agency does not have an outright ban on the generative AI tool.

Back in April, the U.S. Agency for International Development warned employees that they should only input information from “publicly-available sources” into generative artificial intelligence tools like ChatGPT. Until now, it wasn’t clear how, exactly, USAID was approaching the rapidly developing technology. 

Federal agencies have started crafting and solidifying their strategies for generative AI. Still, their approaches have varied. The Social Security Administration has temporarily banned the technology on its devices, while the Agriculture Department determined that ChatGPT’s risk was “high” and established a board to review potential generative AI use cases. NASA, which is using a version of OpenAI software provided through the Microsoft Azure cloud system, has set up a secure testing environment to study the technology.

Notably, the White House’s recent executive order on artificial intelligence discouraged agencies from outright forbidding the technology. 

The USAID memo, which FedScoop obtained through a public records request, was sent by an official within the agency’s Office of the Chief Information Officer and titled “Usage of ChatGPT and Large Language Models (LLMs).” Its approach appears to mirror that of the General Services Administration, as well as some other agencies, in avoiding an outright ban, though it’s not clear if the agency has made any updates since last year. USAID did not respond to a request for comment.

The general notice stated that “USAID has neither approved nor banned the use of ChatGPT or any LLMs for Agency Use.” For that reason, the memo explained, only information that is already public should be entered into these tools — and any content created with their help should be “referenced as output” from a large language model. 

“Artificial Intelligence (AI) and LLMs are powerful tools with enormous value, but the Agency should exercise a degree of caution in their use as their reliability, accuracy and trustworthiness are not proven,” the memo stated. “Additionally, LLMs have not demonstrated their compliance with Federal and USAID security requirements, provided transparency around the data collected, and addressed the resulting Privacy and Records Management implications.” 

USAID has released an action plan related to artificial intelligence, and the agency’s responsible AI official appears to have spoken about how generative AI tools can be used by the government. 

Still, a data governance page for the agency notes that “emerging technologies such as generative AI raise new questions around data ownership, the ethical use of data, and intellectual property rights, among others,” and USAID’s public list of AI use cases does not appear to include any generative AI applications. 

Madison Alder contributed to this article. 

USDA determined ChatGPT’s risk was ‘high,’ set up board to review generative AI use, documents show https://fedscoop.com/usda-determined-chatgpt-risk-high-established-board/ Wed, 20 Dec 2023 19:36:58 +0000 https://fedscoop.com/?p=75332 OpenAI pushed back on a vulnerability cited in USDA’s March risk determination.

As OpenAI’s ChatGPT tool broke into the mainstream earlier this year, the U.S. Department of Agriculture determined that the generative artificial intelligence tool posed too high a risk to use on its network and prohibited its use, according to documents obtained by FedScoop. 

In October, seven months after that risk determination was made, department leaders distributed interim guidance that extended that prohibition more broadly to employee and contractor use of third-party generative AI tools in their official capacities and on government equipment. The agency also established a board that’s creating a process to review proposed uses of the technology going forward, according to documents obtained through a Freedom of Information Act request and the department’s response to FedScoop.

Information about USDA’s approach comes as agencies across the federal government are grappling with creating policies for generative AI tools within their agencies and coming to different conclusions about how to handle the nascent and rapidly growing technology. 

The Department of Homeland Security, for example, recently made public its conditional approval of generative AI tools for use in the department, including ChatGPT, Bing Chat, Claude 2 and DALL-E2. Meanwhile, NASA leaders told employees in May that the tools weren’t cleared for widespread use with “sensitive NASA data,” though they permitted use on personal accounts “following acceptable use policies.”

An Agriculture Department spokesperson told FedScoop in an emailed statement that the agency’s interim guidance, along with the White House’s AI executive order, “will help ensure that USDA, like other agencies across the federal government, is using this emerging, important technology safely, securely, and responsibly, while also delivering better results for the people who rely on its programs and services.”

According to the March 16 risk determination obtained by FedScoop, the department found that “ChatGPT displays multiple concerning indicators and vulnerabilities that will pose a risk if used in the USDA enterprise network infrastructure” and ultimately labeled that risk as “high.”

Specifically, the risk determination referenced a vulnerability documented in the National Vulnerability Database involving a WordPress plugin that appears to use ChatGPT. The determination said the vulnerability “describes a missing authorization check that allows users the ability to access data or perform actions that should be prohibited.” It also pointed to “insufficient safeguards.”

“While OpenAI alleges having safeguards in place to mitigate these risks, use cases demonstrate that malicious users can get around those safeguards by posing questions or requests differently to obtain the same results,” the risk determination said. “Use of ChatGPT poses a risk of security breaches or incidents associated with data entered [into] the tool by users, to include controlled unclassified information (CUI), proprietary government data, regulated Food and Agriculture (FA) sector data, and personal confidential data.”

In response to a FedScoop inquiry about the USDA’s determination, a spokesperson for OpenAI said the company was not affiliated with the WordPress plugin it cited. The spokesperson also pointed to DHS’s recent assessment that conditionally approved generative tools and noted the launch of ChatGPT Enterprise, which has additional security and privacy controls.

“We appreciate the U.S. government’s dedication to using AI safely and effectively to improve services for the public. We would be happy to discuss the safe use of our products to support the USDA’s work,” the spokesperson said. 

Under USDA’s interim guidance, which was distributed internally Oct. 16, the Generative AI Review Board includes representation from USDA’s chief data officer and the chief technology officer, in addition to representatives for cybersecurity, the general counsel’s office, and two mission areas. 

Since President Joe Biden’s executive order, the department’s CDO and responsible AI official, Chris Alvares, has been elevated to serve as its chief AI officer, and he also serves on the board in that capacity, the spokesperson said. That comes as agencies are starting to name CAIOs in light of a new position created under Biden’s order and subsequent White House guidance.

The board will meet monthly, the document said, and implement a process for reviewing proposed generative AI projects within 90 days, which would be roughly mid-January. It also stipulated that “any use cases currently in development or in use at the time of this memo should be paused until reviewed by the” Generative AI Review Board, and noted specifically that using AI language translation services is prohibited.

Submitting personal identifiable or non-public information to public generative AI tools is “a prohibited release of protected information” that employees must report, the document said. The spokesperson said there haven’t been any known instances where USDA personal identifiable information has been submitted to a generative AI tool, and “USDA has not received any reports of inappropriate GenAI output.”

Rebecca Heilweil contributed to this article.

This story was updated to correct the spelling of Chris Alvares’s name.

Energy Department working on preliminary generative AI initiatives https://fedscoop.com/energy-department-working-on-preliminary-generative-ai-initiatives/ Wed, 27 Sep 2023 17:11:14 +0000 https://fedscoop.com/?p=73151 The two projects include a user guide focused on how the technology correlates with the department's existing IT guidelines as well as an AI Discovery Zone "sandbox" for working with these systems.

The Department of Energy is working on a pair of projects related to generative artificial intelligence, Energy CIO Ann Dunkin told FedScoop in a recent interview.

These initiatives, which are still in development, include a user guide focused on how the technology correlates with the department’s existing IT guidelines as well as an AI Discovery Zone “sandbox” for working with these systems.

Both initiatives are focused on generative AI, said Dunkin, and come as other federal agencies wrestle with how to deploy, and monitor, these kinds of systems, which include OpenAI’s ChatGPT and Google’s Bard tools.

NASA, for example, sent a notice to employees earlier this year outlining how they should approach the technology, with some researchers at the agency now beginning efforts to test generative AI in a controlled environment. The Pipeline and Hazardous Materials Safety Administration is apparently exploring plans for a ChatGPT pilot for assistance in the rulemaking process (though the agency says it’s not currently using generative AI for that purpose). Meanwhile, the Office of Management and Budget is expected to release guidance on federal agency use of the technology.

“When we talk about that guidance and the Discovery Zone, those are specifically around generative AI,” Dunkin told FedScoop. “Before ChatGPT, the barriers to entry to playing with AI were sufficiently high that we didn’t randomly have people just playing with AI. But now, generative AI creates a different set of opportunities for people to use it. So, definitely, there’s a lot of interest.”

Dunkin said that the department’s current IT policies provide “more than sufficient” guidelines on how to use AI, but added that the upcoming guide will also help connect those policies to the emerging technology.

“We have to remember that we have data that we steward for other people and the labs have federal data they steward for the government,” said Dunkin. “One of the reasons of Discovery Zone is to set up so that individuals are able to identify opportunities and then we’re gonna have to figure out how we build those things into our processes while protecting our data.”

More broadly, the Department of Energy has disclosed around 180 artificial intelligence use cases, according to an inventory the agency updated earlier this summer. Many of these use cases include work at the National Energy Technology Laboratory, but the technology is being used to help with the department’s internal operations, too.

While speaking with FedScoop, Dunkin added that several other agencies had asked the Energy Department for demos of a chatbot used by its small business team to help businesses apply to become vendors with the agency.

Pipeline safety agency’s proposed pilot for ChatGPT in rulemaking raises questions https://fedscoop.com/pipeline-safety-agencys-proposed-pilot-for-chatgpt-in-rulemaking-raises-questions/ Tue, 05 Sep 2023 18:04:20 +0000 https://fedscoop.com/?p=72484 The Pipeline and Hazardous Materials Safety Administration is considering using OpenAI in the rulemaking process, according to a Transportation Department AI inventory.

The Pipeline and Hazardous Materials Safety Administration is exploring using ChatGPT in the rulemaking process, according to a disclosure by its parent agency, the Department of Transportation.

According to a posting on the agency’s public AI inventory, PHMSA is weighing an “artificial intelligence support for rulemaking use case.” The project, according to the posting, involves using ChatGPT to support the rulemaking “processes to provide significant efficiencies, reduction of effort, or the ability to scale efforts for unusual levels of public scrutiny or interest.” The agency told FedScoop that, right now, it has no official plans to implement such technology.

Interest from PHMSA, which creates regulations for the movement of potentially dangerous materials, comes as other agencies, including NASA and the Defense Department, begin considering the role of generative AI tools in their work.

Still, PHMSA’s concept for a technology pilot that would use ChatGPT to analyze comments submitted to the agency about regulations it’s considering raises concerns about what role, if any, the technology should play in the regulatory process, according to an expert on AI and civil liberties.

“The idea that agencies will use a tool notorious for factual inaccuracies for development of rules that forbid arbitrary and capricious rule-making processes is concerning,” Ben Winters, an attorney and the leader of the AI and Human Rights project at the Electronic Privacy Information Center, said in an email to FedScoop. “Especially, the PHMSA, whose rules often concern potentially life-altering exposure to hazardous materials.”

The Transportation Department’s AI inventory states that the OpenAI chatbot would be used to conduct sentiment analysis on comments sent to the agency about proposed rules. The tool could be used for analyzing the “relevance” of the comments, providing a “synopsis” for comments, “cataloging of comments,” and identifying duplicates.
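As a rough illustration of what the inventory entry describes, and not PHMSA's actual design, a comment-triage step built on a large language model might look something like the sketch below; the prompt wording, JSON fields and model name are assumptions made for the example.

```python
# Hedged sketch of LLM-assisted comment triage for a rulemaking docket.
# Prompt wording, JSON fields and model name are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

comments = [
    "The proposed valve-inspection interval is too long for aging pipelines.",
    "I support the rule as written.",
    "The proposed valve inspection interval is too long for ageing pipelines!",
]

results = []
for text in comments:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat-capable model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Return JSON with keys: sentiment (support/oppose/neutral), "
                "relevant_to_rule (true/false), synopsis (one sentence)."
            )},
            {"role": "user", "content": text},
        ],
    )
    results.append(json.loads(response.choices[0].message.content))

# Crude duplicate flagging: identical synopses suggest near-duplicate comments.
seen = {}
for comment, r in zip(comments, results):
    r["duplicate_of"] = seen.get(r["synopsis"])
    seen.setdefault(r["synopsis"], comment)

print(json.dumps(results, indent=2))
```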

When asked about the use case, PHMSA emphasized that the project is still in a very early stage.

“PHMSA, like many other federal agencies, is exploring the responsible and ethical use of AI through limited pilots and demonstration projects,” the agency told FedScoop in a statement. “These pilots and projects are designed to ensure alignment with recent guidance from the Administration on the appropriate use of AI in the federal government.”

The agency continued: “At this time, PHMSA is not using, and does not plan on using any generative AI tools or commercial software for generative AI like OpenAI to influence the rulemaking process. PHMSA is working with our stakeholders to assess both short term and long term risks from generative AI.”

In the agency’s AI inventory, which was last updated in July, the project is described as “a pilot initiative” that’s “planned” and “not in production.”

Winters, from EPIC, questioned whether ChatGPT is an appropriate technology for the rulemaking process. He argued that relevance analysis could ultimately result in an agency missing a novel point it hadn’t considered before, and added that sentiment analysis isn’t a “relevant consideration” of the Administrative Procedure Act’s rulemaking process.

“[S]ummaries by ChatGPT are prone to factual inaccuracies and a limited and outdated corpus of information,” he said. “Most of these functions could not be reliably achieved by ChatGPT.”

OpenAI did not respond to a request for comment by publication time.

There are other instances where the DOT’s AI activities, at least as described on the agency’s official AI inventory, have raised questions. Earlier this year, the Department removed a reference to the Federal Aviation Administration’s Air Traffic Office using ChatGPT for code-writing assistance in response to FedScoop questions.

Stanford researchers highlighted major issues in the AI inventory compliance process at the end of last year. FedScoop has reported on ongoing issues related to these inventories and the AI use cases they’ve revealed.

Madison Alder contributed reporting. 

Regulating AI risk: Why we need to revamp the ‘AI Bill of Rights’ and lean on depoliticized third parties https://fedscoop.com/regulating-ai-risk-why-we-need-to-revamp-the-ai-bill-of-rights-and-lean-on-depoliticized-third-parties/ Thu, 31 Aug 2023 18:51:09 +0000 https://fedscoop.com/?p=72428 In an exclusive commentary, Arthur Maccabe argues that AI must be regulated, and that it shouldn't be the job of the federal government alone.

The AI debate has transitioned from doomsday prophecies to big questions about its risks and how to effectively regulate AI technologies. AI brings a new level of intricacy to an already complex regulatory landscape as a rapidly evolving technology that will likely outpace the creation of comprehensive regulation.

While AI has tremendous potential to increase efficiencies, create new types of job opportunities and enable innovative public-private partnerships, it’s important to regulate its risks. Threats to U.S. cybersecurity and national defense are a major concern, along with the risk of bias and the ability of these tools to spread disinformation quickly and effectively. Additionally, there is a need for increased transparency amidst the development and ongoing use of AI, especially with popular, widely deployed tools like ChatGPT.

Washington, D.C., is more focused on AI regulation than ever. The Biden administration recently announced the National Institute of Standards and Technology’s (NIST) launch of a new AI public working group. Composed of experts from the private and public sectors, the group aims to better understand and tackle the risks of rapidly advancing generative AI. Additionally, Congress has held nearly a dozen hearings on AI since March.

While this momentum demonstrates progress, there is an urgent need to regulate AI as risks continue to emerge and other nations deploy their own AI regulation. Effectively regulating AI will first require the development of a regulatory framework created and upheld by a responsible and respected entity and produced with input from industry, academia and the federal government.  

Addressing biases through the federal government and academia 

This framework must address the potential biases of the technology and clearly articulate the rights of individuals and communities. The Blueprint for an AI Bill of Rights developed by the Office of Science and Technology Policy (OSTP) is a good starting point. However, it doesn’t tie back to the original Bill of Rights or the Privacy Act of 1974, which articulates rights that individuals have in protecting their personal data. Going forward, it will be important to explicitly note why an AI-specific version is needed. The government can contribute to the framework by creating a stronger foundation for the AI Bill of Rights, one that addresses both implicit and explicit AI biases. 

This regulatory framework should be motivated by potential risks to these rights. Regulations will need to be evaluated and updated regularly, as there can be unintended and unexpected consequences – like those of the European Union’s General Data Protection Regulation (GDPR). That regulation, meant to safeguard personal data, resulted in unintentionally high compliance costs that disproportionately impacted smaller businesses.

Academia’s commitment to scholarship, debate, and collaboration can also enable the formation of interdisciplinary teams to tackle AI system challenges. Fairness, for example, is a social construct; ensuring that a computational system is fair will require collaboration between social scientists and computer scientists. The emergence of generative AI systems like ChatGPT raises new questions about creation and learning, necessitating engagement from an even broader range of disciplines.

Why a regulatory framework alone won’t work 

Regulating AI shouldn’t just be the job of the federal government. The highly politicized legislative process is lengthy, which isn’t conducive to quickly evolving AI technology. Collaboration with industry, academia and professional societies is key to successfully deploying and enforcing AI regulation.

In Washington, D.C., previous attempts at AI regulation policy have been limited in scope and have ignited a debate about the federal government’s role. For example, the Algorithmic Accountability Act of 2022, which aimed to promote transparency and accountability in AI systems, was introduced in Congress but did not pass into law. While it did involve government oversight, it also encouraged industry self-regulation by giving companies flexibility in designing their own methods for conducting impact assessments. 

Additionally, Sen. Chuck Schumer, D-N.Y., recently introduced the Safe Innovation Framework for AI Policy to develop comprehensive legislation to regulate and advance AI development, and raised questions about the federal government’s role in AI regulation.

Third-party self-regulation is a key component 

There are existing models of self-regulation used in other industries that could work for AI to complement this legislative framework. For example, the financial industry has implemented self-regulatory processes through organizations like the National Futures Association to certify that the products developed by its licensed members are valid.  

Self-regulation in AI could include third-party certification for AI products from professional societies like the Association for Computing Machinery or the Institute of Electrical and Electronics Engineers. Professional societies draw members from academia and industry and can collaborate with government entities like NIST. They are also nimble enough to keep up with the rapid rate of change, which can help depolarize and depoliticize AI regulation.  

Additionally, establishing and reviewing regulations could be done through Blue Ribbon panels organized by the National Academies which should include participants from government, industry and academia, especially the social sciences and humanities.

Across the globe, the race is on to regulate AI, with the European Union already taking steps by releasing its regulatory framework. In the United States, elected officials in areas like New York City have passed laws on how companies can use AI in hiring and promotion.

When it comes to AI, we must move quickly to protect fundamental rights. Leveraging the expertise of academia and industry experts, and taking a risk-based approach with self-regulating entities will be crucial. Now is the time to organize, evaluate and regulate AI. 

Dr. Arthur Maccabe is the executive director of the Institute for Computation and Data-Enabled Insight (ICDI) at the University of Arizona. Prior to this, he was the computer science and mathematics division director at Oak Ridge National Laboratory (ORNL) where he was responsible for fundamental research enabling and enabled by the nation’s leadership class Peta-scale computing capabilities, and he was co-author of the US Department of Energy’s roadmap for intelligent computing. Prior to that, he spent 26 years teaching computer science and serving as Chief Information Officer at the University of New Mexico and was instrumental in developing the high-performance computing capabilities at Sandia National Laboratory.
