Congress Archives | FedScoop https://fedscoop.com/tag/congress/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

House bill calls on EPA to update IT systems that store air quality data https://fedscoop.com/house-bill-calls-on-epa-to-update-it-systems-that-store-air-quality-data/ Wed, 08 May 2024 17:36:30 +0000 https://fedscoop.com/?p=78232 The “Clean Air in the Cloud Act” would codify recommendations from a Government Accountability Office report released in September 2023.

The post House bill calls on EPA to update IT systems that store air quality data appeared first on FedScoop.

The Environmental Protection Agency would be required to update the legacy IT systems it uses to store air quality data under new legislation in the House. 

The “Clean Air in the Cloud Act,” introduced Tuesday by Rep. Gerry Connolly, D-Va., pushes the EPA to update the IT systems that store data for AirNow and the Air Quality System (AQS). The bill’s requirements come directly from recommendations in a September 2023 Government Accountability Office study that Connolly requested. 

“I requested the GAO report on this issue because the federal government is only as good as the IT it utilizes,” Connolly said in a press release. “That’s true across government and it’s certainly true for the EPA. It is my hope that, with this legislation, the EPA can resolve the challenges posed by AQS and AirNow to best deliver results for the American people they service.”

The watchdog recommended that the EPA consider an operational analysis along with developing and documenting a business case for a new IT system. Those would be rooted in considerations for how a system would be able to address challenges posed by the existing legacy systems. The agency agreed with both recommendations.

However, the EPA disagreed with a GAO recommendation that the agency should identify factors for assessing if the agency’s systems are ready for either replacement or retirement.

The GAO found that the use of multiple systems for air quality monitoring “results in inefficient use of resources” for EPA and other monitoring agencies. Agency officials reported that finding and retaining IT staff who could work with AQS’s “outdated software” was “particularly challenging.”

While the EPA declined to comment on the new legislation, a spokesperson said that the agency is “happy to provide technical assistance when asked.”

House Modernization panel advances bill to improve CRS’s data access in first-ever markup https://fedscoop.com/house-modernization-advances-crs-data-access-bill/ Fri, 12 Apr 2024 21:15:04 +0000 https://fedscoop.com/?p=77188 The committee’s unanimous approval of legislation that aims to make it easier for the Congressional Research Service to get federal agency data was a milestone for the year-old subcommittee.

The post House Modernization panel advances bill to improve CRS’s data access in first-ever markup appeared first on FedScoop.

A bill to improve the Congressional Research Service’s access to federal data was one of two bipartisan pieces of legislation advanced Thursday at the first-ever markup by the Committee on House Administration’s newest subcommittee.

The Subcommittee on Modernization unanimously approved by voice vote the “Modernizing the Congressional Research Service’s Access to Data Act” (H.R. 7593), which is aimed at making it easier and faster for the research entity to obtain data from federal agencies and entities within the executive branch. 

“While CRS’s work is held up by bureaucratic processes and procedures, our work is held up. That is unacceptable, and our constituents deserve better,” Subcommittee Chairwoman Stephanie Bice, R-Okla., said at the markup. 

At a hearing in March, Robert Newlen, CRS’s interim director, told the subcommittee that the agency has difficulty obtaining information from government agencies and the legislation would address those “roadblocks.”

The committee also advanced by voice vote a bill that would direct the Library of Congress to publish digital annotated constitutions instead of the hardbound copies it’s currently required to produce. Bice said eliminating the print requirement is estimated to save just over a million dollars.

Rep. Derek Kilmer, D-Wash., the panel’s ranking member, noted that both pieces of legislation address recommendations from the Select Committee on the Modernization of Congress, which was a precursor to the creation of the subcommittee. 

Those recommendations were that congressional support agencies, like CRS, should report on challenges to accessing federal data and potential solutions, and for lawmakers to examine whether authorities for those agencies needed to be updated.

In addition to being a milestone for the subcommittee, which was created last February, the markup was also the first held by any House Administration subcommittee in 31 years, Bice said at the meeting.

“We’re trying to find ways to improve — modernize Congress, of course — but also just improve processes, and I think these are just two easy, simple wins that can make that happen,” Bice told FedScoop after the markup. 

The bills go next to the full committee.

Congressional offices experimenting with generative AI, though widespread adoption appears limited https://fedscoop.com/congressional-offices-experimenting-with-generative-ai-though-widespread-adoption-appears-limited/ Tue, 26 Mar 2024 14:48:07 +0000 https://fedscoop.com/?p=76816 A handful of lawmakers indicated they’re using AI in their offices, in response to a FedScoop inquiry to House and Senate AI caucus members.

The post Congressional offices experimenting with generative AI, though widespread adoption appears limited appeared first on FedScoop.

As generative artificial intelligence tools have made their way into public use, a few offices on Capitol Hill have also begun to experiment with them. Widespread use, however, appears to be limited. 

FedScoop inquiries to every member of the House and Senate AI caucuses yielded over a dozen responses from lawmakers’ offices about whether they are using generative AI tools, as well as if they have their own AI policies. Seven offices indicated or had previously stated that staff were using generative AI tools, five said they were not currently using the technology, and three provided a response but didn’t address whether their offices were currently using it. 

The varied responses from lawmakers and evolving policies for use in each chamber paint a picture of a legislative body exploring how to potentially use the technology while remaining cautious about its outputs. The exploration of generative AI by lawmakers and staff also comes as Congress attempts to create guardrails for the rapidly growing technology.

“I have recommended to my staff that you have to think about how you use ChatGPT and other tools to enhance productivity,” Rep. Ami Bera, D-Calif., told FedScoop in an interview, pointing to responding to constituent letters as an example of an area where the process could be streamlined.

But Bera also noted that while he has accessed ChatGPT, he doesn’t often use it. “I’d rather do the human interaction,” he said.

Meanwhile, Sen. Gary Peters, D-Mich., has policies for generative AI use in both his office and the majority office of the Homeland Security and Governmental Affairs Committee, which he chairs. 

“The policy permits the use of generative AI, and provides strong parameters to ensure the accuracy of any information compiled using generative AI, protect the privacy and confidentiality of constituents, ensure sensitive information is not shared outside of secure Senate channels, and guarantee that human judgment is not supplanted,” a Peters aide told FedScoop.

And some lawmakers noted they’ve explored the technology themselves.

Rep. Scott Franklin, R-Fla., told FedScoop that when ChatGPT first became public, he asked the service to write a floor speech on the topic of the day as a Republican member of Congress from Florida. Once the machine responded, Franklin said he joked with his communications staff that “y’all are in big trouble.”

While Franklin did not directly comment on AI use within his office during an interview with FedScoop, he did say that he’ll play with ChatGPT and doesn’t want to be “left behind” where the technology is concerned. 

House and Senate policies

As interest in the technology has grown, both House and Senate administrative arms have developed policies for generative tools. And while generative AI use is permitted in both chambers, each has its own restrictions.

The House Digital Services team, part of the House Chief Administrative Officer’s operation, purchased 40 ChatGPT Plus licenses last April to begin experimenting with the technology, and in June the CAO restricted ChatGPT use in the House to the ChatGPT Plus version only, while outlining guardrails. That restriction was first reported by Axios, and FedScoop independently confirmed it with a House aide. 

There is also indication that work is continuing on that policy. At a January hearing, House Deputy Chief Administrative Officer John Clocker shared that the office is developing a new policy for AI with the Committee on House Administration and said the CAO plans on creating guidance and training for House staff.

In a statement to FedScoop, the Committee on House Administration acknowledged that offices are experimenting with AI tools — ChatGPT Plus, specifically — for research and evaluation, and noted some offices are developing “tip sheets to help guide their use.”

“This is a practice we encourage. CAO is able to work with interested offices to craft tip sheets using lessons learned from earlier pilots,” the committee said in a statement. 

The committee has also continued to focus on institutional policies for AI governance, the statement said. “Towards that end, last month we updated our 2024 User’s Guide to include mention of data governance and this month we held internal discussions on AI guardrails which included national AI experts and House Officials.”

On the Senate side, the Sergeant at Arms’ Chief Information Officer last year issued a notice to offices allowing the use of ChatGPT, Microsoft Bing Chat and Google Bard, and outlining guidance for their use. The POPVOX Foundation was the first to share that document in a blog post, and FedScoop independently confirmed with a Senate aide that the policy was received in September. The document also indicated that the Sergeant at Arms CIO determined that those three tools had a “moderate level of risk if controls are followed.”

Congressional support agencies, including the Library of Congress, the Government Accountability Office and the Government Publishing Office, have also recently shared how they’re exploring AI to improve their work and services in testimony before lawmakers. Those uses could eventually include tools that support the work of congressional staff as well.

Aubrey Wilson, director of government innovation at the nonprofit POPVOX Foundation, who has written about AI use in the legislative branch, said the exploration of the technology is “really innovative for Congress.”

“Even though it might seem small, for these institutions that traditionally move slowly, the fact that you’re even seeing certain offices that have productively and proactively set these internal policies and are exploring these use cases,” Wilson said. “That is something to celebrate.”

Individual approaches

Of the offices that told FedScoop they do use the technology, most indicated that generative tools were used to assist with things like research and workflow, and a few, including Peters’ office, noted that they had their own policies to ensure the technology was being used appropriately. 

Clocker, of the CAO, had recommended offices adopt their own internal policies adjusted to their preferences and risk tolerance at the January Committee on House Administration hearing. POPVOX has also published a guide for congressional offices establishing their own policies for generative AI tools.

The office of Rep. Glenn Ivey, D-Md., for example, received approval from the House for its AI use and encouraged staff to use the account to assist in drafting materials. But they’ve also stressed that staff should use the account for work only, ensure they fact-check the outputs, and are transparent about their use of AI with supervisors, according to information provided by Ivey’s office. 

“Overall, it is a tool we have used to improve workflow and efficiencies, but it is not a prominent and redefining aspect of our operations,” said Ramón Korionoff, Ivey’s communications director.

Senate AI Caucus co-chair Martin Heinrich, D-N.M., also has a policy that provides guidance for responsible use of AI in his office. According to a Heinrich spokesperson, those policies “uphold a high standard of integrity rooted in the fundamental principle that his constituents ultimately benefit from the work of people.”

Even if they don’t have their own policies yet, other offices are looking into guidelines. Staff for one House Republican, for example, noted they were exploring best practices for AI for their office.

Two House lawmakers indicated they were keeping in line with CAO guidance when asked about a policy. Rep. Ro Khanna, D-Calif., said in a statement that his “office follows the guidance of the CAO and uses ChatGPT Plus for basic research and evaluation tasks.” 

Rep. Kevin Mullin, D-Calif., on the other hand, isn’t using generative AI tools in his office but said it “will continue to follow the CAO’s guidance.”

“While Rep Mullin is interested in continuing to learn about the various applications of AI and find bipartisan policy solutions to issues that may arise from this technology, our staff is not using or experimenting with generative AI tools at this time,” his office shared with FedScoop in a written statement.

That guidance has been met with some criticism, however. Rep. Ted Lieu, D-Calif, initially pushed back on those guardrails after they were announced, arguing the decision about what to use should be left up to individual offices. He also noted, at the time, that his staff were free to use the tools without restrictions. 

Sen. Todd Young, R-Ind., has also previously indicated he and his staff use the technology. A spokesperson for Young pointed FedScoop to a statement the senator made last year noting that he regularly uses AI and encourages his staff to use it as well, though he said staff are ultimately responsible for the end product.

Parodies and potential uses

Some uses of generative tools have made their way into hearings and remarks, albeit the uses are generally more tongue-in-cheek or meant to underscore the capabilities of the technology.

Sen. Chris Coons, D-Del., for example, began his remarks at a July hearing with an AI-generated parody of “New York, New York;” Sen. Richard Blumenthal, D-Conn., played an AI-generated audio clip at a May hearing that mimicked the sound of his own voice; Rep. Nancy Mace, R-S.C., delivered remarks at a March 2023 hearing written by ChatGPT; and Rep. Jake Auchincloss, D-Mass., delivered a speech on the House floor in January 2023 written by ChatGPT.

Rep. Don Beyer, D-Va., said anecdotally in an interview that he’s heard of others using it to draft press releases or speeches, though it’s not something his office uses. “This is no criticism of GPT4, but when you are looking at an enormous amount of written material, and you’re averaging it all out, you’re going to get something pretty average,” Beyer said.

Other lawmakers seemed interested in the uses of technology but haven’t yet experimented with it in their offices. 

Rep. Adriano Espaillat, D-N.Y., for example, said in an interview that while his office isn’t using AI right now, he and his staff are exploring how it could be used.

“We are looking at potential use of AI for fact-finding, for the verification of any data that we may have available to us, fact-checking matters that are important for us in terms of background information for debate,” Espaillat said, adding “but we’re not there yet.”

POPVOX Foundation’s Wilson, a former congressional staffer, said one of her takeaways from her time working in Congress was “how absolutely underwater” staff is with keeping up with information, from corresponding with federal agencies to letters from constituents. She said that generative AI could help congressional staff sort through information and data faster, which could inform data-driven policymaking.

“In a situation where Congress is not willing to give itself more people to help with the increased workflow, the idea that it’s innovatively allowing the people who are in Congress to explore use of better tools is one way that I think congressional capacity can really be aided,” Wilson said. 

Rebecca Heilweil contributed to this story.

AI watermarking could be exploited by bad actors to spread misinformation. But experts say the tech still must be adopted quickly https://fedscoop.com/ai-watermarking-misinformation-election-bad-actors-congress/ Wed, 03 Jan 2024 21:56:04 +0000 https://fedscoop.com/?p=75453 As Washington putters on AI watermarking legislation, TikTok and Adobe are leading the way with transparency standards.

The post AI watermarking could be exploited by bad actors to spread misinformation. But experts say the tech still must be adopted quickly appeared first on FedScoop.

By and large, government and private-sector technologists agree that the use of digital watermarking to verify AI-generated content should be a key component for tackling deepfakes and other forms of malicious misinformation and disinformation. 

But there is no clear consensus regarding what a digital watermark is, or what common standards and policies around it should be, leading many AI experts and policymakers to fear that the technology could fall short of its potential and even empower bad actors.

Industry groups and a handful of tech giants — most notably TikTok and Adobe — have been singled out by experts as leading the charge on AI watermarking and embracing a transparent approach to the technology. They’ll need all the help they can get during what promises to be an especially chaotic year in digital spaces. 

With over 2 billion people expected to vote in elections around the world in 2024, AI creators, scholars and politicians said in interviews with FedScoop that standards on the watermarking of AI-generated content must be tackled in the coming months — or else the proliferation of sophisticated, viral deepfakes and fake audio or video of politicians will continue unabated.

“This idea of authenticity, of having authentic trustworthy content, is at the heart of AI watermarking,” said Ramayya Krishnan, dean of Carnegie Mellon University’s information systems and public policy school and a member of President Joe Biden’s National Artificial Intelligence Advisory Committee. 

“Having a technological way of labeling how content was made and having an AI detection tool to go with that would help, and there’s a lot of interest in that, but it’s not a silver bullet,” he added. “There’s all sorts of enforcement issues.” 

Digital watermarking “a triage tool for harm reduction”

There are three main types of watermarks created by major tech companies and AI creators to reduce misinformation and build trust with users: visible watermarks added to images, videos or text by companies like Google, OpenAI or Getty to verify the authenticity of content; invisible watermarks that can only be detected through special algorithms or software; and cryptographic metadata, which details when a piece of content was created and how it has been edited or modified before someone consumes it.
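The second category above, invisible watermarks readable only by matching software, can be illustrated with a deliberately simple toy: hiding a short tag in the least significant bits (LSBs) of pixel values. This is a hypothetical sketch for intuition only, not the scheme used by Google, OpenAI, Getty or any other company named in this article, and the `embed`/`detect` functions and the fake pixel data are invented for the example. Production systems favor far more robust approaches, such as cryptographically signed provenance metadata.

```python
# Toy "invisible watermark": hide a short tag in pixel LSBs, then read it
# back with a matching detector. Illustrative only -- not a real product's
# scheme, and (as the experts below note) trivially destroyed or forged.

def embed(pixels, tag):
    """Overwrite the LSB of each leading pixel with one bit of the tag."""
    bits = [int(b) for ch in tag for b in format(ord(ch), "08b")]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    marked = [(p & ~1) | bit for p, bit in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def detect(pixels, tag_len):
    """Recover tag_len characters from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: tag_len * 8]]
    return "".join(
        chr(int("".join(map(str, bits[i:i + 8])), 2))
        for i in range(0, len(bits), 8)
    )

original = [120, 121, 130, 90, 88, 200, 201, 15] * 4  # 32 fake 8-bit pixels
marked = embed(original, "AI!")
assert detect(marked, 3) == "AI!"

# The fragility experts describe: any LSB disturbance (re-encoding,
# resizing, or deliberate tampering) erases the mark.
tampered = [p ^ 1 for p in marked]
assert detect(tampered, 3) != "AI!"
```

The last two lines are the point: a watermark this naive survives only until someone who knows it exists flips the bits, which is exactly the cat-and-mouse dynamic the researchers quoted below warn about.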

Using watermarking to try and reduce AI-generated misinformation and disinformation can be helpful when the average consumer is viewing a piece of content, but it can also backfire. Bad actors can manipulate a watermark and create even more misinformation, AI experts focused on watermarking told FedScoop.


“Watermarking technology has to be taken with a grain of salt because it is not so hard for someone with a knowledge of watermarks and AI to being able to break it and remove the watermark or manufacture one,” said Siwei Lyu, a University at Buffalo computer science professor who studies deepfakes and digital forgeries. 

Lyu added that digital watermarking is “not foolproof” and invisible watermarks are often more effective, though not without their flaws. 

“I think watermarks mostly play on people’s unawareness of their existence. So if they know they can, they will find a way to break it.”

A senior Senate independent staffer deeply involved in drafting legislation related to AI watermarking said the concern of bad actors using well-intentioned watermarks for manipulative purposes is “1,000% valid. It’s like Olympic athletes — now that I know that you’re looking for this drug, I’ll just take another drug. It’s like we need to try our best we can to keep pace with the bad actors.”

When it comes to AI watermarking, the Senate is currently in an “education and defining the problem” phase, the senior staffer said. Once the main problems with the technology are better defined, the staffer said they’ll begin to explore whether there is a legislative fix or an appropriations fix.

Senate Majority Leader Chuck Schumer said in September that ahead of the 2024 election, tackling issues around AI-generated content that is fake or deceptive and can lead to widespread misinformation and disinformation was an exceedingly time-sensitive problem.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” the New York Democrat said after his first closed-door AI Insight Forum.

“We talked about watermarking … that one has a quicker timetable maybe than some of the others, and it’s very important to do,” he added.

Another AI expert said that watermarking can be manipulated by bad actors in a small but highly consequential number of scenarios. Sam Gregory, executive director at the nonprofit WITNESS, which helps people use technology to promote human rights, said it’s best to think of AI watermarking as “almost a triage tool for harm reduction.” 

“You’re making available a greater range of signals on where content has come from that works for 95% of people’s communication,” he said. “But then you’ve got like 5% or 10% of situations where someone doesn’t use the watermark to conceal their identity or strip out information or perhaps they’re a bad actor. 

“It’s not a 100% solution,” Gregory added.

TikTok, Adobe leading the way on watermarking

Among major social media platforms, Chinese-owned TikTok has taken an early lead on watermarking, requiring users to be highly transparent when AI tools and effects are used within their content, three AI scholars told FedScoop. Furthermore, the company has created a culture of encouraging users to be comfortable with sharing the role that AI plays in altering their videos or photos in fun ways.

“TikTok shows you the audio track that was used, it shows you the stitch that was made, it shows you the AI effects used,” Gregory said. And as “the most commonly used platform by young people,” TikTok makes it “easy and comfortable to be transparent about how a piece of content was made with presence of AI in the mix.” 

TikTok recently announced new labels for disclosing AI-generated content. In a statement, the social media platform said that its policy “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content. Creators can now do this through the new label (or other types of disclosures, like a sticker or caption).”


Major AI developers, including Adobe and Microsoft, also support some forms of labeling AI in their products. Both tech giants are members of the Coalition for Content Provenance and Authenticity (C2PA), which addresses the prevalence of misinformation online through the development of technical standards for certifying the source and history of online content.

Jeffrey Young, a senior solutions consultant manager at Adobe, said the company has “had a big drive for the content authenticity initiative” due in large part to its awareness that bad actors use Photoshop to manipulate images “for nefarious reasons.” 

“We realized that we can’t keep getting out in front to determine if something is false, so we decided to flip it and say, ‘Let’s have everybody expect to say this is true,’” Young said. “So we’re working with camera manufacturers, working with websites on their end product, that they’re able to rollover that image and say, this was generated by [the Department of Homeland Security], they’ve signed it, and this is confirmed, and it hasn’t been manipulated since this publication.”

Most major tech companies are in favor of labeling AI content through watermarking and are working to create transparent watermarks, but the tech industry recognizes that it’s a simplistic solution, and other actions must be taken as well to comprehensively reduce AI-generated misinformation online. 

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, said the trade group — which represents Amazon, Apple and Google, among others — is “very supportive” of watermarking labeling and provenance authentication but acknowledges that those measures do “not solve all the issues that are out there.” 

“Ideally we’d have a system where everything would be clear and transparent, but we don’t have that yet,” Lekas said. “I think another thing that we are very supportive of is nontechnical, which is literacy — media literacy, digital literacy for people — because we can’t just rely on technology alone to solve all of our problems.”

In Washington, some momentum on AI watermarking

The White House, certain federal agencies and multiple prominent members of Congress have made watermarking and the reduction of AI-generated misinformation a high priority, pushing through a patchwork of suggested solutions to regulate AI and create policy safeguards around the technology when it comes to deepfakes and other manipulative content.

Through Biden’s October AI executive order, the Commerce Department’s National Institute of Standards and Technology has been charged with creating authentication and watermarking standards for generative AI systems — following up on discussions in the Senate about similar kinds of verification technologies.

Alondra Nelson, the former White House Office of Science and Technology Policy chief, said in an interview with FedScoop that there is enough familiarity with watermarking that it is no longer “a completely foreign kind of technological intervention or risk mitigation tactic.”

“I think that we have enough early days experience with watermarking that people have to use,” she said. “You’ve got to use it in different kinds of sectors for different kinds of concerns, like child sexual abuse and these sorts of things.” 

Congress has also introduced several pieces of legislation related to AI misinformation and watermarking, such as a bill from Rep. Yvette Clarke, D-N.Y., to regulate deepfakes by requiring content creators to digitally watermark certain content and make it a crime to fail to identify malicious deepfakes that are related to criminal conduct, incite violence or interfere with elections.

In September, Sens. Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new bipartisan legislation focused on banning the use of deceptive AI-generated content in elections. In October, Sens. Brian Schatz, D-Hawaii, and John Kennedy, R-La., introduced the bipartisan AI Labeling Act of 2023, which would require clear labeling and disclosure on AI-generated content and chatbots so consumers are aware when they’re interacting with any product powered by AI.

Meanwhile, the Federal Election Commission has been asked to establish a new rule requiring political campaigns and groups to disclose when their ads include AI-generated content.

In the absence of any AI legislation within Congress becoming law or garnering significant bipartisan consensus, the White House has pushed to get tech giants to sign voluntary commitments governing AI, which require steps such as watermarking AI-generated content. Adobe, IBM, Nvidia and others are on board. The private commitments backed by the Biden administration are seen as a stopgap. 

From Nelson’s point of view, NIST’s work on the creation of AI watermarking standards will “be taking it to another level.” 

“One hopes that CIOs and CTOs will take it up,” she said. “That remains to be seen.”

AI deepfake detection requires NSF and DARPA funding and new legislation, congressman says https://fedscoop.com/ai-deepfake-detection-requires-nsf-and-darpa-funding-and-new-legislation-congressman-says/ Thu, 09 Nov 2023 21:57:45 +0000 https://fedscoop.com/?p=74749 Rep. Gerry Connolly, D-Va., said additional funding of DARPA and NSF is “critical” to creating advanced and effective deepfake detection tools.

The post AI deepfake detection requires NSF and DARPA funding and new legislation, congressman says appeared first on FedScoop.

Lawmakers warned of the dangers of AI-generated deepfake content during a House Oversight subcommittee hearing Wednesday, pushing for additional funding for key federal agencies as well as new targeted legislation to tackle the problem.

There was bipartisan agreement during the “Advances in Deepfake Technology” hearing that the government should play a role in regulating deceptive, AI-generated deepfake photos and videos that could harm people, particularly related to fake pornographic material. 

Approximately 96 percent of deepfake videos online are nonconsensual pornography, and most of them depict women, according to a study by the Dutch AI company Sensity.

Rep. Gerry Connolly, D-Va., ranking member of the House Oversight Subcommittee on Cybersecurity, IT, and Government Innovation, said additional funding for the Defense Advanced Research Projects Agency and the National Science Foundation is “critical” to creating advanced and effective deepfake detection tools. 

Dr. David Doermann, the interim chair of computer science and engineering at SUNY Buffalo, said during the hearing that DARPA was taking the lead within the federal government to tackle deepfakes, but highlighted that there was more that the agency could do.

“I think the explainability issues of AI are things that DARPA is looking at now,” Doermann said. “But we need to have the trust and safety aspects explored at the grassroots level for all of these things” within DARPA.

Connolly noted that the Biden administration’s recent AI executive order included productive steps to tackle deepfakes, leaning “on tools like watermarking that can help people identify whether what they’re looking at online is authentic as a government document or tool of disinformation.” 

“The order instructs the Secretary of Commerce to work enterprise-wide to develop standards and best practices for detecting fake content and tracking the provenance of authentic information,” Connolly added.

Legislation to tackle deepfakes was introduced in May by Rep. Joe Morelle, D-N.Y. The “Preventing Deepfakes of Intimate Images Act” would make the sharing of nonconsensual deepfake pornography illegal. 

The proposed bill includes provisions to ensure that giving consent to create an AI image does not equate to consent to share the image. The bill also seeks to protect the anonymity of plaintiffs who sue to protect themselves from deepfake content.

Schumer to host AI workforce forum with labor unions, big banks and tech scholars https://fedscoop.com/schumer-to-host-ai-workforce-forum-with-labor-unions-big-banks-and-tech-scholars/ Thu, 26 Oct 2023 14:06:22 +0000 https://fedscoop.com/?p=73824 Senate Majority Leader Chuck Schumer will host JPMorgan CEO Jamie Dimon, Visa CEO Al Kelly and some of the top union leaders in the country next week for discussions on AI in the workplace.

The post Schumer to host AI workforce forum with labor unions, big banks and tech scholars appeared first on FedScoop.

Top leaders from some of the most powerful labor unions, financial institutions and think tanks in the U.S. will convene on Capitol Hill next week to discuss the nexus of artificial intelligence and the workforce, with an eye toward how the federal government can ensure AI benefits for those across the economic spectrum.  

Senate Majority Leader Chuck Schumer’s third bipartisan, closed-door AI insight forum, to be held Nov. 1, will lay down a new foundation for AI policy in the workplace and discuss potential avenues for regulating the technology by gathering both those bullish on AI as well as skeptics and critics of the technology.

“This Forum is focused on the intersection of AI and the workforce. It aims to explore how AI will alter the way that Americans work, including the risks and opportunities,” Schumer’s staff said in an invite to the forum viewed exclusively by FedScoop. 

“Workers in industries across the economy, from medicine, manufacturing, transportation and energy, to entertainment, hospitality, and more, will feel the impacts of widespread use of AI. The primary objective is to examine how the federal government can bolster the domestic AI workforce and ensure the use of AI benefits everyone,” the invite said.

The list of attendees invited to the third AI forum includes: JPMorgan CEO Jamie Dimon, Visa CEO Al Kelly, National Nurses United Executive Director Bonnie Castillo, UNITE HERE President D. Taylor, SAG-AFTRA Executive Director Duncan Crabtree-Ireland, IBEW President Kenneth Cooper, CWA President Claude Cummings, Accenture CEO Julie Sweet, Microsoft’s Senior Director of Education Policy Allyson Knox, and Center for American Progress (CAP) President Patrick Gaspard.

Schumer’s first AI forum in September was focused broadly on finding and agreeing on the most pressing problems related to AI, while the second forum earlier this week was focused on innovation through increased federal research and development funding, tech immigration issues, and ways to find common ground on AI safeguards.   

The New York Democrat has planned nine different “insight forums” that will focus on issues including national security, privacy, high-risk applications, bias and others.

Schumer’s AI push comes as federal officials, along with Congress, weigh myriad approaches to regulating AI.  There’s growing pressure on the U.S. to catch up to the European Union, which recently passed draft legislation called the AI Act. At the same time, federal officials are also searching for ways to push U.S. companies to the forefront of global AI technology development — particularly as China continues to invest in the technology.

Schumer in June introduced a plan to develop comprehensive legislation in Congress to regulate and advance the development of artificial intelligence in the U.S. Called the “Safe Innovation Framework for AI Policy,” the plan outlines ways to “protect, expand, and harness AI’s potential” as Congress pursues legislation.

The Biden administration has also expressed a commitment to safeguarding Americans’ rights and safety with a focus on protecting user privacy and addressing bias and misinformation in AI. Biden in June met with tech leaders and academics in the AI space in Silicon Valley.

Sen. Chuck Schumer’s second AI Insight Forum covers increased R&D funding, immigration challenges and safeguards https://fedscoop.com/sen-chuck-schumers-second-ai-insight-forum-covers-increased-rd-funding-immigration-challenges-and-safeguards/ Thu, 26 Oct 2023 01:27:07 +0000 https://fedscoop.com/?p=73818 Dozens of top AI minds and Senate leaders came together to discuss and debate how the federal government can ensure the U.S. remains a leader in AI while developing safe systems.

The post Sen. Chuck Schumer’s second AI Insight Forum covers increased R&D funding, immigration challenges and safeguards appeared first on FedScoop.

Sen. Chuck Schumer, D-N.Y., once again Tuesday brought together top artificial intelligence scholars, tech evangelists and civil rights leaders to discuss AI regulation and development, this time focusing the conversation on increased federal research and development funding, tech immigration issues, and ways to find common ground on AI safeguards.  

In Senate Majority Leader Schumer’s second closed-door bipartisan AI Insight Forum, participants also spotlighted how the federal government in particular can best ensure the U.S. remains a leader in AI innovation while developing better and safer autonomous systems.

“We came to an agreement that the government has to fund — now we say at least $32 billion. There are certain things we have to do, specific things in terms of funding NAIRR [the National Artificial Intelligence Research Resource] with at least $32 billion,” Schumer told reporters halfway through the forum.

“We have to have the government and the private sector collaborate on sharing information, sharing data, and the federal government needs to set up some models and some kind of ecosystem that allows the private sector to do even more. And if we don’t do this, China will get ahead of us,” added Schumer.

The forum was attended by top tech evangelists like Marc Andreessen of venture firm Andreessen Horowitz, and Patrick Collison, the CEO of Stripe, as well as key civil rights leaders like Derrick Johnson, the president of the NAACP, and Amanda Ballantyne, the director of the AFL-CIO Technology Institute. It also included former top White House AI officials like Alondra Nelson, the former OSTP director, and Suresh Venkatasubramanian, a former AI specialist within OSTP who is now a computer science professor at Brown University.  

South Dakota Sen. Mike Rounds, Schumer’s Republican counterpart in leading the bipartisan AI forums, said he would like to see the federal government have something akin to the American Society for Testing and Materials (ASTM) – an international standards organization that develops and publishes voluntary consensus technical standards – for AI. 

Rounds told FedScoop after the forum that such an entity or group within the government could be a “good referee” for AI and “provide technical assistance to a lot of different federal agencies that need to understand it better.”

Rounds added that there was significant consensus in the room Tuesday regarding AI problems and solutions but said there were some disagreements on how to handle the large language models that underpin most generative AI tools, like OpenAI’s ChatGPT. There was also disagreement on privacy and on who controls the open-source and private databases that most AI tools have been trained or built upon, the senator added.

One of the forum’s attendees, Ylli Bajraktari, CEO of the nonprofit Special Competitive Studies Project (SCSP) and the former executive director of the National Security Commission on AI, told FedScoop that the forum focused on three key ideas on how to boost AI innovation.

In addition to increasing funding for AI research and development, Bajraktari said the meeting also centered around ensuring the U.S. has a strong pipeline of skilled AI workers — both by educating and reskilling American citizens and through increased immigration — and agreeing upon necessary safeguards for the government to put on AI technologies so the technology doesn’t harm society.

“Right now, we have invested less than 1% of our GDP in [AI] R&D. So I think there was a general agreement, we got to put more money there. The issue is, how fast and how much because you cannot dump a lot of money all at once. We need the government to inject money, through our institutions like [the National Institute of Standards and Technology], the National Science Foundation, NASA and others,” Bajraktari told FedScoop.

Bajraktari also said there was agreement during the forum that the immediate impact of AI on jobs and the workforce should be studied further, perhaps through the creation of a national commission on automation and the future of work. This was also a recommendation in one of SCSP’s recent reports.

There was also discussion during the forum, Bajraktari said, about increasing the number of H-1B visas allowed into the U.S. to attract and retain more of the world’s brightest minds.

Max Tegmark, a physics professor at the Massachusetts Institute of Technology who is also president of the Future of Life Institute, told FedScoop that the AI forum was highly productive but that he was disappointed by the lack of discussion of the potential existential risks posed by artificial general intelligence (AGI), namely the fear that AGI tools could become so intelligent that they slip out of human control and cause harm.

“I think there was a very commendable push from Sen. Schumer and others for AI innovation being sustainable. But there was a great unwillingness to discuss large-scale risks, to discuss existential risks, and to discuss AGI at all,” Tegmark told FedScoop during an interview after the forum.

“I’m the only one that really brought up the subject and another one of the invited speakers explicitly said that we shouldn’t talk about these things. Nobody wants to talk about the thing that could transform everything in two or three years,” added Tegmark.

Tegmark’s institute earlier this year led an open letter, signed by Tesla CEO Elon Musk, calling for a pause in the development of powerful AI systems in order to focus on safe and responsible AI deployment.

Schumer’s AI Insight Forums will continue, with the third scheduled for Nov. 1 and focused on AI in the workforce, FedScoop has learned. The Senate majority leader has planned nine “insight forums” covering issues including national security, privacy, high-risk applications, bias, and the implications of AI for the workforce, gathering both those bullish on AI and the technology’s skeptics and critics.

Technology Modernization Fund awardees get creative to repay funds https://fedscoop.com/technology-modernization-fund-awardees-get-creative-to-repay-funds/ Thu, 21 Sep 2023 20:35:29 +0000 https://fedscoop.com/?p=73069 "We're seeing more creative ways in terms of repayment, not necessarily just saying I have investment A and I need to take the cost savings from investment A to be able to repay the TMF," NARA CIO Sheena Burrell said.

The post Technology Modernization Fund awardees get creative to repay funds appeared first on FedScoop.

Repayment has been a challenge for some federal agencies that have received funding from the Technology Modernization Fund to support transformational technology projects. But according to a federal CIO who sits on the TMF board, agencies have gotten “creative” to fulfill that repayment obligation when their modernization projects don’t lead to immediate or obvious cost savings.

Sheena Burrell, CIO of the National Archives and Records Administration and also a term member of the TMF board, said Thursday that agencies — including her own — have found alternative ways to transfer money back to the fund even when the projects themselves don’t generate tangible financial returns.

“We’re seeing more creative ways in terms of repayment, not necessarily just saying I have investment A and I need to take the cost savings from investment A to be able to repay the TMF,” Burrell said. “I think we’re seeing some agencies where they’re being a little bit more creative.”

NARA, for instance, received $9.1 million to modernize and digitize its legacy records processing systems under the TMF. To pay that back, Burrell said, the agency used money from its revolving fund through which it collects fees from other agencies it stores and digitizes records for.

“So we’re utilizing some of the money from that revolving fund in order to pay back our TMF … loan here,” she said during an Alliance for Digital Innovation event Thursday focused on the TMF.

Other agencies have taken similar steps, getting creative with the budgeting process or pulling money from fee-for-service revolving funds to make sure they repay their loans on time, Burrell explained.

The General Services Administration received a large TMF award to support its Login.gov program. Similar to NARA’s records management program, Login.gov takes fees from other agencies it supports that it can then use to pay off some of the funding it received, she said.

But some agencies don’t have revolving funds or working capital funds that they can pull from if need be. And, as Burrell explained, some projects, like cybersecurity modernization, don’t have near-term returns on investment to pay back the TMF within five years.

“We know that sometimes with cybersecurity proposals that they don’t necessarily have that return on investment or that cost savings from the perspective of, ‘I’m getting rid of this application or modernizing this application so it will cost me less than [operations and maintenance] of legacy technology,'” Burrell said.

In cases like that, the TMF board has seen agencies use the fund as an “accelerator,” she said.

“They’re utilizing those funds to accelerate the implementation of something that was maybe already planned, but for a later time. And then because it was already planned, and they requested that from a budget perspective, they could take that money and repay the TMF,” Burrell said.

The TMF payment process has been a point of contention for both lawmakers and agencies since the introduction of the TMF in 2018. Burrell said it initially made many agencies hesitant to apply for awards for fear of not being able to make good on repayment if they didn’t find the savings that they hoped for.

“There were a lot of agencies that were very nervous to take advantage of the TMF because of the repayment and how would they pay the money back?” Burrell said. “And when would the money need to be paid back? And what if there wasn’t any cost savings from the investment that the TMF approved? How would they work those funds into the repayment process?”

She said the guidance the Office of Management and Budget and the General Services Administration introduced in 2021, after the American Rescue Plan Act injected $1 billion into the TMF, eased that tension for agencies, as it created a more flexible system for them to repay money, requiring in some cases only partial or limited repayment depending on the project.

Some lawmakers, though, were not happy with that decision. Just this week, Reps. Nancy Mace, R-S.C., and Gerry Connolly, D-Va., introduced a bill revising the TMF to sustain it long term — but also bring it closer to its original intent in terms of repayment. If passed, it would require agencies to repay any funds in full to keep the program solvent and sustainable.

A congressional aide familiar with the bill told FedScoop this week: “The problem is that … those who influence how the TMF program office operates have veered away from congressional intent and have not required the fund to remain solvent, and have given out awards without requiring even a small percentage of those awards to be reimbursed.”

House administration panel seeking monthly AI updates from Congress-related agencies https://fedscoop.com/house-administration-panel-seeking-monthly-ai-updates-from-congress-related-agencies/ Thu, 14 Sep 2023 17:11:09 +0000 https://fedscoop.com/?p=72870 Updates encompass information on AI use case inventory work and actions to establish comprehensive AI governance documents in line with the NIST framework, among other things.

The post House administration panel seeking monthly AI updates from Congress-related agencies appeared first on FedScoop.

The Committee on House Administration has requested monthly updates on artificial intelligence actions from agencies that support the work of Congress, according to a new report from the panel detailing its AI efforts.

The requested updates seek information on actions taken to develop or maintain agencies’ AI use case inventories and share them publicly; ongoing work to establish “comprehensive AI-related governance documents in line with” a framework from the National Institute of Standards and Technology; and any other actions underway, such as creating AI advisory committees, pilots, or skills initiatives for staff.

The report emphasizes bipartisan efforts to establish transparency and standards for the nascent technology and comes as AI remains a focus on the Hill. Several committees have held hearings on AI this week, and the use of the technology in the legislative branch is expected to be a topic of discussion at the fifth Congressional Hackathon on Thursday.

“AI presents rank-and-file congressional staff with opportunities for dramatically increased efficiency across a wide variety of legislative and operational use cases,” the report said. “At the same time, AI presents the House with unique governance challenges due to the complex legislative data ecosystem and the House’s unique legislative, security, and oversight responsibilities.”

In addition to the monthly reports, the document summarized “preliminary accomplishments,” including working with NIST and the General Services Administration on governance documents and considering AI use case pilots built with data from sources like former members’ research repositories or appropriations and spending data.

It also disclosed that one agency that supports Congress, the Government Publishing Office, is already using AI to “power intranet searches and more quickly assist with customer inquiries of government publications” and has “three pilot programs planned once its governance documents are approved.”

The House, the report said, is also looking into “ways to fast-track an approval process for new AI-enabled functionality that comes from our pre-existing vendors” in an effort to improve workflows.

Once-experimental Congressional Hackathon gets new institutional support for fifth iteration https://fedscoop.com/congressional-hackathon-has-new-institutional-support/ Tue, 12 Sep 2023 20:35:01 +0000 https://fedscoop.com/?p=72735 The event has provided a “rare” opportunity for Congress and technologists to sit down together and brainstorm.

The post Once-experimental Congressional Hackathon gets new institutional support for fifth iteration appeared first on FedScoop.

Lawmakers, staffers, advocates, and technologists will come together Thursday to share ideas about technological solutions for legislative branch issues in a forum that has gradually become a fixture for those seeking to modernize Congress.

The fifth Congressional Hackathon will be held Sept. 14 with new support from the House’s Office of the Chief Administrative Officer, which is serving as a co-host for the first time alongside party leaders. That growing institutional backing is a milestone for the event, which was held for the first time in 2011 as an experiment and has endured through multiple majority changes and increasing polarization.

“It’s a place where staffers and technological experts get together and brainstorm, completely beyond any silo of their individual affiliations, which is all too rare on Capitol Hill,” said Matt Lira, who helped bring about the first hackathon as a then-staffer to House Majority Leader Eric Cantor, R-Va. “But each Hackathon has been defined by that, and it’s been interesting to see that carry on through multiple different political eras and personalities.”

In interviews with FedScoop, former staffers and tech advocates praised the event for providing a unique launchpad for a dialogue about technological solutions in Congress. Those discussions, they said, have contributed to major tech projects like providing public access to raw legislative data, digitizing casework processes for constituents, and even helping jumpstart an app for congressional tours.

“This is what’s been helping Congress better manage all the different data silos that it has, and modernize its tools,” said Daniel Schuman, policy director for the progressive tech policy advocacy group Demand Progress, adding “it’s a surprising point of leverage.”

The event’s outcome, Schuman said, “has been a fundamental transformation of the way that Congress manages this information technology over the last 15 years.”

Hackathon’s beginnings

The environment at the time the first hackathon took place was not unlike today with the emergence of artificial intelligence, Lira said. At that time, new technological tools, including social media and other digital platforms, were inspiring ideas. “It was a really interesting time to be dealing with Congress,” he said.

The concept for a congressionally hosted hackathon was initially suggested by Mark Zuckerberg, CEO of a Facebook that had yet to go public, at a meeting with Lira and Cantor in Silicon Valley. They took that idea back to Washington, where Lira said he later sat down with his counterpart in then-Democratic Whip Steny Hoyer’s office, Steve Dwyer, to talk about what a congressional version of a hackathon could be.

“I’d say it’s inspired by Silicon Valley circa 2010, but then Steve and I just sat down and really thought through what that could look like on the Hill,” Lira said.

Whereas traditional hackathons are generally time-limited events with computer programming or app development as the objective, the congressional version, they decided, would apply a similar collaborative approach to policy and operations. Less focus on actual code-writing would open participation up to more congressional stakeholders.

That first event was sponsored by Zuckerberg’s Facebook, now Meta, and was considered an experiment that wouldn’t necessarily be repeated. But interest in the event evolved into subsequent hackathons in 2015, 2017, and 2022 — though only the first had sponsorship from the social media giant.

Lira — who subsequently worked for Majority Leader Kevin McCarthy, R-Calif., and later in the White House — said it’s “been heartening to see that it’s continued” long after he left the Hill and after none of the original people involved remained in leadership offices. Dwyer, too, shares that sentiment.

“I’m excited about all that is new with the Hackathon this year. Speaker McCarthy will be joining with new host Democratic Leader Jeffries. Additionally, we have institutional support with the CAO being an official host,” Dwyer, who is now senior director for innovation at the CAO, said in an emailed statement. “The CAO’s role helps us advance our mission of being Member Focused, Service Driven – as we try to advance innovative solutions within Congress.”

Institutionalizing the hackathon was also among recommendations the House Select Committee on the Modernization of Congress made last year. It warned that the member-sponsored event could dissolve if people leave Congress and suggested institutional support would encourage other legislative agencies to participate as well.

Ideas in action

Among the ideas discussed at hackathons that have been implemented was the application programming interface (API) for Congress.gov, which was launched in September 2022. That idea, which provides more reliable access to legislative data for third-party developers, was among the recommendations from the 2015 hackathon.

Prior to the API, people resorted to “scraping” Congress.gov or using “bulk data” collections from the Government Publishing Office, the Library of Congress said in its announcement at the time, acknowledging that those were “somewhat imperfect measures.”
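For developers, the shift is concrete: instead of scraping pages, a client can request structured JSON. The sketch below shows what a minimal Congress.gov API v3 client helper might look like; the endpoint path, query parameters, and response field names are assumptions based on the API's public documentation, `DEMO_KEY` is a placeholder for a real API key, and the sample payload is illustrative rather than a live response.

```python
from urllib.parse import urlencode

BASE = "https://api.congress.gov/v3"  # assumed v3 base URL from the public docs

def bill_list_url(api_key, congress=118, limit=5):
    """Build a request URL for a list of bills in the given Congress."""
    query = urlencode({"api_key": api_key, "format": "json", "limit": limit})
    return f"{BASE}/bill/{congress}?{query}"

def bill_titles(response_json):
    """Pull (number, title) pairs out of a bill-list response dict."""
    return [(b.get("number"), b.get("title")) for b in response_json.get("bills", [])]

# Offline demonstration with a sample payload shaped like the documented response.
sample = {"bills": [{"congress": 118, "number": "1234",
                     "title": "Clean Air in the Cloud Act"}]}
print(bill_list_url("DEMO_KEY", limit=1))
print(bill_titles(sample))
```

In a real client, the generated URL would be fetched with any HTTP library and the parsed JSON handed to `bill_titles`; keeping URL construction and response parsing separate makes the parsing testable without network access.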

Other ideas that have become a reality include creating a digital version of “casework” intake forms for constituents requesting assistance from a lawmaker’s office, and developing a way to convert draft legislation from PDFs into more usable formats. 

The latter idea was accomplished through a project Schuman worked on called BilltoText.com. The tool, he said, was built “based on information that we learned at one of the hackathons to transform the bills from PDFs into text, so that people could download them and use them.”

At least one company found early footing at a Congressional Hackathon. Melissa Dargan, a former congressional staffer who left the Hill to go to business school, started her company TourTrackr following interest in her idea at a hackathon. That app helps lawmakers’ staff schedule tours for constituents and is currently being used in over 125 congressional offices. 

Organizing tours, which can be requested through congressional offices, is often a time-consuming process for staffers — a process Dargan knew firsthand. While brainstorming in business school about ways to innovate in an industry she knew, Dargan learned many offices were still tracking tour requests on paper or in Microsoft Excel and began thinking of ways to improve that process. From there, the idea for TourTrackr was born.

After her hackathon pitch, Dargan said she was immediately approached by companies that were focused on constituent relationship management, asking her whether the application was a real product yet and if they could work together. 

She said presenting her idea at the hackathon “exponentially expedited” its progress because it was a place to get people on board, get feedback, and get beta clients. “The hackathon helped light the fire,” she said.

Staying power

The fifth hackathon event comes amid the rising popularity of artificial intelligence, which has captured the imaginations of the public and private sector alike and is among the topics expected to be discussed. 

“This year, we hope to build on the successes of previous Hackathons and draw on new ideas and fresh perspectives to truly reform how Washington works,” McCarthy said in an announcement about the event. “I’m especially excited to see how we can incorporate Artificial Intelligence—a game-changing technology the House has already started to examine in bipartisan briefings with industry experts—into the legislative process.”

The idea would be a continuation of a hackathon discussion, as using AI for the legislative process was initially discussed at the 2017 event.

The half-day event is expected to follow a similar structure to prior hackathons, including brief remarks from the hosts, breakout sessions for brainstorming on various topics, and a reconvening to share recommendations. In addition to AI, other topics are likely to include legislative workflow, constituent casework, and community engagement, according to the event announcement.

The event will also be the first without either of the original hosts of the event — Cantor and Hoyer. McCarthy has hosted several and Minority Leader Hakeem Jeffries, D-N.Y., will be a host for the first time.

“We look forward to brainstorming and developing new projects that have the power to transform our democracy and make Congress more transparent and accessible,” Jeffries said in a statement included in the announcement.

Ultimately, Lira said, a lasting impact of the event is its help to change how people in Congress think about technology in the legislative branch and how it could be used in their daily lives.

The first hackathon, he said, “was a watershed moment, culturally, where you saw some of the more institutional actors in Congress — not just political leaders, but also just career dedicated public servants, people at the Library of Congress, in the CAO’s office, or elsewhere — who sort of reflected on the event and said, ‘I get it.’”
