Exclusive Archives | FedScoop
https://fedscoop.com/category/exclusive/
Wed, 29 May 2024 13:07:50 +0000

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

Senate Democrat pushes for expansion to copyright act to include generative AI research
https://fedscoop.com/senate-democrat-pushes-for-expansion-to-copyright-act-to-include-generative-ai-research/
Tue, 28 May 2024 20:56:28 +0000

In a letter to the Library of Congress, Sen. Mark Warner, D-Va., proposed an expansion to an exemption for generative AI “good-faith security research.”

The post Senate Democrat pushes for expansion to copyright act to include generative AI research appeared first on FedScoop.

An exemption under the Digital Millennium Copyright Act should be expanded to include generative artificial intelligence research focused specifically on embedded biases in AI systems and models, a top Senate Democrat argued in a new letter to the Library of Congress.

In the letter, shared exclusively with FedScoop, Sen. Mark Warner, D-Va., urged the LOC’s Copyright Office to expand an existing “good-faith security research” exemption to include research that falls outside traditional security concerns, such as bias, arguing that it would be the best path to ensuring a “robust security ecosystem” for tools such as generative AI.

The letter from Warner, co-chair of the Senate Cybersecurity Caucus, is in response to a petition from Jonathan Weiss, founder of the IT consulting firm Chinnu Inc., that asked the LOC to establish a new exemption to address security research on generative AI models and systems. 

A spokesperson for Warner said in an email that an expansion to the exemption rather than an entirely new exemption “is the best way to extend the existing protections that have enabled a robust cybersecurity research ecosystem to the emerging issues surrounding safe AI.”

Warner’s letter mirrors a Department of Justice response to the same petition last month. The Computer Crime and Intellectual Property Section of the DOJ’s Criminal Division wrote that “good faith research on potentially harmful outputs of AI and similar algorithmic systems should be similarly exempted from the DMCA’s circumvention provisions.”

Said Warner: “It is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse and misoperation may lead to an increased risk of physical or psychological harm.”

The Virginia Democrat, who has introduced bipartisan legislation on artificial intelligence security and emerging tech standards, pointed to the National Institute of Standards and Technology’s AI Risk Management Framework to acknowledge that AI’s risks “differ from traditional software risks in key ways,” opening the door for not only security vulnerabilities but also dangerous and biased outputs. 

The use of generative AI for fraud and non-consensual image generation is among the deceptive practices Warner noted as reasons for consumer protections, such as watermarks and content credentials. Additionally, the lawmaker asked the LOC to ensure that the potential expanded exemption “does not immunize” research that would intentionally undermine protective measures.

“Absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption,” Warner said. 

The senator also asked the LOC to include security and safety vulnerabilities, especially involving bias and additional harmful outputs, in its expanded good-faith security research definition.

In response to Warner’s letter, Weiss said in an email to FedScoop that he doesn’t “care whether the existing exemption is expanded to include research on AI bias/harmful output, or whether an entirely new exemption is created. Our main concern is to secure protections for good faith research on these emerging intelligent systems, whose inner workings even the brightest minds in the world cannot currently explain.”

The Weiss petition and letters from DOJ and Warner were prompted by the LOC Copyright Office’s ninth triennial rulemaking proceeding, which accepts public input for new exemptions to the DMCA.

Federal government affected by Russian breach of Microsoft
https://cyberscoop.com/federal-government-russian-breach-microsoft/
Thu, 04 Apr 2024 20:14:06 +0000

U.S. cybersecurity officials issued an emergency directive this week to address a breach by Russian operatives of Microsoft first disclosed in January.

The post Federal government affected by Russian breach of Microsoft appeared first on FedScoop.

Bipartisan Senate bill pushes agencies on stronger telework oversight
https://fedscoop.com/federal-agencies-telework-policies-senate-bill/
Wed, 03 Apr 2024 13:00:00 +0000

The Telework Transparency Act from Sens. Peters and Ernst requires agencies to bolster data collection on telework policies and monitor how those policies impact performance.

The post Bipartisan Senate bill pushes agencies on stronger telework oversight appeared first on FedScoop.

Four years after temporary pandemic telework policies were put in place for federal employees, a bipartisan pair of senators are seeking stronger agency oversight of the practice as return-to-office calls heat up. 

The Telework Transparency Act from Sens. Gary Peters, D-Mich., and Joni Ernst, R-Iowa, would require agencies to collect data on telework and monitor how the policies affect both agency performance and decisions on federal property. The bill, shared first with FedScoop, is intended to reveal the pros and cons of telework in the post-pandemic era. 

“Federal agencies must track and consider the impact of telework on their ability to deliver services, recruit and retain talent, and ensure office operations are cost-efficient,” Peters said in a statement. “My bipartisan bill will require agencies to gather accurate data on telework policies to provide more transparency and help ensure federal agencies are effectively carrying out their missions for the American people.” 

Referring to telework as a “remote lifestyle,” Ernst said in a statement that the practice “comes at the expense of the people federal agencies are meant to serve.” 

“For too long, Americans have been on hold while bureaucrats phone it in,” she said. “My bipartisan bill will provide full transparency into the inefficiencies of telework, so taxpayers are no longer on the hook for expensive wasted space at federal headquarters and misspent locality pay.” 

Though the Office of Personnel Management publishes a yearly report on agency telework practices, the agency said in its fiscal year 2022 report that the data is more than a year old by the time it is published and is often inaccurate and inconsistent.

Improved data collection is a major component of the legislation, which calls on agencies to use automated systems to track employees’ telework. The bill also requires OPM to set quality data standards and create and maintain a publicly available tool that shares agency telework data, using “data visualization or other data presentation techniques to support strategic executive agency workforce planning and talent management objectives.”

Agencies would also be charged with monitoring the use of federal buildings and gauging how telework impacts a variety of performance-related tasks, such as customer service, operational costs, investments in technology and recruitment and retention.

The introduction of Peters and Ernst’s legislation comes amid an increasingly concerted push across Washington, D.C., for federal workers to get back to the office. The White House has reportedly leaned on Cabinet secretaries to expedite the transition back to in-person work, while D.C. Mayor Muriel Bowser continues to urge the Biden administration to force the issue. 

During a November hearing before the House Oversight Subcommittee on Government Operations and the Federal Workforce, several agency officials reported rising rates of in-person work, while also making the case for continued telework flexibility.

“Regardless of where our employees are located, they are working,” Oren “Hank” McKnelly, executive counselor at the Social Security Administration, said during the hearing. “Telework is not one size fits all.”

Congressional offices experimenting with generative AI, though widespread adoption appears limited
https://fedscoop.com/congressional-offices-experimenting-with-generative-ai-though-widespread-adoption-appears-limited/
Tue, 26 Mar 2024 14:48:07 +0000

A handful of lawmakers indicated they’re using AI in their offices, in response to a FedScoop inquiry to House and Senate AI caucus members.

The post Congressional offices experimenting with generative AI, though widespread adoption appears limited appeared first on FedScoop.

As generative artificial intelligence tools have made their way into public use, a few offices on Capitol Hill have also begun to experiment with them. Widespread use, however, appears to be limited. 

FedScoop inquiries to every member of the House and Senate AI caucuses yielded over a dozen responses from lawmakers’ offices about whether they are using generative AI tools, as well as whether they have their own AI policies. Seven offices indicated or had previously stated that staff were using generative AI tools, five said they were not currently using the technology, and three provided a response but didn’t address whether their offices were currently using it.

The varied responses from lawmakers and evolving policies for use in each chamber paint a picture of a legislative body exploring how to potentially use the technology while remaining cautious about its outputs. The exploration of generative AI by lawmakers and staff also comes as Congress attempts to create guardrails for the rapidly growing technology.

“I have recommended to my staff that you have to think about how you use ChatGPT and other tools to enhance productivity,” Rep. Ami Bera, D-Calif., told FedScoop in an interview, pointing to responding to constituent letters as an example of an area where the process could be streamlined.

But Bera also noted that while he has accessed ChatGPT, he doesn’t often use it. “I’d rather do the human interaction,” he said.

Meanwhile, Sen. Gary Peters, D-Mich., has policies for generative AI use in both his office and the majority office of the Homeland Security and Governmental Affairs Committee, which he chairs. 

“The policy permits the use of generative AI, and provides strong parameters to ensure the accuracy of any information compiled using generative AI, protect the privacy and confidentiality of constituents, ensure sensitive information is not shared outside of secure Senate channels, and guarantee that human judgment is not supplanted,” a Peters aide told FedScoop.

And some lawmakers noted they’ve explored the technology themselves.

Rep. Scott Franklin, R-Fla., told FedScoop that when ChatGPT first became public, he asked the service to write a floor speech on the topic of the day as a Republican member of Congress from Florida. Once the machine responded, Franklin said, he joked with his communications staff that “y’all are in big trouble.”

While Franklin did not directly comment on AI use within his office during an interview with FedScoop, he did say that he’ll play with ChatGPT and doesn’t want to be “left behind” where the technology is concerned. 

House and Senate policies

As interest in the technology has grown, both House and Senate administrative arms have developed policies for generative tools. And while generative AI use is permitted in both chambers, each has its own restrictions.

The House Chief Administrative Officer’s House Digital Services purchased 40 ChatGPT Plus licenses last April to begin experimenting with the technology, and in June the CAO restricted ChatGPT use in the House to the ChatGPT Plus version only, while outlining guardrails. Axios first reported that restriction, and FedScoop independently confirmed it with a House aide.

There are also indications that work on that policy is continuing. At a January hearing, House Deputy Chief Administrative Officer John Clocker shared that the office is developing a new AI policy with the Committee on House Administration and said the CAO plans to create guidance and training for House staff.

In a statement to FedScoop, the Committee on House Administration acknowledged that offices are experimenting with AI tools — ChatGPT Plus, specifically — for research and evaluation, and noted some offices are developing “tip sheets to help guide their use.”

“This is a practice we encourage. CAO is able to work with interested offices to craft tip sheets using lessons learned from earlier pilots,” the committee said in a statement. 

The committee has also continued to focus on institutional policies for AI governance, the statement said. “Towards that end, last month we updated our 2024 User’s Guide to include mention of data governance and this month we held internal discussions on AI guardrails which included national AI experts and House Officials.”

On the Senate side, the Sergeant at Arms’ Chief Information Officer issued a notice to offices last year allowing the use of ChatGPT, Microsoft Bing Chat, and Google Bard and outlining guidance for their use. POPVOX Foundation was the first to share that document in a blog post, and FedScoop independently confirmed with a Senate aide that the policy was received in September. The document also indicated that the Sergeant at Arms CIO determined that those three tools had a “moderate level of risk if controls are followed.”

Congressional support agencies, including the Library of Congress, the Government Accountability Office and the Government Publishing Office, have also recently shared how they’re exploring AI to improve their work and services in testimony before lawmakers. Those uses could eventually include tools that support the work of congressional staff as well.

Aubrey Wilson, director of government innovation at the nonprofit POPVOX Foundation, who has written about AI use in the legislative branch, said the exploration of the technology is “really innovative for Congress.”

“Even though it might seem small, for these institutions that traditionally move slowly, the fact that you’re even seeing certain offices that have productively and proactively set these internal policies and are exploring these use cases,” Wilson said. “That is something to celebrate.”

Individual approaches

Of the offices that told FedScoop they do use the technology, most indicated that generative tools were used to assist with things like research and workflow, and a few, including Peters’ office, noted that they had their own policies to ensure the technology was being used appropriately. 

At the January Committee on House Administration hearing, Clocker, of the CAO, recommended that offices adopt their own internal policies adjusted to their preferences and risk tolerance. POPVOX Foundation has also published a guide for congressional offices establishing their own policies for generative AI tools.

The office of Rep. Glenn Ivey, D-Md., for example, received approval from the House for its AI use and encouraged staff to use the account to assist in drafting materials. But the office has also stressed that staff should use the account for work only, fact-check the outputs, and be transparent about their use of AI with supervisors, according to information provided by Ivey’s office.

“Overall, it is a tool we have used to improve workflow and efficiencies, but it is not a prominent and redefining aspect of our operations,” said Ramón Korionoff, Ivey’s communications director.

Senate AI Caucus co-chair Martin Heinrich, D-N.M., also has a policy that provides guidance for responsible use of AI in his office. According to a Heinrich spokesperson, those policies “uphold a high standard of integrity rooted in the fundamental principle that his constituents ultimately benefit from the work of people.”

Even if they don’t have their own policies yet, other offices are looking into guidelines. Staff for one House Republican, for example, noted they were exploring best practices for AI for their office.

Two House lawmakers indicated they were keeping in line with CAO guidance when asked about a policy. Rep. Ro Khanna, D-Calif., said in a statement that his “office follows the guidance of the CAO and uses ChatGPT Plus for basic research and evaluation tasks.” 

Rep. Kevin Mullin, D-Calif., on the other hand, isn’t using generative AI tools in his office but said it “will continue to follow the CAO’s guidance.”

“While Rep Mullin is interested in continuing to learn about the various applications of AI and find bipartisan policy solutions to issues that may arise from this technology, our staff is not using or experimenting with generative AI tools at this time,” his office shared with FedScoop in a written statement.

That guidance has been met with some criticism, however. Rep. Ted Lieu, D-Calif., initially pushed back on those guardrails after they were announced, arguing that the decision about what to use should be left up to individual offices. He also noted at the time that his staff were free to use the tools without restrictions.

Sen. Todd Young, R-Ind., has also previously indicated he and his staff use the technology. A spokesperson for Young pointed FedScoop to a statement the senator made last year noting that he regularly uses AI and encourages his staff to use it as well, though he said staff are ultimately responsible for the end product.

Parodies and potential uses

Some uses of generative tools have made their way into hearings and remarks, though those uses are generally tongue-in-cheek or meant to underscore the capabilities of the technology.

Sen. Chris Coons, D-Del., for example, began his remarks at a July hearing with an AI-generated parody of “New York, New York”; Sen. Richard Blumenthal, D-Conn., played an AI-generated audio clip at a May hearing that mimicked the sound of his own voice; Rep. Nancy Mace, R-S.C., delivered remarks at a March 2023 hearing written by ChatGPT; and Rep. Jake Auchincloss, D-Mass., delivered a speech on the House floor in January 2023 written by ChatGPT.

Rep. Don Beyer, D-Va., said anecdotally in an interview that he’s heard of others using it to draft press releases or speeches, though it’s not something his office uses. “This is no criticism of GPT4, but when you are looking at an enormous amount of written material, and you’re averaging it all out, you’re going to get something pretty average,” Beyer said.

Other lawmakers seemed interested in the uses of technology but haven’t yet experimented with it in their offices. 

Rep. Adriano Espaillat, D-N.Y., for example, said in an interview that while his office isn’t using AI right now, he and his staff are exploring how it could be used.

“We are looking at potential use of AI for fact-finding, for the verification of any data that we may have available to us, fact-checking matters that are important for us in terms of background information for debate,” Espaillat said, adding “but we’re not there yet.”

POPVOX Foundation’s Wilson, a former congressional staffer, said one of her takeaways from her time working in Congress was “how absolutely underwater” staff is with keeping up with information, from corresponding with federal agencies to letters from constituents. She said that generative AI could help congressional staff sort through information and data faster, which could inform data-driven policymaking.

“In a situation where Congress is not willing to give itself more people to help with the increased workflow, the idea that it’s innovatively allowing the people who are in Congress to explore use of better tools is one way that I think congressional capacity can really be aided,” Wilson said. 

Rebecca Heilweil contributed to this story.

With worries about ‘epidemic of loneliness,’ Rep. Ami Bera shares hope for House AI Task Force
https://fedscoop.com/ami-bera-ai-task-force-healthcare-loneliness/
Fri, 08 Mar 2024 20:02:42 +0000

The California Democrat said that he wants to focus on healthcare and encourage a public-private partnership as the AI Task Force moves forward.

The post With worries about ‘epidemic of loneliness,’ Rep. Ami Bera shares hope for House AI Task Force appeared first on FedScoop.

Loneliness isn’t an issue frequently brought up during Capitol Hill discussions around the development and adoption of artificial intelligence technologies. But it’s one that is of great concern to Rep. Ami Bera, D-Calif., a member of the House’s new AI Task Force.

“I do worry a lot about this epidemic of loneliness, and you’re seeing that epidemic of loneliness hit young men a lot harder,” Bera said in an interview with FedScoop. “Well, AI can have that ability of someone having a relationship with an artificial being on a computer, right? Those models already exist. Human interaction is incredibly important to our growth and development, our personality and development, our happiness. Does AI take folks into a very different place where it just continues this isolation?”

The California Democrat — who spent 20 years as a physician before his time in Congress and now serves on the House Select Subcommittee on the Coronavirus Pandemic — told FedScoop that the task force could do some “important work” with pandemic preparedness and global health security. Bera is also the ranking member of the Committee on Foreign Affairs’ Indo-Pacific Subcommittee and a member of both the National Intelligence Enterprise and National Security Agency and Cyber subcommittees under the Permanent Select Committee on Intelligence.

Bera shared that he’s thinking about how AI can enhance patient-care outcomes and the implications of generative AI for mental health, and that Congress should have a dialogue with industry leaders for education purposes. He added that he is largely focused on healthcare because that is the topic through which he believes he can make the “biggest contribution” to the task force.

“I think AI, if applied correctly, could really enhance patient-care outcomes, can enhance a physician’s ability to make the diagnosis in a much more expedient way,” Bera said. “That takes some of the administrative burden off of the physician so then they can have that doctor-patient interaction, which is what most of us went to medical school for.”

The congressman noted that physician groups are worried that the technology could change how medical personnel operate. However, some are embracing how AI tools can make things like billing and charting easier and more streamlined. 

Bera said that he isn’t starting from a place of fear of workforce displacement, “because we already have workforce concerns.” Instead, he predicted that patients won’t want to talk to a computer; they will want their provider to speak to them and interpret the information the computer is giving them.

In addition to his focus on healthcare, Bera wants the work Congress does for AI governance to involve the private sector. 

In response to AI’s developments and calls to govern and place guardrails on the technology, the representative said he would nurture public-private relationships by inviting industry leaders and doctor groups to share their insights and how they’re thinking about AI.

“I would invite industry, invite the doctor groups, invite the technology groups to get a sense of how are they thinking about AI,” Bera said. “To also, get a sense of what are the questions we should be asking…I think it should be a public-private partnership.”

Election integrity and ‘digital liberty’ are top of mind for House AI task force member Kat Cammack
https://fedscoop.com/election-integrity-and-digital-liberty-kat-cammack/
Mon, 04 Mar 2024 22:07:02 +0000

The Florida Republican said the 2024 election cycle “is going to be America’s first real up-close encounter with AI in a bad way.”

The post Election integrity and ‘digital liberty’ are top of mind for House AI task force member Kat Cammack appeared first on FedScoop.

As one of the younger members of this Congress, Rep. Kat Cammack grew up in both the analog and digital eras, a fact that has led her to jokingly refer to her office as “effectively House IT,” where other lawmakers who need tech-related help come with tasks such as resetting their iPhones.  

The Florida Republican will have a chance to burnish her tech-focused reputation as one of 12 members of her party appointed to the House AI task force. Cammack, who serves on the House Energy and Commerce subcommittees on Communications and Technology, and on Innovation, Data and Commerce, said in an interview with FedScoop that the House AI task force will have work to do this election season.

The 2024 cycle “is going to be America’s first real up-close encounter with AI in a bad way,” Cammack said, calling on Congress to first approach AI as a “philosophical product” and engage with private sector leaders. Cammack, a member of the House Rural Broadband and Blockchain Caucuses, added that she “would love to see” the Federal Election Commission put together “top concerns” and work to establish guardrails around AI where it has the authority to do so, with Congress asked to fill in the remaining gaps. 

“These administration officials have tremendous latitude in how they can react in real time, and I feel like sometimes you have agencies that overreact and you have some that stand down,” Cammack said. “This is not a situation where we want them to stand down. We want folks to go into the polling booth and feel like they’re very confident that there’s not going to be interference, that they haven’t been lied to and that everything is as it seems.”

Cammack pointed to past elections in which voters received text messages falsely claiming that a candidate was dropping out of the race. She also acknowledged the ease with which bad actors can remove watermarks from AI content, pointed to a proposed solution for AI-generated content that has the potential to confuse voters, and spoke about the rise in the use of deepfake technology.

The FEC has indicated that it is reviewing public comments on the use of AI in campaign ads and that the agency plans to “resolve the AI rulemaking by early summer,” according to the Washington Post.

Cammack noted her concerns about the federal government coming in with “a heavy hand” on AI matters and stifling “innovation and development.” She’d like to see private sector providers share what they are developing with Congress and what they envision for AI’s future. 

“I don’t want us to overregulate because I’m fearful that that will stamp out innovation. I’m fearful that if you don’t address the philosophical issue in these language models, that we’re gonna see real implications immediately and long term,” Cammack said. “AI is not going anywhere; it’s going to be a very big part of every aspect of our lives for the foreseeable future. So we have to make sure that we’re doing everything right on the front end. … For once, we need to actually force the government to look at private sector and say, ‘tell us what you know so we can be better.’”

Cammack said that when someone asks ChatGPT a question, the answer will reflect natural bias and “we want equal opportunity for people to use these systems with the understanding that there’s not going to be an equal outcome, but it’s going to be a truthful one,” adding that language models need “to be a position of digital liberty versus digital authoritarianism.”

“If we don’t approach the philosophical development of the language model, the brain, with a mindset of those basic values and tenets — equal opportunity, freedom, liberty, diversity of thought, expression [and] constitutional protections — then we are going to end up with what we currently have today,” Cammack said. “Which is, a system that will write a poem about Nancy Pelosi but not Donald Trump, where it paints conservatives in a harsh light but a glowing light when it comes to a Democrat.”

As House task force work begins, Rep. Bonamici is ‘very worried’ about AI — ‘and we all should be’
https://fedscoop.com/ai-task-force-work-rep-bonamici/
Mon, 26 Feb 2024 18:45:09 +0000

In a Q&A with FedScoop, the Oregon Democrat discusses her legislative priorities with the task force, as well as her focus on the need to address bias, lack of consent, discrimination and privacy issues with the technology.

The post As House task force work begins, Rep. Bonamici is ‘very worried’ about AI — ‘and we all should be’ appeared first on FedScoop.

Rep. Suzanne Bonamici is no stranger to high-level, bipartisan tech discussions on Capitol Hill, having assisted in the negotiation and passage of the CHIPS and Science Act and co-founded the Science, Technology, Engineering, Arts and Mathematics Caucus. The Oregon Democrat’s next assignment, as a member of the new House AI task force, could be her most consequential.

Bonamici, one of 12 Democrats appointed to the 24-member House AI task force announced last week by Speaker Mike Johnson, R-La., and Minority Leader Hakeem Jeffries, D-N.Y., said in an interview with FedScoop that her focus will be on the ethical use of AI, pointing to a need to address bias, lack of consent, discrimination and privacy issues.

The congresswoman also revealed that she is working on a piece of legislation that mirrors the Senate’s “No Robot Bosses Act of 2024.” The House version, which is set to be introduced in a matter of weeks, addresses the risks of job displacement as AI is implemented in practical applications across industry, according to an email shared with FedScoop. 

Bonamici spoke with FedScoop about her legislative priorities with the task force, why it’s important that AI regulation is a bipartisan effort and her concerns about the technology.

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: I know that AI has remained nonpartisan, and I wanted to know, where do you see the biggest difference between Democrats and Republicans on AI regulation and AI work?

Rep. Suzanne Bonamici: I have, in my dozen years of Congress, always tried to find common ground, and I’m convinced that we’ll be able to do that on the task force. … I think we are there as a bipartisan task force to actually address the issues that our constituents are asking about [such as] responsible use and ethical development of artificial intelligence, which has been something that I’ve been talking about for years, and then let’s figure out the regulatory structure. Those are two issues that I’ve been asking about in more informal settings, and I look forward to working on them in the task force. 

FS: How worried are you about AI-generated content this election season?

SB: Oh, I’m very worried — and we all should be. We have already seen examples of problems, whether it be deepfakes or theft of someone’s identity and making it sound like a candidate sending a message. So I’m hopeful that there are enough people out there looking at this and monitoring it and calling it out. I just don’t [know] what the remedy is going to be. And in the long term, I am very supportive of education and media literacy — to help people recognize when content is AI-generated. I just heard a story the other day about how even young kids, they think there’s a little person in an [Amazon] Alexa, they draw it with the face of a person. Early on, age-appropriate education so people know what to look for. 

FS: Is that the AI issue that worries you most? The ethical dilemmas and the threats that those pose regarding AI?

SB: Depending on how you’re defining ethical dilemmas, which is what I’ve been asking. Sometimes in these briefings, you come out with more questions than answers, but people have different definitions of what ethical AI means. I know that there’s tremendous potential, but I also know there are risks. I mentioned the privacy concerns, algorithmic bias, job displacement — lots of questions about that — and then, of course, all the nefarious uses we were seeing with deepfakes. I’m interested in the energy use and the vast amount of energy that this takes. In fact, I had a meeting [insert day of the week] about some work that’s being done to make running these models more energy-efficient. 

FS: What are some lessons learned, perhaps from social media and data privacy, that you are keeping in mind when it comes to placing guardrails for artificial intelligence?

SB: I want to start by saying that whenever we regulate around technology, we have to do it in a way that provides the needed protections but does not stifle innovation, because we don’t want to hold back the good potential, which is challenging. But I think that’s one of the reasons why the bipartisan task force was set up by the Leader and the Speaker, because they realized that this is urgent. And we know with social media, now people are saying [that] this is dangerous to some and we really have to look at the experts and what is best for especially young people, and keep that in mind. … I don’t think we can have 50 different systems; I think it needs to be done at the federal level.

FS: We’ve seen a lot of voluntary commitments from companies where AI is concerned, and I know that you are advocating for fair competition across the private sector. How can Congress ensure that smaller tech companies can compete with big tech companies?

SB: Obviously look at anti-competitive behavior and [we] have [the Federal Trade Commission] and [the Department of Justice] to do that, and they are working on those and I’m sure that anti-competitive behavior that falls under existing antitrust laws is important to look at. Expanding opportunities for small industries is going to be really important. I’m out here in Northwest Oregon, which is known as Silicon Forest, where we have a lot of big but also small semiconductor companies and all the ancillary businesses that go with them. So we are really working hard out here to develop a workforce. I was just hearing about a partnership [insert day] with one of the tech hubs and some of our research universities but also smaller colleges and universities and workforce boards. I think there are many ways that we can look to not only increase opportunities but also increase diversity, which is really critical in the workforce. 

FS: Of course, as you’re well aware, digital literacy and the broadband gap remain a major problem. How worried are you about AI and AI-generated content exacerbating those digital inequities?

SB: It’s a possibility, but we’ve been working hard with filling the gaps. It was exacerbated … by the pandemic when all of a sudden, students have to learn online and there’s a lot of places [that] didn’t have connectivity or devices. So we’ve been working on closing those gaps and I think we’ve made some progress with that and expanding connectivity.

The post As House task force work begins, Rep. Bonamici is ‘very worried’ about AI — ‘and we all should be’ appeared first on FedScoop.

Bipartisan Senate proposal calls for AI workforce framework from NIST https://fedscoop.com/bipartisan-senate-proposal-calls-for-ai-workforce-framework/ Tue, 13 Feb 2024 21:30:06 +0000 https://fedscoop.com/?p=76037 The new legislation would direct NIST to develop a workforce framework for artificial intelligence and explore frameworks for other emerging and critical technology roles.

The post Bipartisan Senate proposal calls for AI workforce framework from NIST appeared first on FedScoop.

A new bipartisan Senate bill seeks to improve the U.S. pipeline for jobs in artificial intelligence and other emerging technologies through the development of a workforce framework from the National Institute of Standards and Technology.

The “AI and Critical Technology Workforce Framework Act,” introduced by Sens. Gary Peters, D-Mich., and Eric Schmitt, R-Mo., would direct NIST to create a workforce framework for AI and assess whether other critical or emerging technology areas might also benefit from frameworks, according to bill text and a release provided to FedScoop.

“As artificial intelligence continues to play a bigger role in our society, it’s critical the future of this groundbreaking technology is formed in the United States. The way to ensure that happens is by building a workforce engaged in these new technologies,” Peters, chairman of the Senate Homeland Security and Governmental Affairs Committee, said in a written statement.

The bill is intended to build upon NIST’s existing National Initiative for Cybersecurity Education (NICE) framework — which outlines cybersecurity roles in an effort to help employers build their cyber workforces — as AI is poised to reshuffle the workforce.

Over the next five years, demand for AI and machine learning specialists is expected to increase by 40%, according to a 2023 World Economic Forum report on workforce trends across the world. 

“This bill will ensure that America continues to have a strong and increasingly skilled workforce, will utilize AI to bolster American industry, and will incentivize companies to keep their jobs in the United States rather than outsourcing them overseas,” Schmitt said in a written statement. “Additionally, this bill’s potential to benefit our defense capabilities is endless.”

Under the bill, NIST would be required to report to Congress about other critical and emerging technology areas it finds could benefit from a workforce framework. It would also direct NIST to update the NICE framework to reflect changes in the cybersecurity field and “encourage” the agency to provide resources and guidance on cybersecurity careers to students and adults, according to the release.

Nuclear Regulatory Commission CIO David Nelson set to retire https://fedscoop.com/nuclear-regulatory-commission-cio-david-nelson-set-to-retire/ Wed, 24 Jan 2024 23:34:34 +0000 https://fedscoop.com/?p=75717 Scott Flanders, the NRC’s deputy chief information officer, will serve as the acting CIO and acting chief AI officer until a permanent one is selected.

The post Nuclear Regulatory Commission CIO David Nelson set to retire appeared first on FedScoop.

The Nuclear Regulatory Commission’s chief information officer, David Nelson, will be retiring at the end of the week, according to an agency spokesperson. 

In an email to FedScoop, the NRC spokesperson said Nelson will be leaving the agency effective Jan. 26. Taking his place as acting chief AI officer and CIO is Scott Flanders, the commission’s current deputy CIO. 

Nelson was appointed as the regulatory agency’s CIO in 2016, leaving his previous position as CIO and director of the Office of Enterprise Information for the Centers for Medicare and Medicaid Services. 

Nelson was recently named the NRC’s CAIO in light of President Joe Biden’s long-awaited executive order on AI. While the order did not list the NRC among the agencies required to eventually name a CAIO, the commission previously told FedScoop it was “assessing whether and how it applies.”

Additionally, the NRC spokesperson confirmed that Victor Hall, the deputy director of the Division of Systems Analysis in the Office of Nuclear Regulatory Research, serves as the responsible AI official under Executive Order 13960, issued by the Trump administration. The NRC was also exempted from that requirement as an independent regulatory agency.

First crack at comprehensive AI legislation coming early 2024 from Senate Commerce Chair Cantwell https://fedscoop.com/bipartisan-ai-legislation-senate-commerce-committee-cantwell/ Thu, 11 Jan 2024 17:14:46 +0000 https://fedscoop.com/?p=75558 Sources tell FedScoop that the Washington Democrat will introduce a series of bipartisan bills related to artificial intelligence issues in the coming weeks.

The post First crack at comprehensive AI legislation coming early 2024 from Senate Commerce Chair Cantwell appeared first on FedScoop.

Senate Commerce Committee Chair Maria Cantwell is readying a series of significant bipartisan bills related to artificial intelligence, pairing regulation of popular generative AI tools with initiatives to boost innovation, in what would be the first truly comprehensive legislative effort in Congress to tackle AI.

Cantwell, a Democrat from Washington, is expected in the coming weeks to introduce the legislation with a series of bills related to relevant AI issues like deepfakes, jobs and training, algorithmic bias, digital privacy, national security, and AI innovation and competitiveness, according to Cantwell’s staff and four sources familiar with the legislative effort.

The comprehensive series of AI bills has the support and blessing of Senate Majority Leader Chuck Schumer, D-N.Y., who has tapped multiple Senate committee chairs to lead on introducing and debating major AI legislation after the culmination of his bipartisan AI Insight Forums last year, three sources familiar with the legislative effort told FedScoop.  

“The AI bills won’t come out all at the same time; they’ll be dropped in a series, in a staggered fashion, but we’re aiming for the next few weeks and months as soon as possible,” a senior legislative aide for the Senate Commerce Committee majority staff told FedScoop. “It’s a top priority for the senator especially because other countries and the U.S. need to be ahead on AI policy and AI competitiveness.

“Senate Commerce has the primary or at least very important jurisdiction on AI policy and a majority of AI policy is already coming out of our committee. Many bills have been referred to us, so we want to build upon that and work with Republicans to put out something that can move,” the senior aide added.

Cantwell announced at various points in 2023 that she’s working on introducing AI-related bills, including legislation on threats posed by deepfakes, a federal privacy bill targeting AI discrimination, a reskilling “GI bill” for AI, as well as legislation on potential disruptions to jobs and education posed by AI. 

She has yet to actually introduce any AI legislation, but has made it a priority for herself and the Senate Commerce Committee in the next few months. 

Two AI scholars familiar with legislative efforts in Congress told FedScoop that they expect Cantwell’s comprehensive AI legislation to start with the introduction of bills that focus on a few areas of shared bipartisan interest.

“The low-lying fruits are AI bills related to deepfakes in a narrow fashion, AI research and development, consumer fraud, and workers displaced by AI,” said Samuel Hammond, a senior economist focused on AI policy at the Foundation for American Innovation, a tech-focused, libertarian-leaning think tank previously known as the Lincoln Network.

“The vibes are there is some agreement but nothing that’s clearly going to go all the way,” Hammond added. “It wouldn’t surprise me given this is the Commerce Committee that Cantwell uses the bills to follow up on the CHIPS and Science Act, to get the most bang for their buck.”

Daniel Colson, the founder and executive director of the AI Policy Institute, said he expects Cantwell’s series of comprehensive AI bills to focus first on bias and discrimination caused by AI, followed by legislation to address the workers most displaced by the technology, such as language translators. There could also be bills to regulate the most extreme risks posed by large AI models that cost $10 billion or more, he said. 

Three AI scholars familiar with Cantwell’s AI legislative efforts said the package could include between $8 billion and $10 billion in spending related to AI policymaking.

Gathering bipartisan momentum for any major AI legislative effort has proven challenging, given the chasm between Democrats and Republicans in Congress and within the Senate Commerce Committee in particular. 

“Republican Commerce Committee staff said at a meeting with some of us recently that ‘we’re just going to hold the line’ on all AI-related legislation,” a senior AI scholar who met with Senate Republican Commerce staffers at the end of 2023 told FedScoop. The source added that Commerce Committee ranking member Ted Cruz of Texas and other Republican members appear to favor Silicon Valley venture capitalist Marc Andreessen’s anti-regulation stance, and have expressed aversion toward doing “anything on AI proactively in contrast to the Democrats.”  

Some AI experts plugged into the legislative efforts on Capitol Hill who participated in Schumer’s bipartisan AI Insight Forums would like to see Cantwell’s comprehensive bills focus on a narrow set of key issues where there has already been agreement within both major parties.

“I think if we pursue the path of bipartisanship, we should be focused on, how do we stay ahead when it comes to AI and the investments needed?” Ylli Bajraktari, CEO of the nonprofit Special Competitive Studies Project, told FedScoop.

Bajraktari said that if the bills contain too many requests for more government spending, “then you’ll have these cracks of people defecting. But if the bill maintains focus on our national security, staying ahead in innovation, and the U.S. continuing to lead, then I think that increases the chances that comprehensive bills will be bipartisan and passable.”

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, which represents major tech players including Adobe, Apple and Google, said it’s important that future legislative efforts follow “the bipartisan spirit” of Schumer’s AI Insight Forums. 

“It should promote and incentivize safe and trustworthy AI, mitigate potential harms to rights and safety, while allowing for continued innovation,” Lekas told FedScoop. “We encourage Congress to pass legislation establishing a nationwide standard for AI that advances public trust in the digital ecosystem, consumer confidence in AI tools, continued innovation, and U.S. competitiveness. And it should begin that effort by passing a comprehensive federal privacy bill, because AI is only as good and reliable as the data that goes into it.”

Editor’s note 1/11/2024 at 4:50 p.m.: This story was updated to reflect the rebranding of the Lincoln Network to the Foundation for American Innovation.
