Nihal Krishan Archives | FedScoop
https://fedscoop.com/author/nihal-krishan/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community’s platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

First crack at comprehensive AI legislation coming early 2024 from Senate Commerce Chair Cantwell
https://fedscoop.com/bipartisan-ai-legislation-senate-commerce-committee-cantwell/ | Thu, 11 Jan 2024

Sources tell FedScoop that the Washington Democrat will introduce a series of bipartisan bills related to artificial intelligence issues in the coming weeks.

Senate Commerce Committee Chair Maria Cantwell is readying a series of significant bipartisan bills on artificial intelligence that would pair regulation of popular generative AI tools with initiatives to boost innovation, making it the first truly comprehensive legislative push in Congress to tackle AI.

Cantwell, a Democrat from Washington, is expected in the coming weeks to begin introducing the legislation, a series of bills addressing AI issues including deepfakes, jobs and training, algorithmic bias, digital privacy, national security, and AI innovation and competitiveness, according to Cantwell’s staff and four sources familiar with the legislative effort.

The comprehensive series of AI bills has the support and blessing of Senate Majority Leader Chuck Schumer, D-N.Y., who has tapped multiple Senate committee chairs to lead on introducing and debating major AI legislation after the culmination of his bipartisan AI Insight Forums last year, three sources familiar with the legislative effort told FedScoop.  

“The AI bills won’t come out all at the same time; they’ll be dropped in a series, in a staggered fashion, but we’re aiming for the next few weeks and months as soon as possible,” a senior legislative aide for the Senate Commerce Committee majority staff told FedScoop. “It’s a top priority for the senator especially because other countries and the U.S. need to be ahead on AI policy and AI competitiveness.

“Senate Commerce has the primary or at least very important jurisdiction on AI policy and a majority of AI policy is already coming out of our committee. Many bills have been referred to us, so we want to build upon that and work with Republicans to put out something that can move,” the senior aide added.

Cantwell announced at various points in 2023 that she’s working on introducing AI-related bills, including legislation on threats posed by deepfakes, a federal privacy bill targeting AI discrimination, a reskilling “GI bill” for AI, as well as legislation on potential disruptions to jobs and education posed by AI. 

She has yet to actually introduce any AI legislation, but has made it a priority for herself and the Senate Commerce Committee in the next few months. 

Two AI scholars familiar with legislative efforts in Congress told FedScoop that they expect Cantwell’s comprehensive AI legislation to start with the introduction of bills that focus on a few areas of shared bipartisan interest.

“The low-lying fruits are AI bills related to deepfakes in a narrow fashion, AI research and development, consumer fraud, and workers displaced by AI,” said Samuel Hammond, a senior economist focused on AI policy at the Foundation for American Innovation, a tech-focused, libertarian-leaning think tank previously known as the Lincoln Network.

“The vibes are there is some agreement but nothing that’s clearly going to go all the way,” Hammond added. “It wouldn’t surprise me given this is the Commerce Committee that Cantwell uses the bills to follow up on the CHIPS and Science Act, to get the most bang for their buck.”

Daniel Colson, the founder and executive director of the AI Policy Institute, said that he expects Cantwell’s series of comprehensive AI bills to focus first on bias and discrimination caused by AI, followed by legislation to address the workers most displaced by AI, such as language translators. There could also be bills to regulate the most extreme risks posed by large AI models costing $10 billion or more, he said.

Three AI scholars familiar with Cantwell’s AI legislative efforts said the legislation could include a spending package related to AI policymaking between $8 billion and $10 billion.

Gathering bipartisan momentum for any major AI legislative effort has proven challenging, given the chasm between Democrats and Republicans in Congress and within the Senate Commerce Committee in particular. 

“Republican Commerce Committee staff said at a meeting with some of us recently that ‘we’re just going to hold the line’ on all AI-related legislation,” a senior AI scholar who met with Senate Republican Commerce staffers at the end of 2023 told FedScoop. The source added that Commerce Committee ranking member Ted Cruz of Texas and other Republican members appear to favor Silicon Valley venture capitalist Marc Andreessen’s anti-regulation stance, and have expressed aversion toward doing “anything on AI proactively in contrast to the Democrats.”  

Some AI experts plugged into the legislative efforts on Capitol Hill who participated in Schumer’s bipartisan AI Insight Forums would like to see Cantwell’s comprehensive bills focus on a narrow set of key issues where there has already been agreement within both major parties.

“I think if we pursue the path of bipartisanship, we should be focused on, how do we stay ahead when it comes to AI and the investments needed?” Ylli Bajraktari, CEO of the nonprofit Special Competitive Studies Project, told FedScoop.

Bajraktari said that if the bills contain too many requests for more government spending, “then you’ll have these cracks of people defecting. But if the bill maintains focus on our national security, staying ahead in innovation, and the U.S. continuing to lead, then I think that increases the chances that comprehensive bills will be bipartisan and passable.”

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, which represents major tech players including Adobe, Apple and Google, said it’s important that future legislative efforts follow “the bipartisan spirit” of Schumer’s AI Insight Forums. 

“It should promote and incentivize safe and trustworthy AI, mitigate potential harms to rights and safety, while allowing for continued innovation,” Lekas told FedScoop. “We encourage Congress to pass legislation establishing a nationwide standard for AI that advances public trust in the digital ecosystem, consumer confidence in AI tools, continued innovation, and U.S. competitiveness. And it should begin that effort by passing a comprehensive federal privacy bill, because AI is only as good and reliable as the data that goes into it.”

Editor’s note 1/11/2024 at 4:50 p.m.: This story was updated to reflect the rebranding of the Lincoln Network to the Foundation for American Innovation.

National Science Foundation picks new CIO as part of CHIPS Act IT reorganization
https://fedscoop.com/national-science-foundation-picks-new-cio-as-part-of-chips-act-it-reorganization/ | Thu, 04 Jan 2024

NSF is establishing a new Office of the Chief Information Officer (OCIO) to consolidate resources as part of the CHIPS and Science Act of 2022.

The National Science Foundation on Wednesday announced a major reorganization of its IT functions, including the appointments of a new chief information officer, chief technology officer, chief data officer and assistant CIO for artificial intelligence in support of the 2022 CHIPS Act.

Terry Carpenter will take over the key role of CIO and CTO for the NSF, marking the establishment of a new independent and consolidated Office of the Chief Information Officer (OCIO). 

Dorothy Aronson is NSF’s new chief data officer and assistant CIO for artificial intelligence, while Dan Hofherr is the new chief information security officer and assistant CIO for operations, and Teresa Guillot is assistant CIO for enterprise services. 

“I am confident that the reorganization of our IT functions will propel NSF to new heights of innovation and efficiency,” NSF Director Sethuraman Panchanathan said in a statement. “This strategic initiative reflects our solid commitment to delivering unparalleled IT services and solutions across the agency.” 

The IT revitalization within NSF is meant to support the mission of the CHIPS and Science Act of 2022, which provides roughly $52.7 billion to drive semiconductor research, development, manufacturing and workforce development in the U.S.

Of that total, $39 billion is included for manufacturing incentives and $13.2 billion is for R&D and workforce development, according to the White House.

The establishment of the new OCIO office signifies NSF’s aim to adapt to evolving industry best practices and cutting-edge technologies using new tools, resources and expertise.

It also supports NSF’s push to further President Joe Biden’s priorities for federal agencies to use AI responsibly and protect information through cybersecurity practices. 

AI watermarking could be exploited by bad actors to spread misinformation. But experts say the tech still must be adopted quickly
https://fedscoop.com/ai-watermarking-misinformation-election-bad-actors-congress/ | Wed, 03 Jan 2024

As Washington putters on AI watermarking legislation, TikTok and Adobe are leading the way with transparency standards.

By and large, government and private-sector technologists agree that the use of digital watermarking to verify AI-generated content should be a key component for tackling deepfakes and other forms of malicious misinformation and disinformation. 

But there is no clear consensus regarding what a digital watermark is, or what common standards and policies around it should be, leading many AI experts and policymakers to fear that the technology could fall short of its potential and even empower bad actors.

Industry groups and a handful of tech giants — most notably TikTok and Adobe — have been singled out by experts as leading the charge on AI watermarking and embracing a transparent approach to the technology. They’ll need all the help they can get during what promises to be an especially chaotic year in digital spaces. 

With over 2 billion people expected to vote in elections around the world in 2024, AI creators, scholars and politicians said in interviews with FedScoop that standards on the watermarking of AI-generated content must be tackled in the coming months — or else the proliferation of sophisticated, viral deepfakes and fake audio or video of politicians will continue unabated.

“This idea of authenticity, of having authentic trustworthy content, is at the heart of AI watermarking,” said Ramayya Krishnan, dean of Carnegie Mellon University’s information systems and public policy school and a member of President Joe Biden’s National Artificial Intelligence Advisory Committee. 

“Having a technological way of labeling how content was made and having an AI detection tool to go with that would help, and there’s a lot of interest in that, but it’s not a silver bullet,” he added. “There’s all sorts of enforcement issues.” 

Digital watermarking “a triage tool for harm reduction”

There are three main types of watermarks created by major tech companies and AI creators to reduce misinformation and build trust with users: visible watermarks added to images, videos or text by companies like Google, OpenAI or Getty to verify the authenticity of content; invisible watermarks that can only be detected through special algorithms or software; and cryptographic metadata, which details when a piece of content was created and how it has been edited or modified before someone consumes it.

Using watermarking to try and reduce AI-generated misinformation and disinformation can be helpful when the average consumer is viewing a piece of content, but it can also backfire. Bad actors can manipulate a watermark and create even more misinformation, AI experts focused on watermarking told FedScoop.

“Watermarking technology has to be taken with a grain of salt because it is not so hard for someone with a knowledge of watermarks and AI to be able to break it and remove the watermark or manufacture one,” said Siwei Lyu, a University at Buffalo computer science professor who studies deepfakes and digital forgeries. 

Lyu added that digital watermarking is “not foolproof” and invisible watermarks are often more effective, though not without their flaws. 

“I think watermarks mostly play on people’s unawareness of their existence. So if they know they can, they will find a way to break it.”
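To make that fragility concrete, here is a deliberately naive sketch of the kind of invisible watermark described above: a message hidden in the least-significant bits of pixel values. This is an illustration only (the function names and image data are invented for the demo, and no vendor's scheme works this simply), but the final "attack" shows how little it takes to erase such a mark once you know it is there.

```python
# A deliberately naive least-significant-bit (LSB) watermark over a
# grayscale image, modeled here as a flat list of 0-255 pixel values.
# All names are invented for this sketch; real schemes are more robust.

def embed(pixels, message: bytes):
    """Hide the message, bit by bit, in the lowest bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the low bit
    return marked

def extract(pixels, n_bytes: int) -> bytes:
    """Read the low bits back out of the first n_bytes * 8 pixels."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

def scrub(pixels):
    """The 'attack': zeroing every low bit erases the mark wholesale."""
    return [p & ~1 for p in pixels]

image = [(37 * i) % 256 for i in range(64)]   # stand-in for real pixels
marked = embed(image, b"FEDSCOOP")
print(extract(marked, 8))          # b'FEDSCOOP': the hidden mark
print(extract(scrub(marked), 8))   # all zero bytes: the mark is gone
```

Production watermarks spread the signal redundantly and try to survive recompression and cropping, but the cat-and-mouse dynamic the staffer describes below is the same.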

A senior Senate independent staffer deeply involved in drafting legislation related to AI watermarking said the concern of bad actors using well-intentioned watermarks for manipulative purposes is “1,000% valid. It’s like Olympic athletes — now that I know that you’re looking for this drug, I’ll just take another drug. It’s like we need to try our best we can to keep pace with the bad actors.”

When it comes to AI watermarking, the Senate is currently in an “education and defining the problem” phase, the senior staffer said. Once the main problems with the technology are better defined, the staffer said they’ll begin to explore whether there is a legislative fix or an appropriations fix.

Senate Majority Leader Chuck Schumer said in September that tackling fake or deceptive AI-generated content, which can lead to widespread misinformation and disinformation, was an exceedingly time-sensitive problem ahead of the 2024 election.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” the New York Democrat said after his first closed-door AI Insight Forum.

“We talked about watermarking … that one has a quicker timetable maybe than some of the others, and it’s very important to do,” he added.

Another AI expert said that watermarking can be manipulated by bad actors in a small but highly consequential number of scenarios. Sam Gregory, executive director at the nonprofit WITNESS, which helps people use technology to promote human rights, said it’s best to think of AI watermarking as “almost a triage tool for harm reduction.” 

“You’re making available a greater range of signals on where content has come from that works for 95% of people’s communication,” he said. “But then you’ve got like 5% or 10% of situations where someone doesn’t use the watermark to conceal their identity or strip out information or perhaps they’re a bad actor. 

“It’s not a 100% solution,” Gregory added.

TikTok, Adobe leading the way on watermarking

Among major social media platforms, Chinese-owned TikTok has taken an early lead on watermarking, requiring users to be highly transparent when AI tools and effects are used within their content, three AI scholars told FedScoop. Furthermore, the company has created a culture of encouraging users to be comfortable with sharing the role that AI plays in altering their videos or photos in fun ways.

“TikTok shows you the audio track that was used, it shows you the stitch that was made, it shows you the AI effects used,” Gregory said. And as “the most commonly used platform by young people,” TikTok makes it “easy and comfortable to be transparent about how a piece of content was made with presence of AI in the mix.” 

TikTok recently announced new labels for disclosing AI-generated content. In a statement, the social media platform said that its policy “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content. Creators can now do this through the new label (or other types of disclosures, like a sticker or caption).”

Major AI developers, including Adobe and Microsoft, also support some forms of labeling AI in their products. Both tech giants are members of the Coalition for Content Provenance and Authenticity (C2PA), which addresses the prevalence of misinformation online through the development of technical standards for certifying the source and history of online content.

Jeffrey Young, a senior solutions consultant manager at Adobe, said the company has “had a big drive for the content authenticity initiative” due in large part to its awareness that bad actors use Photoshop to manipulate images “for nefarious reasons.” 

“We realized that we can’t keep getting out in front to determine if something is false, so we decided to flip it and say, ‘Let’s have everybody expect to say this is true,’” Young said. “So we’re working with camera manufacturers, working with websites on their end product, that they’re able to rollover that image and say, this was generated by [the Department of Homeland Security], they’ve signed it, and this is confirmed, and it hasn’t been manipulated since this publication.”
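The rollover-and-verify flow Young describes rests on a hash-and-sign pattern. The sketch below is a toy illustration of that idea only: real C2PA manifests use certificate-based public-key signatures and a binary container format, and every name here (the key, the functions, the fields) is invented for the demo, with an HMAC standing in for the signature.

```python
# Toy provenance record in the spirit of the hash-and-sign approach
# behind C2PA. Illustrative only; not the actual C2PA format or API.

import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-a-real-credential"  # placeholder secret

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Bind a hash of the content to claims about who made it and how."""
    record = {
        "creator": creator,
        "tool": tool,  # e.g. which camera, editor or AI model
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content is unchanged."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

photo = b"...image bytes..."
m = make_manifest(photo, creator="DHS", tool="camera-raw")
print(verify(photo, m))            # True: untouched since signing
print(verify(photo + b"edit", m))  # False: content changed after signing
```

Because the signature covers a hash of the content, any edit after signing invalidates the manifest, which is what lets a site assert that an image "hasn't been manipulated since this publication."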

Most major tech companies are in favor of labeling AI content through watermarking and are working to create transparent watermarks, but the tech industry recognizes that watermarking alone is a simplistic solution, and other actions must also be taken to comprehensively reduce AI-generated misinformation online. 

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, said the trade group — which represents Amazon, Apple and Google, among others — is “very supportive” of watermarking labeling and provenance authentication but acknowledges that those measures do “not solve all the issues that are out there.” 

“Ideally we’d have a system where everything would be clear and transparent, but we don’t have that yet,” Lekas said. “I think another thing that we are very supportive of is nontechnical, which is literacy — media literacy, digital literacy for people — because we can’t just rely on technology alone to solve all of our problems.”

In Washington, some momentum on AI watermarking

The White House, certain federal agencies and multiple prominent members of Congress have made watermarking and the reduction of AI-generated misinformation a high priority, advancing a patchwork of proposed solutions to regulate AI and create policy safeguards around deepfakes and other manipulative content.

Through Biden’s October AI executive order, the Commerce Department’s National Institute of Standards and Technology has been charged with creating authentication and watermarking standards for generative AI systems — following up on discussions in the Senate about similar kinds of verification technologies.

Alondra Nelson, the former White House Office of Science and Technology Policy chief, said in an interview with FedScoop that there is enough familiarity with watermarking that it is no longer “a completely foreign kind of technological intervention or risk mitigation tactic.”

“I think that we have enough early days experience with watermarking that people have to use,” she said. “You’ve got to use it in different kinds of sectors for different kinds of concerns, like child sexual abuse and these sorts of things.” 

Congress has also introduced several pieces of legislation related to AI misinformation and watermarking, such as a bill from Rep. Yvette Clarke, D-N.Y., to regulate deepfakes by requiring content creators to digitally watermark certain content and make it a crime to fail to identify malicious deepfakes that are related to criminal conduct, incite violence or interfere with elections.

In September, Sens. Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new bipartisan legislation focused on banning the use of deceptive AI-generated content in elections. In October, Sens. Brian Schatz, D-Hawaii, and John Kennedy, R-La., introduced the bipartisan AI Labeling Act of 2023, which would require clear labeling and disclosure on AI-generated content and chatbots so consumers are aware when they’re interacting with any product powered by AI.

Meanwhile, the Federal Election Commission has been asked to establish a new rule requiring political campaigns and groups to disclose when their ads include AI-generated content.

In the absence of any AI legislation within Congress becoming law or garnering significant bipartisan consensus, the White House has pushed to get tech giants to sign voluntary commitments governing AI, which require steps such as watermarking AI-generated content. Adobe, IBM, Nvidia and others are on board. The private commitments backed by the Biden administration are seen as a stopgap. 

From Nelson’s point of view, NIST’s work on the creation of AI watermarking standards will “be taking it to another level.” 

“One hopes that CIOs and CTOs will take it up,” she said. “That remains to be seen.”

FDA cybersecurity agreement on medical devices needs updating, watchdog finds
https://fedscoop.com/fda-cisa-medical-devices-cybersecurity-agreement-updated-gao/ | Tue, 26 Dec 2023

GAO report says FDA’s pact with CISA on cybersecurity protocols for medical devices is five years old and needs to be updated.

Medical devices like heart monitors, which are under the purview of the Food and Drug Administration, have cybersecurity vulnerabilities that aren’t frequently exploited but nevertheless pose risks to hospital networks and patients, according to a recent watchdog report.

The Government Accountability Office highlighted that the FDA’s formal agreement with the Cybersecurity and Infrastructure Security Agency on medical device cybersecurity is five years old and needs to be updated, a move that would improve coordination between the agencies and clarify their responsibilities.

“According to the Department of Health and Human Services (HHS), available data on cybersecurity incidents in hospitals do not show that medical device vulnerabilities have been common exploits,” the GAO report stated. 

“Nevertheless, HHS maintains that such devices are a source of cybersecurity concern warranting significant attention and can introduce threats to hospital cybersecurity.”

The GAO report found that the FDA’s authority over medical device cybersecurity has increased in recent years. This is attributable to December 2022 legislation mandating that medical device manufacturers submit to the FDA their plans to identify and address cybersecurity vulnerabilities for any new medical device introduced to consumers starting in March 2023.

The GAO report also noted that FDA officials are currently implementing new cybersecurity authorities from past legislation and have not yet identified the need for any additional authority. 

According to FDA guidance, if medical device manufacturers do not fix cyber vulnerabilities, the agency can find that the manufacturers have violated federal law and can be penalized through enforcement actions.

The GAO report recommended that the FDA and CISA update their medical device cyber agreement to reflect organizational and procedural changes that have occurred. Both agencies agreed with the recommendations.

VOA faces internal backlash over newsroom guidance on use of generative AI to voice news reports
https://fedscoop.com/voice-of-america-ai-newsroom-synthetic-voicing-scripts/ | Fri, 22 Dec 2023

Journalists at VOA have pushed back on newsroom leadership’s AI policy regarding “synthetic voices,” documents obtained by FedScoop show.

Dozens of journalists and staff at Voice of America are strongly opposed to the state-owned news organization’s plan to use AI-generated synthetic voices, documents obtained by FedScoop show, with employees expressing concerns that the tool could breed mistrust with its audience, cause misinformation to spread and potentially eliminate jobs within the newsroom.

VOA, which has a weekly worldwide audience of approximately 326 million, is the largest and oldest of U.S. government-funded news networks and international broadcasters. 

The news organization released internal guidance on the use of artificial intelligence in November, following months of discussions with journalists and labor representatives that stirred up backlash and controversy within the news organization. 

FedScoop obtained the new AI guidance as well as a letter of opposition — signed by dozens of journalists within the news organization — that was sent to VOA leadership in October and has not been made public until now. 

“We are deeply concerned that a portion of the Artificial Intelligence guidance that the agency is preparing to issue will do more harm than good,” the signed letter said. “Specifically, we object to language that would allow Artificial Intelligence to be used ‘for voicing scripts.’” 

“We are also concerned that guidance that permits Artificial Intelligence to voice news reports and other products, even with the caveat that ‘a human being retains full control over the journalistic content,’ will allow for VOA to begin to replace on-air journalists with AI,” the letter continued.

Two VOA employees who asked to remain anonymous to avoid internal backlash told FedScoop that the newsroom is already experimenting with using AI-generated synthetic voices in its broadcasts. 

The two employees said that the U.S. Agency for Global Media, VOA’s parent agency, told union representatives that AI usage at VOA was not something they could bargain over. The two employees also said that VOA leadership rejected all proposals put forth by staff to change the AI guidance released in November. 

The VOA AI policy guidelines state that AI “may be used for voicing scripts, so long as a human being retains full control over the journalistic content. Furthermore, synthetic voices should never be used to impersonate or duplicate any individuals, including agency employees or public figures. This includes AI-generated content using an individual’s or employee’s likeness, image, and character.” 

Furthermore, the AI guidelines state that “if generative AI is used to research, develop ideas for, voice, or review VOA content, it must be acknowledged in a credit, tagline or ender or otherwise attributed in a script.”

Some editorial staffers at VOA say they are concerned that the newsroom’s AI guidelines are ambiguously written, leaving unclear the extent to which AI-generated synthetic voices could be used within news broadcasts sent to its audience of hundreds of millions of people.

“We’re not luddites,” a senior VOA journalist said. “This isn’t a knee-jerk opposition to AI, but if all of a sudden someone reading the news for a TV package with a voiceover or a radio report is not a real person, then how would anybody believe that the rest of what we’re doing is real? That’s our main objection.” 

“It’s going to blur the line between what is real and what is fake,” the journalist continued. “Our competitors are state broadcasters from Russia, China, Iran, and that’s what they do. They engage deliberately in misinformation and disinformation, blurring the line between real and fake. 

“And if we’re putting on the air AI voices that aren’t real, why would people trust that the content is real? I think there should be a real person behind every story, and my colleagues share that concern as well, the overwhelming majority of the primary correspondents of VOA, which is dozens and dozens of us.” 

In the objection letter sent to leadership by VOA staff and in interviews conducted by FedScoop with organization members, employees expressed frustration with management’s handling of AI policy in its guidelines and in private conversations, and worried that maintaining high journalistic standards was not the top priority.  

“A lot of the journalists at VOA are upset about this and have expressed this to management,” said Paula Hickey, a VOA employee and president of AFGE Local 1812. “They feel VOA management is not doing anything about their concerns, management is not talking about it or saying how they’re going to use the synthetic voices or if someday it will replace employees with it. It’s all very confusing and unsettling.”

“Voice of America already has many detractors — foreign governments around the world —  that accuse it of being propaganda,” Hickey continued. “And VOA has worked very hard over the years to battle this image. These AI-generated synthetic voices create a lot of concerns and risks about maintaining our integrity and could damage trust with our audience. If that happens, you can’t put the genie back in the bottle.” 

Some journalists at VOA are concerned that the new use of generative AI outlined in the guidelines could lead to job losses in the near future. The senior VOA journalist said those concerns have been relayed to and rebuffed by management.

“I would say a number of us believe they are making decisions not based fundamentally on concern for journalism. They see this as an efficiency move, and in the future it may be a way to significantly reduce the number of personnel, which would save them money.”

The senior VOA journalist added that the organization’s newscasters, “especially in Africa and in Asia, have a personal relationship with their audiences. A number of our broadcasters have been on the air for many, many years, through good times and bad. And there’s an emotional connection, a two-way emotional connection. You’re not going to have that with AI voices and newscasters.” 

VOA leadership responded to the concerns of its journalists and staff by highlighting the limited scope of AI usage that they envision for the newsroom. 

“Regarding the use of AI in voicing scripts, our briefings about the guidelines have made it clear that VOA will only narrowly use AI to add synthetic voicing to text-only copy that appears on our news and Learning English websites,” acting VOA Director John Lippman said in a statement to FedScoop. “That’s a practice that the Washington Post and many other news organizations have been doing for years.” 

Lippman and VOA leadership also said that the journalists who signed the opposition letter in October were “represented by one of the three unions and constitute fewer than 30 of the more than 1,500 VOA broadcasters.” 

The VOA leadership highlighted that the AI guidelines emphasize the need for employees to “be judicious and be transparent” when using generative AI tools. 

Journalists at VOA say they are unlikely in the long run to win the fight to restrict the use of AI within their newsroom, given the explosion in demand for the emerging technology. But the senior VOA journalist said there’s a feeling among some colleagues that they need to keep fighting for as long as they can.

“Are we fighting a losing battle here over a period of 50 or 100 years? Probably,” the journalist said. “I don’t know that any of us can control what’s going to happen decades from now. But we certainly can voice our concerns now for contemporary journalism and for our current broadcasts.” 

Federal chief data officers need more resources and authority to execute AI mission, survey finds
https://fedscoop.com/federal-chief-data-officers-need-more-resources-and-authority-to-execute-ai-mission-survey-finds/ | Tue, 19 Dec 2023

Deloitte’s Adita Karkera said the survey found that providing CDOs with more resources and authorities “can help agencies maximize the use of data to improve the delivery of important government services for people and families across the country.”

Federal chief data officers are embracing artificial intelligence but face challenges tied to resources, staff skillsets and lack of authority, according to a survey of federal CDOs released Tuesday. 

The nonprofit Data Foundation and Deloitte’s fourth annual Federal Chief Data Officers survey found that in order to successfully leverage cutting-edge technologies within their agencies, federal CDOs say they need more clarification from agency leadership about their roles and responsibilities.

“The 2023 Federal CDO survey underscores the continued expansion of the federal CDO role, including the critical function that federal CDOs play in their agency’s efforts to successfully adopt and implement AI technologies,” Adita Karkera, chief data officer for Deloitte’s government and public services practice, said in a statement. “Our survey also finds that providing federal CDOs with more resources and authorities can help agencies maximize the use of data to improve the delivery of important government services for people and families across the country.”

The role of CDOs within federal agencies, as well as the CDO Council, was established by Congress in the Foundations for Evidence-Based Policymaking Act of 2018, which required federal agencies to base new policy on data and created CDO roles responsible for cultivating agency data strategies.

The CDO Council is ushering in new leadership, Energy Department CDO Rob King announced during a live event Tuesday. Federal Energy Regulatory Commission CDO Kirsten Dalboe will be the new chair, King said, while he will assume the role of vice chair.

Nick Hart, president of the Data Foundation, said in a statement that “CDOs are key partners for program managers and the American people to enable data to be valuable and used, but CDOs also need the resources, guidance, and authority to ensure their role can effectively achieve the vision of the Evidence Act and be a core agency function.”

The Data Foundation and Deloitte outlined the following four key recommendations based on the 2023 CDO survey findings:

  • “Clarify CDO authorities and responsibilities to optimize organizational data and technology capabilities.
  • Provide training, professional development, and change management support to build maturity of CDO roles and data governance functions across federal agencies.
  • Equip CDOs with resources and staff needed to fully execute their mission of improving data infrastructure, governance, analytics, and strategic data use.
  • Develop clear ethical guidelines and governance frameworks to support CDOs in responsibly adopting emerging technologies like AI in service of their public mission.”

AI algorithms could be used to better forecast natural disasters, GAO report says
https://fedscoop.com/machine-learning-forecast-natural-disasters-gao/ | Sat, 16 Dec 2023

The GAO found that AI machine learning models could significantly improve warning time and preparedness for severe storms and natural disasters.

Artificial intelligence-driven algorithms can be used to improve forecasting models for natural disasters, saving lives and protecting property by rapidly analyzing massive data sets and identifying relevant patterns, a top government watchdog said in a report released Thursday.

Natural disasters result in hundreds of U.S. deaths and billions of dollars in damage annually, and machine learning AI tools could automate processes and glean new insights into weather patterns to improve warning time and preparedness during those events, the Government Accountability Office found.

“GAO found that machine learning, a type of artificial intelligence (AI) that uses algorithms to identify patterns in information, is being applied to forecasting models for natural hazards such as severe storms, hurricanes, floods, and wildfires, which can lead to natural disasters,” the GAO stated. 

“A few machine learning models are used operationally — in routine forecasting — such as one that may improve the warning time for severe storms. Some uses of machine learning are considered close to operational, while others require years of development and testing.”

GAO conducted the study by reviewing the use of machine learning to model severe storms, hurricanes, floods and wildfires, in addition to interviewing government, industry, academia and professional organization stakeholders. The watchdog also reviewed key reports and scientific literature on the subject.

The GAO study found that applying machine learning to natural disaster detection could reduce the time required to produce costly forecasts and increase model accuracy by more fully exploiting available data, using data that traditional models cannot, creating synthetic data to fill gaps, and reducing uncertainty in the forecasting models.
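As a toy illustration of the pattern-finding GAO describes, here is a minimal sketch of a classifier learning a severe-storm signal from labeled observations. It assumes scikit-learn and NumPy are available; the features, labels and thresholds are invented for the demo and resemble no agency's actual forecasting model.

```python
# Illustrative only: a minimal pattern-learning sketch in the spirit of
# the GAO finding. All data here is synthetic and invented for the demo.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "observations": columns might stand for instability, wind
# shear and humidity readings ahead of a storm.
X = rng.normal(size=(5000, 3))
# Invented ground truth: storms turn "severe" when instability and shear
# are jointly high; the XOR'd noise keeps the signal imperfect, as real
# observational data would be.
y = ((X[:, 0] + X[:, 1] > 1.5) ^ (rng.random(5000) < 0.05)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```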

The GAO study also found some challenges with the use of machine learning and AI, such as data limitations that hinder ML model training and result in lower accuracy in some regions, especially rural areas; concerns about bias and general distrust and misunderstanding of algorithms; the costliness of developing and running ML models; and a lack of understanding of the data being modeled.

GAO highlighted five policy options that could mitigate those challenges: work toward better data collection, sharing, and use; create more education and training options; target hiring and retention hurdles and specific resource shortfalls; take steps to account for bias and build trust in data and ML models; and maintain current efforts.

AI executive order will face challenges with talent pipeline, experts say
https://fedscoop.com/ai-executive-order-will-face-challenges-with-talent-pipeline-experts-say/ | Fri, 08 Dec 2023

Upskilling the current federal workforce and utilizing the Intergovernmental Personnel Act would be productive ways of making it easier to implement the AI executive order.

Artificial intelligence experts who are supportive of the White House’s recent executive order on AI expressed significant concerns Wednesday about the ability of federal agencies to meet the order’s aggressive deadlines, particularly due to a major shortage of tech talent and the reality that agency leaders are already spread too thin.

After a House Oversight Subcommittee hearing on the White House’s AI policy, key experts on the technology from academia, industry and nonprofits highlighted several challenges surrounding implementation of the policy document signed by President Joe Biden in October.

“Part of the recognition here that everyone concedes is, there’s not enough AI talent within agencies,” Daniel Ho, associate director of the Stanford Institute for Human-Centered Artificial Intelligence, told FedScoop. “And all you’re doing is slapping this title of [chief AI officer] onto an existing officer like a CIO or CTO, and that may not bring as much talent into the agency as you would like.” 

Ho, who serves on the Biden administration’s National Artificial Intelligence Advisory Committee, added that chief technology and chief information officers will likely not have the time and attention to handle the hundreds of items that the White House EO and the corresponding OMB AI memo require, noting that agency tech leaders can’t spend just 5 percent of their time as chief AI officers. 

The wide-ranging executive order, which aims to tackle everything from AI privacy risks to federal procurement, calls on several agencies to take on new responsibilities related to artificial intelligence. 

The order also addresses new strategies for federal use of the technology, including the issuing of guidance for agency deployment, helping agencies access AI systems through more efficient and less expensive contracting, and hiring more AI professionals within the government. 

As part of that recruitment effort, the White House’s AI.gov website is set to reveal a new AI-related jobs portal for prospective federal workers. The Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps and Presidential Innovation Fellowship are supposed to lead this hiring initiative. 

Nevertheless, some tech industry executives are concerned that AI innovations will not be fully utilized within the government, and said that there needs to be more funding to manage the risks around AI.

“Government has very thorny problems that are specific to government use, and we need to be able to leverage the AI,” Ross Nodurft, executive director of tech trade group Alliance for Digital Innovation, told FedScoop. “We need to make sure that the AI executive order and the OMB memo have very clear ways of taking advantage of current AI processes that are already set up so that it’s easier for agencies to consume that technology.”

Although the AI executive order has largely received a warm welcome from AI experts and government leaders, a handful of tech industry associations have said the EO is too confusing, too broad and could potentially stifle innovation.

“I also think that there needs to be continued funding for some of the agencies, whether it’s federal entities doing the oversight, or federal entities that are buying and consuming the technology,” added Nodurft, the former chief of OMB’s cyber team. “We have to focus on funding for us to be successful in managing the risks associated with AI.”

AI academic scholars and lawyers such as Ho say that pouring resources into upskilling the current federal workforce and more fully utilizing the Intergovernmental Personnel Act would be productive ways of making it easier to implement the AI executive order. 

“More training of the existing federal workforce when it comes to AI would be good,” Ho said. “The Intergovernmental Personnel Act, I think, is a really good model that a number of agencies have taken advantage of. … You can have folks from qualified nonprofits or academic institutions and onboard them to serve in the government in this kind of a capacity.”

Caroline Nihill contributed to this article.

VA hires unnamed senior executive to clean up website benefits issues
https://fedscoop.com/va-hires-unnamed-senior-executive-to-clean-up-website-benefits-issues/ | Tue, 05 Dec 2023

During a House Veterans’ Affairs subcommittee hearing on VA.gov, VA CIO Kurt DelBene couldn’t name the newly hired executive charged with fixing website issues when asked to.

The Department of Veterans Affairs has hired a new senior executive to quickly fix serious issues on the department’s VA.gov website related to mishandled claims and access benefits — but the identity of that executive isn’t known to a top VA tech official. 

During a House Veterans’ Affairs Subcommittee on Technology Modernization hearing Monday, Chairman Matt Rosendale, R-Mont., pressed Kurt DelBene, the VA’s assistant secretary for information and technology and CIO, on his agency’s response to a Sept. 6 letter regarding problems with VA.gov.

The letter, sent by Rep. Mike Bost, R-Ill., chairman of the full House Veterans’ Affairs Committee, to VA Secretary Denis McDonough, sought agency explanations for recent VA.gov problems.

A VA IT investigation found that more than 56,000 veterans who submitted a request to update their dependents — mostly adding or removing spouses or children — “did not have those claims successfully processed by VA.gov,” VA press secretary Terrence Hayes said in a statement in September. Those IT errors and website issues had been occurring for some veterans as far back as 2011 and could affect their monthly benefit payments, the VA acknowledged in September.

Rosendale said that subcommittee members received the VA response to Bost’s letter less than an hour before Monday’s hearing. “It took two hearings by this subcommittee to shake this response loose, and that is absolutely unacceptable,” he said.

According to Rosendale, the letter stated that the VA’s Office of Information and Technology “brought on a new senior executive who directly reports” to DelBene, and that person “will ensure issues related to mishandled claims and veterans unable to access a benefit application are rapidly fixed.”

Rosendale then asked DelBene for the name of the executive and why they weren’t present at the hearing.

“I actually hold myself responsible for making any VA.gov fixes,” DelBene said, “and I’m not sure” who the letter refers to. “I’ll have to get back on reference to the actual executive” in charge, per the letter.

“I know all my senior executives, but I’m trying to give you the correct information in terms of who you’re referencing,” he added.

Rosendale expressed surprise at the answer and said that within a day, he expected DelBene to provide the subcommittee with the name of the senior executive in charge of VA.gov and explain why they weren’t present at the hearing.

Bipartisan Senate bill to ban TSA use of facial recognition technology gains support of civil rights groups
https://fedscoop.com/bipartisan-senate-bill-to-ban-tsa-use-of-facial-recognition-technology-gains-support-of-civil-rights-groups/ | Sat, 02 Dec 2023

The bill aims to tackle TSA’s proposed plan to implement facial recognition scans at over 430 U.S. airports within the next several years.

A bipartisan group of senators introduced legislation this week that would ban the use of facial recognition technology and the collection of facial biometric data by the Transportation Security Administration in U.S. airports.

The Traveler Privacy Protection Act aims to tackle TSA’s proposed plan to implement facial recognition scans at over 430 U.S. airports within the next several years. The bill was sponsored by Sens. Jeff Merkley, D-Ore., John Kennedy, R-La., Edward Markey, D-Mass., Roger Marshall, R-Kan., Bernie Sanders, I-Vt., and Elizabeth Warren, D-Mass.

“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” said Kennedy in a statement. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”

Civil and digital rights groups like the ACLU, Electronic Privacy Information Center and others have come out strongly in favor of the legislation, which they say will tackle facial recognition technology’s infringement on people’s privacy and discriminatory practices against people of color and women in particular.

“This bill will most help marginalized communities like Muslim Americans, Black, Indigenous, People of Color and others systematically targeted by law enforcement and TSA,” said Albert Cahn, the executive director and founder of the Surveillance Technology Oversight Project (S.T.O.P.).

“No one should have this invasive and harmful tech used against them when the mistakes of this tech are so great. We’ve seen so many people wrongly convicted for crimes they didn’t commit and TSA’s mass adoption of facial recognition could allow faulty algorithmic analysis arrests to go through the roof,” Cahn told FedScoop during an interview.

In particular, Cahn said that TSA “has a really dubious track record with tech procurement,” because it spends millions of tax dollars on bag scanners and other technology “that their own analysis shows misses weapons and aren’t effective.”

“Many of us are not willing to criticize TSA because we want peace of mind and security when we travel. But the agency’s track record doesn’t inspire much confidence at all, so we shouldn’t accept facial recognition as a false safety blanket,” said Cahn.

Some leaders in the Senate said attempts to stop TSA’s facial recognition technology from scaling have not succeeded and new legislation is needed.

“Passengers should not have to choose between safety and privacy when they travel. Despite our repeated calls for TSA to halt its unacceptable use of facial recognition technologies, the agency has continued to expand its use across the country,” Sen. Markey said in a statement.
