AI models Archives | FedScoop
https://fedscoop.com/tag/ai-models/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

CISA’s chief data officer: Bias in AI models won’t be the same for every agency
https://fedscoop.com/ai-models-bias-datasets-cisa-chief-data-officer/
Wed, 24 Apr 2024 20:24:19 +0000

Monitoring and logging are critical for agencies as they assess datasets, though “bias-free data might be a place we don’t get to,” the federal cyber agency’s CDO says.

As chief data officer for the Cybersecurity and Infrastructure Security Agency, Preston Werntz has made it his business to understand bias in the datasets that fuel artificial intelligence systems. With a dozen AI use cases listed in CISA’s inventory and more on the way, one especially conspicuous data-related realization has set in.

“Bias means different things for different agencies,” Werntz said during a virtual agency event Tuesday. Bias that “deals with people and rights” will be relevant for many agencies, he added, but for CISA, the questions become: “Did I collect data from a number of large federal agencies versus a small federal agency [and] did I collect a lot of data in one critical infrastructure sector versus in another?”

Internal gut checks of this kind are likely to become increasingly important for chief data officers across the federal government. CDO Council callouts in President Joe Biden’s AI executive order cover everything from the hiring of data scientists to the development of guidelines for performing security reviews.

For Werntz, those added AI-related responsibilities come with an acknowledgment that “bias-free data might be a place we don’t get to,” making it all the more important for CISA to “have that conversation with the vendors internally about … where that bias is.”

“I might have a large dataset that I think is enough to train a model,” Werntz said. “But if I realize that data is skewed in some way and there’s some bias … I might have to go out and get other datasets that help fill in some of the gaps.”
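The article doesn't describe CISA's actual tooling, but the kind of skew Werntz describes, far more data from some agencies or sectors than others, can be surfaced with a simple ratio check over category labels. A minimal sketch in Python, with hypothetical sector labels:

```python
from collections import Counter

def sector_skew(records, key="sector", max_ratio=5.0):
    """Flag imbalance when the most-represented category outnumbers
    the least-represented one by more than max_ratio."""
    counts = Counter(r[key] for r in records)
    most, least = max(counts.values()), min(counts.values())
    ratio = most / least
    return ratio, ratio > max_ratio

# Hypothetical collection skewed heavily toward one sector.
records = (
    [{"sector": "energy"}] * 500
    + [{"sector": "water"}] * 40
    + [{"sector": "transport"}] * 60
)
ratio, skewed = sector_skew(records)
print(f"max/min ratio: {ratio:.1f}, skewed: {skewed}")
```

A check like this only detects representation imbalance; deciding whether to "go out and get other datasets," as Werntz puts it, still requires human judgment about which categories matter.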

Given the high-profile nature of agency AI use cases — and critiques that inventories are not fully comprehensive or accurate — Werntz said there’s an expectation of additional scrutiny on data asset purchases and AI procurement. As CISA acquires more data to train AI models, that will have to be “tracked properly” in the agency’s inventory so IT officials “know which models have been trained by which data assets.” 
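The article doesn't specify how CISA's inventory links models to data assets, but the requirement Werntz states, knowing "which models have been trained by which data assets," amounts to lineage bookkeeping. One hypothetical shape for such a registry, with made-up model and dataset names:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    trained_on: set = field(default_factory=set)

class Inventory:
    """Track which datasets each model was trained on, and answer the
    reverse question: which models does a given dataset affect?"""
    def __init__(self):
        self.models = {}

    def register_training(self, model, dataset_id):
        self.models.setdefault(model, ModelRecord(model)).trained_on.add(dataset_id)

    def models_using(self, dataset_id):
        return [m for m, rec in self.models.items() if dataset_id in rec.trained_on]

inv = Inventory()
inv.register_training("phishing-triage-v2", "ds-fed-incidents-2023")
inv.register_training("phishing-triage-v2", "ds-sector-energy")
inv.register_training("log-anomaly-v1", "ds-fed-incidents-2023")
print(inv.models_using("ds-fed-incidents-2023"))
```

The reverse lookup is what makes scrutiny of data purchases actionable: if a dataset later turns out to be biased or improperly acquired, the agency can enumerate every model it touched.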

Adopting “data best practices and fundamentals” and monitoring for model drift and other potentially problematic AI behaviors is also top of mind for Werntz, who emphasized the importance of performance and security logging. That comes back to having an awareness of AI models’ “data lineage,” especially as data is “handed off between systems.” 
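As a rough illustration of what drift monitoring with logging can look like (this is a generic sketch, not CISA's approach), one common baseline check flags a feature whose current mean has moved several baseline standard deviations away:

```python
import logging
import statistics

logging.basicConfig(level=logging.INFO)

def mean_shift(baseline, current, threshold=2.0):
    """Flag drift when the current mean sits more than `threshold`
    baseline standard deviations from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    z = abs(statistics.mean(current) - mu) / sigma
    drifted = z > threshold
    if drifted:
        logging.warning("possible drift: z=%.2f", z)
    return z, drifted

# Hypothetical feature values at training time vs. in production.
baseline = [10, 11, 9, 10, 10]
current = [15, 16, 14]
z, drifted = mean_shift(baseline, current)
```

Logging the score rather than just the boolean preserves the lineage trail Werntz mentions: later audits can see not only that drift was flagged, but how severe it was and when.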

Beyond CISA’s walls, Werntz said he’s focused on sharing lessons learned with other agencies, especially when it comes to how they acquire, consume, deploy and maintain AI tools. He’s also keeping an eye out for technologies that will support data-specific efforts, including those involving tagging, categorization and lineage.

“There’s a lot of onus on humans to do this kind of work,” he said. “I think there’s a lot of AI technologies that can help us with the volume of data we’ve got.” CISA wants “to be better about open data,” Werntz added, making more of it available to security researchers and the general public. 

The agency also wants its workforce to be trained on commercial generative AI tools, with some guardrails in place. As AI “becomes more prolific,” Werntz said internal trainings are all about “changing the culture” at CISA to instill more comfort in working with the technology.

“We want to adopt this. We want to embrace this,” Werntz said. “We just need to make sure we do it in a secure, smart way where we’re not introducing privacy and safety and ethical kinds of concerns.” 

Senate bill would require platforms to get consumer consent before their data is used for AI-model training
https://fedscoop.com/consumer-data-consent-training-ai-models-senate-bill/
Wed, 20 Mar 2024 19:35:51 +0000

The legislation from Democratic Sens. Welch and Luján calls on the FTC to pursue enforcement actions against companies that don’t get sign-off from consumers.

Online platforms would need to get consent from consumers before using their data to train AI models under new legislation from a pair of Senate Democrats.

If a company fails to obtain that express informed consent from consumers prior to AI model training, it would be deemed a deceptive or unfair practice and result in enforcement action from the Federal Trade Commission, under the Artificial Intelligence Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) Act, introduced Wednesday by Sens. Peter Welch, D-Vt., and Ben Ray Luján, D-N.M.

“The AI CONSENT Act gives a commonsense directive to artificial intelligence innovators: get the express consent of the public before using their private, personal data to train your AI models,” Welch said in a statement. “This legislation will help strengthen consumer protections and give Americans the power to determine how their data is used by online platforms. We cannot allow the public to be caught in the crossfire of a data arms race, which is why these privacy protections are so crucial.”

Added Luján: “Personally identifiable information should not be used to train AI models without consent. The use of personal data by online platforms already poses great risks to our communities, and artificial intelligence increases the potential for misuse.” 

The bill seeks to create standards for disclosures, including a requirement that platforms provide instructions to consumers for how they can affirm or rescind their consent. The option to grant or revoke consent should be made available “at any time through an accessible and easily navigable mechanism,” the bill states; and the selection to withhold or reverse consent must be “at least as prominent as the option to accept” while taking “the same number of steps or fewer as the option to accept.”
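The bill specifies user-interface prominence and step counts rather than an API, but its symmetry requirement, that revoking consent take no more steps than granting it, can be sketched as a registry where grant and revoke are single, equivalent operations. Names here are illustrative, not drawn from the bill:

```python
class ConsentRegistry:
    """Grant and revoke are symmetric one-step operations, mirroring the
    bill's rule that withdrawing consent take no more steps than accepting."""
    def __init__(self):
        self._consented = set()

    def grant(self, user_id):
        self._consented.add(user_id)

    def revoke(self, user_id):
        self._consented.discard(user_id)

    def may_train_on(self, user_id):
        # Default is no consent: training is permitted only after an
        # explicit grant, i.e. express informed opt-in.
        return user_id in self._consented

reg = ConsentRegistry()
reg.grant("u123")
print(reg.may_train_on("u123"))
reg.revoke("u123")
print(reg.may_train_on("u123"))
```

The design choice worth noting is the default: absent a recorded grant, `may_train_on` returns False, which is the opt-in posture the bill mandates rather than an opt-out one.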

The legislation includes various provisions to regulate how the disclosures are presented by platforms, including specifications on visual effects such as font and type size, the placement of a disclosure on a platform, and how to ensure that the “brevity, accessibility and clarity” of disclosures ensure that they can be “understood by a reasonable person.”

Within a year of the proposed legislation’s adoption, the FTC would be required to produce a report for the Senate Commerce, Science and Transportation and House Energy and Commerce committees on how technically feasible it would be to “de-identify” data as the pace of AI development quickens. The agency would also be charged in the report with assessing measures that platforms could pursue to de-identify user data.
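The FTC report would assess real de-identification measures; purely as a toy sketch (salted hashing alone is not robust de-identification and can be vulnerable to re-identification), dropping direct identifiers while keeping a linkable key might look like:

```python
import hashlib

# Hypothetical set of direct-identifier fields to strip.
PII_FIELDS = {"name", "email", "ssn"}

def deidentify(record, salt="assumed-secret-salt"):
    """Drop direct identifiers and replace them with a salted hash of the
    email, so records from the same user can still be linked."""
    out = {k: v for k, v in record.items() if k not in PII_FIELDS}
    digest = hashlib.sha256((salt + record["email"]).encode()).hexdigest()
    out["user_key"] = digest[:12]
    return out

rec = {"name": "Ada", "email": "ada@example.com", "zip": "20500"}
print(deidentify(rec))
```

Even this sketch shows why the bill asks the FTC to study feasibility: quasi-identifiers like ZIP code survive the transformation, and linking them with outside data can re-identify people, which is exactly the gap formal techniques such as k-anonymity or differential privacy try to address.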

The legislation, which has the backing of the National Consumers League and the consumer rights advocacy nonprofit Public Citizen, is one in a series of AI-related bills coming out of the Senate this year, following 2023 efforts in the chamber on everything from labels and disclosures on AI products to certification processes for critical-impact AI systems.   

The Biden administration, meanwhile, has shown a particular interest in open foundation models, while FTC Chair Lina Khan earlier this year announced an agency probe into AI models that unlawfully collect data that jeopardizes fair competition. 

“The drive to refine your algorithm cannot come at the expense of people’s privacy or security, and privileged access to customers’ data cannot be used to undermine competition,” Khan said during the FTC Tech Summit in January. “We similarly recognize the ways that consumer protection and competition enforcement are deeply connected with privacy violations fueling market power, and market power, in turn, enabling firms to violate consumer protection laws.”
