bias Archives | FedScoop https://fedscoop.com/tag/bias/

CISA’s chief data officer: Bias in AI models won’t be the same for every agency https://fedscoop.com/ai-models-bias-datasets-cisa-chief-data-officer/ Wed, 24 Apr 2024 20:24:19 +0000 https://fedscoop.com/?p=77573 Monitoring and logging are critical for agencies as they assess datasets, though “bias-free data might be a place we don’t get to,” the federal cyber agency’s CDO says.

As chief data officer for the Cybersecurity and Infrastructure Security Agency, Preston Werntz has made it his business to understand bias in the datasets that fuel artificial intelligence systems. With a dozen AI use cases listed in CISA’s inventory and more on the way, one especially conspicuous data-related realization has set in.

“Bias means different things for different agencies,” Werntz said during a virtual agency event Tuesday. Bias that “deals with people and rights” will be relevant for many agencies, he added, but for CISA, the questions become: “Did I collect data from a number of large federal agencies versus a small federal agency [and] did I collect a lot of data in one critical infrastructure sector versus in another?”

Internal gut checks of this kind are likely to become increasingly important for chief data officers across the federal government. CDO Council callouts in President Joe Biden’s AI executive order cover everything from the hiring of data scientists to the development of guidelines for performing security reviews.

For Werntz, those added AI-related responsibilities come with an acknowledgment that “bias-free data might be a place we don’t get to,” making it all the more important for CISA to “have that conversation with the vendors internally about … where that bias is.”

“I might have a large dataset that I think is enough to train a model,” Werntz said. “But if I realize that data is skewed in some way and there’s some bias … I might have to go out and get other datasets that help fill in some of the gaps.”
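A gut check like that can start with simply tabulating how many records each source or sector contributes before a model is trained. The snippet below is a hypothetical sketch of such a skew check, not CISA tooling; the column names, threshold and pandas-based approach are assumptions for illustration.

```python
import pandas as pd

def flag_underrepresented(df: pd.DataFrame, group_col: str = "sector",
                          min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the training data and flag groups that fall
    below a minimum share, as a rough first pass at spotting skew."""
    shares = df[group_col].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

# Made-up example: records drawn unevenly from critical infrastructure sectors
records = pd.DataFrame({"sector": ["energy"] * 900 + ["water"] * 80 + ["dams"] * 20})
print(flag_underrepresented(records))
# A sector flagged here might prompt acquiring additional datasets to fill the gap.
```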

Given the high-profile nature of agency AI use cases — and critiques that inventories are not fully comprehensive or accurate — Werntz said there’s an expectation of additional scrutiny on data asset purchases and AI procurement. As CISA acquires more data to train AI models, that will have to be “tracked properly” in the agency’s inventory so IT officials “know which models have been trained by which data assets.” 

Adopting “data best practices and fundamentals” and monitoring for model drift and other potential problems is also top of mind for Werntz, who emphasized the importance of performance and security logging. That comes back to having an awareness of AI models’ “data lineage,” especially as data is “handed off between systems.” 
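Keeping that lineage straight can begin with a ledger recording which data assets fed which model version. The sketch below is a minimal illustration under assumed names; none of the identifiers come from CISA’s actual inventory.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelLineage:
    """Minimal record tying a trained model version to the data assets behind it."""
    model_name: str
    version: str
    trained_on: date
    data_assets: list[str] = field(default_factory=list)

    def add_asset(self, asset_id: str) -> None:
        # Log every dataset used, so auditors can trace a model back to its inputs.
        self.data_assets.append(asset_id)

# Hypothetical entry: which inventoried assets trained this model version
lineage = ModelLineage("phishing-triage", "1.2", date(2024, 4, 1))
lineage.add_asset("asset-incident-reports-2023")
lineage.add_asset("asset-sector-telemetry-q1")
print(lineage)
```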

Beyond CISA’s walls, Werntz said he’s focused on sharing lessons learned with other agencies, especially when it comes to how they acquire, consume, deploy and maintain AI tools. He’s also keeping an eye out for technologies that will support data-specific efforts, including those involving tagging, categorization and lineage.

“There’s a lot of onus on humans to do this kind of work,” he said. “I think there’s a lot of AI technologies that can help us with the volume of data we’ve got.” CISA wants “to be better about open data,” Werntz added, making more of it available to security researchers and the general public. 

The agency also wants its workforce to be trained on commercial generative AI tools, with some guardrails in place. As AI “becomes more prolific,” Werntz said internal trainings are all about “changing the culture” at CISA to instill more comfort in working with the technology.

“We want to adopt this. We want to embrace this,” Werntz said. “We just need to make sure we do it in a secure, smart way where we’re not introducing privacy and safety and ethical kinds of concerns.” 

DHS names Eric Hysen chief AI officer, announces new policies for AI acquisition and facial recognition  https://fedscoop.com/dhs-names-eric-hysen-chief-ai-officer-announces-new-policies-for-ai-acquisition-and-facial-recognition/ Fri, 15 Sep 2023 18:35:20 +0000 https://fedscoop.com/?p=72952 The new policies focus on responsible acquisition and use of AI and machine learning, and governance of facial recognition applications.

The Department of Homeland Security on Thursday released new policies regarding the acquisition and use of artificial intelligence and named its first chief AI officer to help champion the department’s responsible adoption of AI.

In a release, DHS Secretary Alejandro Mayorkas announced the directives — one to guide the acquisition and use of AI and machine learning, and another to govern facial recognition applications — and named department CIO Eric Hysen as its chief AI officer.

The new policies were developed by DHS’s Artificial Intelligence Task Force (AITF), which was created in April 2023.

The news comes after the Government Accountability Office released a report earlier this month outlining DHS’s lack of policies and training for law enforcement personnel on facial recognition technology. 

“Artificial intelligence is a powerful tool we must harness effectively and responsibly,” DHS Secretary Alejandro Mayorkas said in a statement. “Our Department must continue to keep pace with this rapidly evolving technology, and do so in a way that is transparent and respectful of the privacy, civil rights, and civil liberties of everyone we serve.”

The release explains that DHS already uses AI in several ways, “including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. These new policies establish key principles for the responsible use of AI and specify how DHS will ensure that its use of face recognition and face capture technologies is subject to extensive testing and oversight.”

As DHS’s newly appointed chief AI officer, Hysen will work to promote innovation and safety in the department’s uses of AI and advise Mayorkas and other DHS leadership.

 “Artificial intelligence provides the department with new ways to carry out our mission to secure the homeland,” Hysen said in a statement. “The policies we are announcing today will ensure that the Department’s use of AI is free from discrimination and in full compliance with the law, ensuring that we retain the public’s trust.”

During the past two years of the Biden administration, multiple prominent civil rights groups have harshly criticized DHS’s approach to facial recognition, particularly its contracts with the controversial tech company Clearview AI, which continues to work with the agency.

“DHS claims this technology is for our public safety, but we know the use of AI technology by DHS, including ICE, increases the tools at their disposal to surveil and criminalize immigrants at a new level,” Paromita Shah, executive director of Just Futures Law, a legal nonprofit focused on immigrants and criminal justice issues, said in a statement on the new policies. 

“We remain skeptical that DHS will be able to follow basic civil rights standards and transparency measures, given their troubling record with existing technologies. The infiltration of AI into the law enforcement sector will ultimately impact immigrant communities,” Shah added. 

Reps. Buck and Lieu: AI regulation must reduce risk without sacrificing innovation https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/ https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/#respond Wed, 05 Jul 2023 15:06:48 +0000 https://fedscoop.com/?p=70059 In interviews with FedScoop, the congressional AI leaders share their unique and at times contrasting visions for regulation of the technology.

Two leading congressional AI proponents, Rep. Ted Lieu, a California Democrat, and Rep. Ken Buck, a Colorado Republican, are working to boost the federal government’s ability to foster AI innovation through increased funding and competition while also reducing major risks associated with the technology.

Last week each lawmaker shared with FedScoop their own unique vision for how Congress and the federal government should approach AI in the coming months, with Lieu criticizing parts of the European Union’s proposed AI Act while Buck took a shot at the White House’s AI Bill of Rights blueprint.

Buck and Lieu recently worked together to introduce a bill which would create a blue-ribbon commission on AI to develop a comprehensive framework for the regulation of the emerging technology and earlier this year introduced a bipartisan bill to prevent AI from making nuclear launch decisions.

The bicameral National AI Commission Act would create a 20-member commission to explore AI regulation, including how regulation responsibility is distributed across agencies, the capacity of agencies to address challenges relating to regulation, and alignment among agencies in their enforcement actions. 

The AI Commission bill is one of several potential solutions for regulating the technology proposed by lawmakers, including Senate Majority Leader Chuck Schumer, who recently introduced a plan to develop comprehensive legislation in Congress to regulate and advance the development of AI in the U.S.

Buck said he would like “experts studying AI from trusted groups like the Bull Moose project and other think tanks, including American Compass,” to be a part of the AI commission. 

Buck and Lieu are both strongly focused on ensuring Congress and the federal government allow AI companies and their tools to keep innovating so the U.S. stays ahead of adversaries like China, while ensuring any harms caused by the technology are understood and mitigated. 

With respect to increasing and supporting AI innovation in the U.S., Lieu said he is currently pushing for more funding within the congressional appropriations process for AI safety, research and innovation that the federal government would disburse to qualified entities and institutions.

“I would like to see more funding from the government to research centers that create AI and to have different grants available for people who want to work on AI safety and AI risks and AI innovation,” said Lieu, who is a member of the House Artificial Intelligence Caucus and one of three members of Congress with a computer science degree.

Buck, on the other hand, highlighted that one of the keys to encouraging AI innovation is the government ensuring that “we don’t have a single controlling entity, that we have dispersed AI competition,” in order to “make sure that we don’t have a Google in the AI space. I don’t mean Google specifically but I mean, I want to make sure we have five or six major generative AI competitors in the space,” he said.

For the past two years, Buck was the top Republican on the powerful House antitrust subcommittee and has played a key role in forging a bipartisan agreement in Congress that would rein in Big Tech companies such as Google, Amazon, Facebook, and Apple for anticompetitive activities.

Buck also said he’s not in favor of the regulatory approach championed by Sam Altman, CEO of ChatGPT maker OpenAI, which calls for the creation of a new federal agency to license and regulate large AI models. That proposal was floated by Altman along with other legislative ideas during congressional testimony in May.

“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group. So I think dispersing that oversight within the government is important,” Buck told FedScoop during an interview in his Congressional office on Capitol Hill. 

“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group.”

Rep. Ken Buck, R-Colo.

Tech giant Google has also pushed the federal government to divide up oversight of AI tools across agencies rather than creating a single regulator focused on the technology, in contrast with rivals like Microsoft and OpenAI. 

Kent Walker, Google’s president of global affairs, told the Washington Post in June that he was in favor of a “hub-and-spoke model” of federal regulations that he argued is better suited to deal with how AI is affecting the U.S. economy than the “one-size-fits-all approach” of creating a single agency devoted to the issue.

When asked which AI regulatory framework he supports, Buck said the main frameworks currently being debated in Washington, including the National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework, the White House’s AI Bill of Rights blueprint and the EU’s proposed AI Act, all have “salvageable items.”  

[Photo: Rep. Ken Buck (R-Colo.) questions U.S. Attorney General William Barr during a House Judiciary Committee hearing on Capitol Hill, July 28, 2020, in Washington, D.C. Chip Somodevilla/Getty Images]

However, Buck added that the White House’s AI Bill of Rights “has some woke items that won’t find support across partisan lines,” indicating Republicans will push back against parts of the blueprint, which consists of five key principles for the regulation of AI technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

On the other hand, Lieu, a Democrat, is strongly in favor of the White House’s AI blueprint which is intended to address concerns that unfettered use of AI in certain scenarios may cause discrimination against minority groups and further systemic inequality.

“The biggest area of AI use with the government [of concern] would be AI that has some sort of societal harm, such as discrimination against certain groups. Facial recognition technology that is less accurate for people with darker skin, I think we have to put some guardrails on that,” Lieu told FedScoop during a phone interview last week.  

“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval,” Lieu said.

“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval.”

Rep. Ted Lieu, D-Calif.

Lieu added that the federal government should be focused on regulating or curtailing AI that could be used to hack or cyberattack institutions and companies and how to mitigate such dangerous activity. 

In a paper examining popular generative AI tool ChatGPT’s code-writing model known as Codex, which powers GitHub’s Co-Pilot assistant, OpenAI researchers observed that the AI model “can produce vulnerable or misaligned code” and could be “misused to aid cybercrime.” The researchers added that while “future code generation models may be able to be trained to produce more secure code than the average developer,” getting there “is far from certain.” 
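A generic illustration of the kind of vulnerability the researchers describe (this snippet is not drawn from the Codex paper): a generated query that splices user input directly into SQL is open to injection, while a parameterized query is not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable pattern a code model might emit: input concatenated into the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver handles escaping, closing the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row despite the bogus name
print(find_user_safe(payload))    # returns nothing, as intended
```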

Lieu also pointed to “AI that can be very good at spreading disinformation and microtargeting people with misinformation,” which he said needs to be addressed, and highlighted that AI will cause “there to be disruption in the labor force. And we need to think about how we’re going to mitigate that kind of disruption.”

Alongside the White House’s AI blueprint, Lieu said he was strongly in favor of NIST’s voluntary AI Risk Management Framework, which is focused on helping the private sector and eventually federal agencies build responsible AI systems centered on four key principles: govern, map, measure and manage.

However, Lieu took issue with parts of the EU’s AI Act, which was proposed earlier this year and is currently being debated, and which, unlike the White House AI blueprint and the NIST framework, would be mandatory by law for all entities to follow.

“My understanding is that the EU AI Act has provisions in it that for example, would prevent or dissuade AI from analyzing human emotions. I think that’s just really stupid,” Lieu told FedScoop during the interview.  

“Because one of the ways humans communicate is through emotions. And I don’t understand why you would want to prevent AI from getting the full communications of the individual if the interviewer chooses to communicate that to the AI,” Lieu added.

Labor Dept to address use of biometric tech by state workforce agencies https://fedscoop.com/labor-dept-to-address-use-of-biometric-tech-by-state-workforce-agencies/ Thu, 06 Apr 2023 22:21:24 +0000 https://fedscoop.com/?p=67474 The Labor Department’s Employment and Training Administration will in September issue new guidance on the use of facial recognition technology by state workforce agencies. The new guidance is expected to require that users are provided with an alternative identity verification option and that state labor agencies carry out testing and mitigation for bias before implementing […]

The Labor Department’s Employment and Training Administration will in September issue new guidance on the use of facial recognition technology by state workforce agencies.

The new guidance is expected to require that users are provided with an alternative identity verification option and that state labor agencies carry out testing and mitigation for bias before implementing any such tools.

Labor is responding to an Office of Inspector General (OIG) report released last week that found that the use of facial recognition technology by identity verification contractors may not result in equitable and secure access to unemployment insurance (UI) at state workforce agencies (SWAs).

“While [Labor Department’s] ETA has issued guidance on identity verification and also on UI benefit equity, ETA has provided minimal guidance that specifically addresses facial recognition technology in administering UI benefits,” the Labor Department’s assistant inspector general Carolyn Hantz wrote in a memo to Brent Parton, Acting Assistant Secretary for Employment and Training.

“Without comprehensive guidance, SWAs are at risk of using technology that discriminates against claimants entitled to receive UI benefits and of not adequately safeguarding claimants,” the memo stated.

The Labor Department will provide the new guidance by Sept. 30.

HHS IT coordinator researching algorithmic bias and implications for health equity https://fedscoop.com/onc-health-equity-algorithmic-bias/ Wed, 13 Apr 2022 20:58:06 +0000 https://fedscoop.com/?p=50445 The office is also working with the CDC on a cloud infrastructure that is intended to improve inter-agency data sharing.

The Department of Health and Human Services is investigating sources of algorithmic bias as part of its effort to ensure health equity by design, according to Secretary Xavier Becerra.

Becerra tasked the Office of the National Coordinator for Health IT with the research given its work with vendors of electronic health records, which are increasingly the source of data used to train and develop algorithms.

ONC has found that algorithms developed by, say, the Mayo Clinic in Rochester, Minnesota, might not apply to hospitals in San Juan, Puerto Rico, as the Biden administration prioritizes more equitable health outcomes nationally, according to the HHS secretary.

“As part of the effort, I’ve asked ONC to take a deep look at algorithmic bias and its implications for health equity to ensure that all Americans get the benefits that modern analytic technologies can provide,” Becerra said during the ONC 2022 Annual Meeting on Wednesday.

ONC has also begun working with the Centers for Disease Control and Prevention on what’s being informally called its “north star architecture,” a more cloud-oriented environment to support the federated public health infrastructure across the U.S. The north star architecture is part of the CDC’s Public Health Data Modernization Initiative and includes a collaborative governance model co-chaired by both agencies and including state, local, tribal and territorial public health agencies.

For ONC’s part, it will release more use case-specific data as part of its U.S. Core Data for Interoperability+ (USCDI+) initiative to create a nationwide public health data model.

“The idea is to create an infrastructure that allows the benefits of what cloud-hosted architecture can provide and cloud-native solutions can provide but also still give the jurisdictions — the state, local, tribal and territorial public health agencies — the autonomy that they need and is a part of the Constitution,” said Micky Tripathi, national coordinator for health IT.

Tripathi called 2022 a “pivotal” year in the U.S.’s transition to “digitally native” health care while admitting faxing is still “hiding in plain sight” across the delivery system. He’s both encouraged by the commitment of health care providers, technology developers and health information networks to meeting the new Fast Healthcare Interoperability Resources (FHIR) data standard for health information sharing and concerned providers may not be aware of all the deadlines and requirements.

ONC plans to launch pilots of different patterns of support for FHIR in early 2023. Patterns include non-brokered or facilitated FHIR, which allows for the use of network infrastructure like endpoint directories, record-location services and security certificates to make it easier for applications to connect and the standard itself to scale.

The agency is additionally working with the CDC to launch the Helios FHIR Accelerator, a public-private initiative to streamline data sharing through new use cases and ultimately speed up modernization of public health technology.

Tripathi hopes to escape the cycle of industry doing the bare minimum, forcing agencies including ONC to issue more detailed regulations.

“I think one of the scourges of our industry is the minimum viable compliance problem,” Tripathi said. “That is doing just enough to meet the letter of a regulation and not embracing the spirit or the opportunity of what we can do together.”

NIST takes socio-technical approach to AI bias en route to management standard https://fedscoop.com/nist-socio-technical-ai-bias/ Wed, 30 Mar 2022 20:16:59 +0000 https://fedscoop.com/?p=49403 Experts have argued societal values should factor into AI development and use, and new guidance shows NIST has been listening.

More research into human and societal sources of bias in artificial intelligence systems is needed before government can hope to establish a standard for assuring their trustworthiness, technology experts say.

AI developers and users tend to focus on the representativeness of datasets and fairness of machine learning algorithms. But according to National Institute of Standards and Technology guidance published March 15, a socio-technical approach to building and deploying systems is needed.

Trustworthy and responsible AI experts have argued societal values should factor into development and use, but Special Publication (SP) 1270 is the first document in which NIST consistently recognizes such systems as socio-technical ones.

“I’m not sure if anything is so lacking that it requires revisiting this particular document,” Cynthia Khoo, associate at the Georgetown Law Center on Privacy & Technology, told FedScoop. “Especially given the role that it’s meant to play, which is just providing an overarching framework from which further, more specific guidance will emerge.”

NIST’s IT Laboratory intends to host a workshop and release a draft version of a practical guide on managing a variety of risks, including bias, across the AI Risk Management Framework for public comment in the coming weeks.

Bias is context-dependent, so ITL has also adopted the National Cybersecurity Center of Excellence‘s model to engage with AI companies and establish best practices for identifying, managing and mitigating bias using commercial tools. Those best practices will be compiled within additional guidance working toward a standard.

“For bias guidance, we’re just going to continue to work internally and with the broader community to identify socio-technical governance practices that can eventually become a standard,” said Reva Schwartz, a research scientist with NIST’s Trustworthy and Responsible AI program and SP 1270 coauthor. “So that’s kind of our marching path right now.”

In addition to acknowledging non-technical factors like systemic discrimination contributing to bias, which AI developers can’t simply code against, SP 1270 clarifies the limitation of measures to mitigate bias.

Bias stems from a variety of sources like errors in test datasets — even 3% to 4% error can diminish a model’s performance on actual data — or mathematical technology that only approximates reality, and placing humans in the loop only introduces cognitive limitations.
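One simplified way to see why a few percentage points of bad labels matter: when the labels a model is scored against (or tuned against) are wrong a few percent of the time, the measured performance drops and decisions based on it inherit the distortion. A toy calculation with made-up numbers:

```python
def measured_accuracy(true_accuracy: float, label_error_rate: float) -> float:
    """Coarse estimate of accuracy measured against a noisy test set: a prediction
    only scores as correct when the model is right and the label is clean
    (ignoring the rare case where both are wrong in the same way)."""
    return true_accuracy * (1 - label_error_rate)

# A model that is truly 95% accurate, scored on a test set with 4% mislabeled examples
print(round(measured_accuracy(0.95, 0.04), 3))  # 0.912, a visible drop from 0.95
```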

“Do their biases attenuate each other or amplify each other?” asked Apostol Vassilev, ITL research team lead and SP 1270 coauthor. “That’s an open question; no one seems to know that, and yet people assume that if you throw an innocent person in there all the problems go away.”

Unresolved, unresearched problems like that are a “call to action” for NIST, Vassilev added.

SP 1270 warns of the dangers of techno-solutionism — relying on AI as a quick, foolproof fix for complex, real-world problems — an aspect of the guidance Khoo and other experts pushed for in their comments on the draft version.

An AI tool without an underlying process or business purpose will only amplify biases and, in particular, problematic bias, Schwartz said.

“We know there’s great potential for AI,” she said. “We know it’s a net positive, but to unlock its full potential we can’t just place it in high-risk locations, especially in the federal government.”

Khoo praised SP 1270 for encouraging the consultation of subject matter experts (SMEs) beyond computational and software engineers, like doctors, when developing automated technologies for medical diagnoses, and historically marginalized groups. Human-centered design requires SME involvement from an AI project’s outset and a common language across disciplines, Schwartz said.

A final, unique feature of SP 1270 is its analysis of how historical context and systems of power — and who holds that power among developers, data scientists and policymakers — influence algorithmic bias, Khoo said.

“Governance is a key factor, especially within a socio-technical frame,” Schwartz said. “So what goes into the organizational decisions that are made about AI systems, specifically in government agencies, when they reach out to vendors or internally to have AI developed.”

That makes organizations’ adherence to federal guidance on managing bias in AI all the more important, she added.

Agencies are increasingly launching AI systems for sensitive situations like deploying drones and managing and monitoring weapons, yet often those systems are validated in optimized conditions rather than real-world ones. SP 1270’s success will be gauged based on agencies’ ability to use it to avoid launching biased AI systems.

“To evaluate the effectiveness of this document, it’s almost a matter of the proof in the technological pudding,” Khoo said. “By the time further guidance has come out, will this framework actually prevent the development or deployment of harmful or discriminatory algorithmic technologies, and I think that’s ultimately what it has to be judged on.”

GSA won’t use facial recognition with Login.gov for now https://fedscoop.com/gsa-forgoes-facial-recognition-for-now/ Wed, 09 Feb 2022 18:18:20 +0000 https://fedscoop.com/?p=47507 The agency's secure sign-in team continues to research the technology and to conduct equity and accessibility studies.

The General Services Administration won’t use facial recognition to grant users access to government benefits and services for now, but its secure sign-in team continues to research the technology.

“Although the Login.gov team is researching facial recognition technology and conducting equity and accessibility studies, GSA has made the decision for now not to use facial recognition, liveness detection, or any other emerging technology in connection with government benefits and services until rigorous review has given us confidence that we can do so equitably and without causing harm to vulnerable populations,” said Dave Zvenyach, director of GSA’s Technology Transformation Services (TTS), in a statement provided to FedScoop.

“There are a number of ways to authenticate identity using other proofing approaches that protect privacy and ensure accessibility and equity.”

Login.gov ensures users are properly authenticated for agencies’ services and verifies identities, and the TTS team that manages it is also studying facial recognition equity and accessibility.

GSA’s methodical evaluation of the technology contrasts with that of the IRS, which announced Monday that it would transition away from using ID.me’s service for verifying new online accounts after the company disclosed it had misrepresented its reliance on 1:many facial recognition — a system proven to pose greater risks of inaccuracy and racial bias.

Login.gov currently collects a photo of a state-issued ID and other personally identifiable information, which are validated against authoritative data sources. The last step involves either sending a text message to the user’s phone number or a letter to their address containing a code that must be provided to Login.gov to complete identity verification.
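That flow reads roughly like the sketch below. It is a hypothetical illustration of the steps described above, not Login.gov’s implementation; every function name and data source here is an assumption, with stubs standing in for the real checks.

```python
import secrets
from typing import Optional

def validate_document(id_photo: bytes) -> bool:
    return bool(id_photo)  # stub: would inspect the photo of the state-issued ID

def matches_authoritative_sources(pii: dict) -> bool:
    return bool(pii.get("name") and pii.get("address"))  # stub: would query record sources

def verify_identity(id_photo: bytes, pii: dict, contact: dict) -> Optional[str]:
    """Illustrative proofing flow: validate the ID and PII, then issue a one-time
    code delivered by text message or mailed letter for the user to echo back."""
    if not (validate_document(id_photo) and matches_authoritative_sources(pii)):
        return None
    code = f"{secrets.randbelow(1_000_000):06d}"
    channel = "text message" if "phone" in contact else "mailed letter"
    print(f"Sending verification code by {channel}")  # stand-in for actual delivery
    return code

code = verify_identity(b"id.jpg", {"name": "A. Person", "address": "1 Main St"},
                       {"phone": "555-0100"})
print("awaiting code entry" if code else "verification failed")
```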

More than 60 applications across 17 agencies — including USAJOBS at the Office of Personnel Management and the Paycheck Protection and Disaster Loan Application programs at the Small Business Administration — use Login.gov, encompassing more than 17 million users.

GSA’s rejection of facial recognition for Login.gov was first reported by The Washington Post, but the technology is most certainly in the agency’s, and the government’s, future.

The White House Office of Science and Technology Policy is crafting an Artificial Intelligence Bill of Rights to protect people from technology infringements and focused its initial request for information on biometrics like facial recognition.

While OSTP’s definition of biometrics needs refining, not all facial recognition algorithms are prejudicially biased. Technical and operational biases also exist and don’t necessarily lead to inequitable outcomes.

“There are not direct correlations between technical and operational biases and prejudicial bias,” Duane Blackburn, science and technology lead at MITRE‘s Center for Data-Driven Policy, told FedScoop in January. “Even though in a lot of policy analyses they’re treated as equivalent.”

USCIS pursuing broader data-sharing agreements with other agencies https://fedscoop.com/uscis-broader-data-sharing/ https://fedscoop.com/uscis-broader-data-sharing/#respond Wed, 13 Oct 2021 20:33:34 +0000 https://fedscoop.com/?p=44124 Existing agreements are often old and fail to account for agencies' need to repurpose and reuse the data.

U.S. Citizenship and Immigration Services needs to establish broader data-sharing agreements with partner agencies so all appropriate groups have access for automating aspects of the naturalization system, according to officials.

U.S. Citizenship and Immigration Services (USCIS) isn’t the only player in the “grand life cycle of immigration,” and often the State Department or Customs and Border Protection interact with people — collecting documentation or facial recognition data — before they enter the country.

Existing data-sharing agreements or memorandums of understanding (MOUs) may date back decades and cover specific uses, failing to account for how much agencies like USCIS need to repurpose and reuse that data.

“There are too many MOUs out there that say only this system can talk to explicitly only this system, but it’s like wait, wait, wait, wait five other groups in our agency need the exact same data,” said Damian Kostiuk, chief director of USCIS’s Data Analytics Division, during an ACT-IAC webinar Tuesday. “No, don’t lock me out when this is critical; if I had it I could automate and help people get through the system faster.”

USCIS is working toward semantic models that link disparate data from multiple agencies, rather than rely on a single datapoint like facial recognition. Biographic and fingerprint data can also help triangulate a person’s identity, and their original interaction with the government may have been when the State Department verified a marriage certificate that can be digitized.

The Office of the Chief Data Officer within USCIS is standing up data quality and management programs targeting external, “golden” data key to identity verification held by other agencies.

“If we can get these to work together through data-sharing agreements, honestly I’m really excited about where we’ll be and what we can do to try and remove, as much as possible, bias from these algorithms,” Kostiuk said. “We’ll just have such a huge pool of information to be able to train them, as opposed to them being trained on various, specific subsets.”

Bias in facial recognition and other biometrics data is one of several challenges USCIS must overcome before the technologies can be used to help adjudicate benefits like green cards.

Prior to the pandemic, USCIS was working on a pilot using facial recognition to remotely verify identities. That pilot is being dusted off.

“Right now USCIS, by policy, is not currently allowed to use facial recognition for any part of the process that involves adjudication of a benefit,” said Ryan Koder, chief of biometrics and scheduling. “That’s stuff that … is going to have to change.”

The Customer Profile Management System (CPMS) is USCIS’s centralized repository for all biometric data captured from immigration applicants and allowing for identity management in the form of background checks, re-checks and card production. USCIS wants to match a facial recognition image collected using the CPMS app on a mobile device with images in its catalogue, a “gold standard,” said Timothy Murray, acting chief of the Information Records and National Security Delivery Division.

Most matches have a high level of confidence, but USCIS must address other considerations like bias; certain ethnicities haven’t matched well. The good news is other Department of Homeland Security agencies have deployed mobile apps doing similar things, and its Science and Technology Directorate is working to isolate and measure specific features that lead to bias in order to come up with a response policy.
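Measuring that kind of differential performance usually starts with comparing match outcomes across groups. The sketch below is an assumed illustration, not DHS S&T’s method: it computes a false non-match rate per group from logged genuine match attempts.

```python
from collections import defaultdict

def false_non_match_rates(attempts):
    """Given logged genuine match attempts as (group, matched) pairs, return the
    share of genuine attempts that failed to match, per group."""
    totals, misses = defaultdict(int), defaultdict(int)
    for group, matched in attempts:
        totals[group] += 1
        if not matched:
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

# Hypothetical logs: each tuple is (demographic group, did the genuine match succeed?)
log = [("group_a", True)] * 97 + [("group_a", False)] * 3 \
    + [("group_b", True)] * 90 + [("group_b", False)] * 10
print(false_non_match_rates(log))  # a gap between groups would warrant investigation
```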

USCIS must also come up with policies for data protection involving encryption; for usability, so other apps can leverage the biometrics being created; and for fraud, addressing the “considerable threat” of deepfakes, which are no longer perpetrated solely by state actors, Murray said.

A final consideration is privacy and ensuring biometrics collected are used only for their intended purpose.

USCIS’s data-sharing agreements “absolutely go through the wringer” with general counsel and privacy officers to identify those purposes, while the agency’s data management program ensures flags, or else explicit controls, are in place to prevent misuse, Kostiuk said. The next step is parsing anonymized but statistically relevant data among different groups.

Civil rights organizations want nondiscrimination steps laid out in NIST’s AI guidance https://fedscoop.com/civil-rights-organizations-nist-ai/ https://fedscoop.com/civil-rights-organizations-nist-ai/#respond Mon, 13 Sep 2021 19:55:43 +0000 https://fedscoop.com/?p=43638 New AI system initiatives should be reviewed for potentially illegal discriminatory treatment or effects, advocacy groups say.

A group of civil rights, tech and other advocacy organizations called for the National Institute of Standards and Technology to recommend steps needed to ensure nondiscriminatory and equitable outcomes for all users of artificial intelligence systems in the final draft of its Proposal for Identifying and Managing Bias with AI.

The definition of model risk — traditionally thought of as the risk of financial loss when inaccurate AI models are used — should be expanded to include the risk of discriminatory and inequitable outcomes, wrote the group in its Friday response to NIST’s draft proposal.

NIST released the proposal for public comment on June 22 with the goal of helping AI designers and deployers mitigate social biases throughout the development lifecycle. But the letter from 34 organizations, including the NAACP and the Southern Poverty Law Center along with many groups in the housing and consumer credit space, makes 12 recommendations for improvements to NIST’s proposal and process.

“It is critically important for NIST to propose a framework that ensures that AI risk analysis works hand in hand with discrimination risk analysis. Moreover, efforts to identify AI risks must not exclude or undermine efforts to promote fair and equitable outcomes,” said Michael Akinwumi, chief tech equity officer at the National Fair Housing Alliance, in a statement. “Any new AI-related initiatives should be reviewed for potentially illegal discriminatory treatment or effect for communities of color and other underserved communities.”

Other proposal enhancements the group suggests include issuing actionable policy statements committing to consumer protection and civil rights laws and outlining expectations and best practices, as well as encouraging AI developers to employ a diverse workforce.

They are also calling on NIST to recommend regular civil rights and equity training to help personnel catch red flags and to ensure that AI providers are transparent about AI systems and their impact.

On the process side, the group called for NIST to issue a detailed action plan and engage with civil rights activists, consumer advocates and impacted communities.

NIST’s own staff should include people specializing in civil rights to assist agencies and organizations in assessing the potential discriminatory impact of their systems. Staffers working on AI issues should be diverse, according to the letter.

The group further recommended NIST share its methods, data, models, decisions and solutions openly.

Public research analyzing AI use cases and their impact on people and communities of color and other protected classes should be supported by NIST, the letter concludes.

GAO issues AI accountability framework for agencies https://fedscoop.com/gao-ai-accountability-framework-release/ https://fedscoop.com/gao-ai-accountability-framework-release/#respond Fri, 02 Jul 2021 19:06:35 +0000 https://fedscoop.com/?p=42531 The framework comes as GAO initiates investigations pertaining to national and homeland security and justice that involve AI.

The Government Accountability Office has released its much-anticipated artificial intelligence accountability framework in an effort to oversee how agencies are implementing the emerging technology.

GAO‘s framework describes key practices across four parts of the development life cycle — governance, data, performance and monitoring — to help agencies, industry, academia and nonprofits responsibly deploy AI.

Agencies’ inspectors general (IGs), legal counsels, auditors and other compliance professionals needed a framework to conduct their own credible assessments of AI notwithstanding congressional audit requests.

“The only way for you to verify that your AI, in fact, is not biased is through independent verification, and that piece of the conversation has been largely missing,” Taka Ariga, chief data scientist at GAO, told FedScoop. “So GAO, given our oversight role, decided to take a proactive step in filling that gap and not necessarily wait for some technology maturity plateau before we addressed it.”

Otherwise, GAO would always be playing catch-up, given the speed at which AI is advancing, Ariga added.

AI systems are made up of components like machine-learning models that must operate according to the same mission values. For instance, self-driving cars with their cameras and computer vision are systems of systems all working to ensure passenger safety, and it falls not only to auditors but ethicists and civil liberties groups to discuss both their performance and societal impacts.

“We want to make sure that oversight is not being treated as a compliance function,” Ariga said. “There are complicated risks around privacy, complicated risks around technology, around procurement and around disparate impacts.”

GAO’s framework, released Wednesday, is a “forward-looking” way to address those risks absent a standard risk-management framework specific to AI, he added. The agency wants to ensure risk management, oversight and implementation co-evolve as the technology advances to what the Defense Advanced Research Projects Agency calls Wave 3: contextual adaptation, where AI models explain their decisions to drive further decisions.

Another goal of the framework is to include a human-centered element in AI deployment.

With agencies already procuring AI solutions, GAO’s framework makes requirements, documentation and evaluation inherently governmental functions. That’s why every practice outlined includes a set of questions for oversight bodies, auditors and third-party assessors to ask, in addition to procedures for the latter two groups.

The rights to audit AI, inspect models and access data are critical to their efforts.

“It will be detrimental long term if vendors are able to shield the intellectual property aspects of the conversation,” Ariga said.

Attempts to audit AI have already occurred, most notably the Department of Defense‘s effort when the Joint AI Center was created in 2018. But DOD ran into issues because there was no standard definition of AI, and it lacked AI inventories to assess. Fast forward to the present day, and many companies now offer AI and algorithmic assessments.

GAO is already using its new framework to investigate various AI use cases, and other agencies’ IGs have expressed interest in using it, too.

“The timing is great because we actually have a number of ongoing engagements in national security, in homeland security, in the justice domain that involve AI,” Ariga said.

The framework will evolve over time, possibly into an AI scorecard for agencies — an idea proposed by former Rep. Will Hurd, R-Texas, in September.

Google and the JAIC are considering AI model or data cards, while nonprofits have proposed something more akin to a nutrition label, but GAO’s framework doesn’t prescribe a particular accountability method; rather, it evaluates the rationale behind whatever mechanism is chosen.

Future iterations of the framework will also ask what transparency and explainability mean for different AI use cases. From facial recognition to self-driving cars to application-screening algorithms to drug development, each carries with it varying degrees of privacy and technology risk.

People won’t need a justification for every turn a self-driving car makes, but they’ll eventually want to know why, to the nth degree, an algorithm is flagging an MRI as anomalous in a cancer diagnosis.

“We knew having to do individual use case nuances would’ve taken us decades before we could ever issue something like this,” Ariga said. “So we decided to focus on common elements of all AI development.”

At the same time, departments like Transportation and Veterans Affairs have started collaborating to develop their AI strategies, given their shared workforce, infrastructure, development and procurement issues, even though the former’s focus is safety and the latter’s is customer service.

In developing the framework, Ariga said he was “surprised” to find not everyone in government is on board with the notion of accountable AI.

Undergraduate data scientists don’t always receive ethics training and are instead taught to prioritize accuracy, performance and confidence. They carry that perspective with them into government jobs developing AI code, only to have people tell them to eliminate bias for the first time, Ariga said.

At the same time, a competing camp argues data scientists shouldn’t shape the world as it should be but rather reflect the one they live in, and that AI bias and disparate impacts are someone else’s problem.

Ariga’s team kept that disagreement in mind, while engaging with government and industry AI experts and oversight officials, to avoid placing an undue onus on any one group while developing GAO’s framework.

Government will eventually need to provide additional AI ethics training to data scientists as part of workforce and implementation risk management, training that academic institutions will likely adopt — much the same way medical ethicists came about, Ariga said.

“Maybe not tomorrow but certainly in the near future because, at least in the public sector domain, our responsibility to get it right is so high,” he said. “A lot of these AI implementations actually do have life or death consequences.”
