Department of Justice (DOJ) Archives | FedScoop
https://fedscoop.com/tag/department-of-justice-doj/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

FBI’s AI work includes ‘Shark Tank’-style idea exploration, tip line use case
https://fedscoop.com/fbis-ai-work-includes-shark-tank-style-idea-exploration-tip-line-use-case/
Wed, 05 Jun 2024 18:44:25 +0000

Adopting AI for its own work — such as the FBI’s tip line — and identifying how adversaries could be using the technology are both in focus for the agency, officials said.

The FBI’s approach to artificial intelligence ranges from figuring out how bad actors are harnessing the growing technology to adopting its own uses internally, officials said Tuesday, including through a “Shark Tank”-style model aimed at exploring ideas.

Four FBI technology officials who spoke at a GDIT event in Washington detailed the agency’s focus on promoting AI innovations where those tools are merited — such as in its tip line — and on ensuring that any use can meet the law enforcement agency’s need for technology it can later defend in court.

In the generative AI space, the pace of change in models and use cases is a concern when the agency’s “work has to be defensible in court,” David Miller, the FBI’s interim chief technology officer, said during the Scoop News Group-produced event. “That means that when we deploy and build something, it has to be sustainable.”

That Shark Tank format, which the agency has noted it’s used previously, allows the FBI to educate its organization about its efforts to explore the technology in a “safe and secure way,” centralize use cases, and get outcomes it can explain to leadership.

Under the model, named for the popular ABC show “Shark Tank,” Miller said the agency has put in place a 90-day constraint to prove a concept; at the end, the agency has “validated learnings” about cost, missing skill sets that are needed, and any concerns about integrating the technology into the organization.

“By establishing that director’s innovation Shark Tank model, it allows us to have really strategic innovation in doing outcomes,” Miller said. 

Some AI uses are already being deployed at the agency.

Cynthia Kaiser, deputy assistant director of the FBI’s Cyber Division, pointed to the agency’s use of AI to help manage the FBI tip line. That phone number serves as a way for the public to provide information to the agency. While Kaiser said there will always be a person taking down concerns or tips through that line, she also said people can miss things. 

Kaiser said the FBI is using natural language processing models to go over the synopsis of calls and online tips to see if anything was missed. That AI is trained using the expertise of people who have been taking in the tips for years and know what to flag, she said, adding that the technology helps the agency “fill in the cracks.” 
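
For readers who want a concrete picture, here is a toy sketch of what such a “second look” over tip synopses might resemble, assuming a scikit-learn TF-IDF pipeline trained on synopses labeled by experienced intake staff. The training examples, field names and threshold are invented for illustration; the FBI’s actual models and features are not public.

```python
# Minimal sketch of an NLP "second look" over tip synopses: a TF-IDF +
# logistic-regression pipeline trained on hypothetical synopses labeled
# by experienced intake staff (1 = should be flagged, 0 = routine).
# All data and the threshold below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "caller reported neighbor stockpiling weapons and making threats",
    "caller asked about office hours",
    "online tip describing plan to damage a federal building",
    "complaint about a noisy party",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def second_look(synopses, threshold=0.5):
    """Return synopses the model suggests a reviewer may have missed."""
    probs = model.predict_proba(synopses)[:, 1]
    return [(s, p) for s, p in zip(synopses, probs) if p >= threshold]

tip = "anonymous tip describing weapons stockpiling and threats against a school"
for synopsis, score in second_look([tip]):
    print(f"flag for review ({score:.2f}): {synopsis}")
```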

According to the Justice Department’s use case inventory for AI, that tool has been used since 2019, and is also used to “screen social media posts directed to the FBI.” It is one of five uses listed for the FBI. Other disclosed uses include translation software and Amazon’s Rekognition tool, which has attracted controversy in the past for its use as a facial recognition tool.

To assess AI uses and whether they’re needed, the officials also said the agency is looking to its AI Ethics Council, which has been around for several years.

Miller, who leads that body, said the council includes membership from across the agency, including the science and technology and human resources branches, as well as the offices for integrity and compliance and for diversity, equity and inclusion. Currently, the council is going through what Miller called “version two,” in which it’s tackling scale and doing more “experimental activities.” 

At the time it was created, Miller said, the panel established a number of ethical controls similar to that of the National Institute of Standards and Technology’s Risk Management Framework. But he added that it can’t spend “weeks reviewing a model or reviewing one use case” and has to look at how it can “enable the organization to innovate” while still taking inequities and constraints into account. 

Officials also noted that important criteria for the agency’s own use of the technology are transparency and consistency. 

Kathleen Noyes, the FBI’s section chief of Next Generation Technology and Lawful Access, said on Tuesday that one of the agency’s requests for industry is that systems “can’t be a black box.”

“We need some transparency and accountability for knowing when we’re invoking an AI capability and when we’re not,” Noyes said.

She said the FBI started with a risk assessment in which it analyzed its needs and use cases to assist with acquisition and evaluation. “We had to start strategic — I think everyone does,” she said, adding that the first question to answer is “are we already doing this?”

At the same event, Justin Williams, deputy assistant director for the FBI’s Information Management Division, also noted that an important question when they’re using AI is whether they can explain the interface.

“I personally have used a variety of different AI tools, and I can ask the same question and get very similar but different answers,” Williams said. But, he added, it wouldn’t be good for the FBI if it can’t defend the consistency in the outputs it’s getting. That’s a “big consideration” for the agency as it slowly adopts emerging technologies, Williams said.

DOJ seeks public input on AI use in criminal justice system
https://fedscoop.com/doj-seeks-input-on-criminal-justice-ai/
Wed, 24 Apr 2024 21:36:41 +0000

The department’s research, development and evaluation arm will use the information as it puts together a report on AI in the criminal justice system due later this year.

The Justice Department’s National Institute of Justice is looking for public input on the use of artificial intelligence in the criminal justice system.

In a document posted for public inspection in the Federal Register on Wednesday, the research, development and evaluation arm of the department said it’s seeking feedback to “inform a report that addresses the use of artificial intelligence (AI) in the criminal justice system.” Those comments are due 30 days after the document is published.

That report is among the actions intended to strengthen AI and civil rights that President Joe Biden included in his October 2023 executive order on the technology. According to the order, its aim is to “promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all.”

Ultimately, the report is required to address the use of the technology throughout the criminal justice system — from sentencing and parole to policing surveillance and crime forecasting — as well as identify areas where AI could benefit law enforcement, outline recommended best practices, and make recommendations to the White House on additional actions. 

The DOJ must also work with the Homeland Security secretary and the director of the Office of Science and Technology Policy on that report, and it’s due 365 days after the order was issued.

DOJ ‘not aware of any’ identity theft, fraud following consultant’s data breach
https://fedscoop.com/doj-not-aware-of-identity-theft-following-consultant-breach/
Thu, 11 Apr 2024 20:19:32 +0000

The Justice Department, which provided the Medicare information to Greylock McKinnon Associates as part of a civil litigation matter, was notified of the breach in May 2023, a DOJ spokesperson said.

A data breach that exposed Medicare information — including Social Security numbers — provided to consulting firm Greylock McKinnon Associates by the Justice Department doesn’t appear to have resulted in identity theft or fraud yet, according to a statement from the agency.

“While the Justice Department is not aware of any specific reports of identity theft or other fraud resulting from this incident, the Department has ensured that those impacted have been offered fraud resolution services and credit monitoring,” Wyn Hornbuckle, a DOJ spokesperson, said in an email to FedScoop. “The investigation of this matter is ongoing.”

The response from the DOJ follows a public disclosure of the Boston-based consulting firm’s breach last week on the Office of the Maine Attorney General’s website. According to that disclosure, first reported by TechCrunch, Greylock McKinnon Associates experienced a cyberattack in May 2023 that likely compromised Medicare information of 341,650 people, including their Social Security numbers. 

That information was obtained by the Justice Department “as part of a civil litigation matter” and given to the firm, which provides litigation support, in its “provision of services to the DOJ in support of that matter,” according to a letter GMA sent to people affected by the incident.

In that letter, GMA said it “detected unusual activity on our internal network” last May and “promptly took steps to mitigate the incident.” The firm said it worked with a third-party cybersecurity specialist in its response, notified DOJ and law enforcement, and in February, received confirmation of who was affected and their contact information. 

Hornbuckle said the firm notified the DOJ of the breach in May, “after which the Department required that Greylock identify those affected and immediately began its own process to address the breach.”

GMA could not be reached for comment. 

Peaceful protests, lawful assembly can’t be sole reason for DOJ facial recognition use under interim policy
https://fedscoop.com/doj-shares-interim-facial-recognition-policy-details/
Wed, 27 Mar 2024 15:17:08 +0000

The Justice Department shared details of its interim facial recognition technology policy in testimony to the U.S. Commission on Civil Rights, which is looking into federal use of that capability.

Activities protected under the First Amendment, such as peaceful protests and lawful assembly, “may not be the sole basis for the use of” facial recognition technology under the Justice Department’s interim policy governing its deployment of the technology, the agency told a civil rights panel. 

In written testimony submitted to the U.S. Commission on Civil Rights last week, the DOJ shared details of its approach to using facial recognition technology, or FRT, including its interim policy, which it issued in December but hasn’t shared publicly. The testimony came a couple of weeks after the civil rights panel held a briefing on federal use of facial recognition technology at which the DOJ neither testified in person nor submitted advance testimony.

“Notably, the Interim FRT Policy mandates that activity protected by the First Amendment may not be the sole basis for the use of FRT,” the DOJ said in its testimony. “This would include peaceful protests and lawful assemblies, or the lawful exercise of other rights secured by the Constitution and laws of the United States.”

Additionally, the interim policy states that “FRT results alone may not be relied upon as the sole proof of identity,” the DOJ said. It also requires that facial recognition technology complies with the department’s AI policies and that employees never use the technology to “engage in or facilitate unlawful discriminatory conduct,” in addition to requiring risk assessments for the accuracy of facial recognition systems used by the department.

The interim policy could also lead to public disclosures of certain information about use of the technology at the department. Components using facial recognition systems are required to “develop a process to account for and track system use” under the interim policy and report on that use annually to the DOJ’s Emerging Technology Board, which was established to oversee the department’s use of AI and emerging technology, and its Data Governance Board. 

“Without compromising law-enforcement sensitive or national security information, each of these annual reports will be consolidated into a publicly released summary on the Department’s FRT use,” the testimony said.

The commission’s March 8 briefing explored federal use of facial recognition technology at DOJ, the Department of Homeland Security, and the Department of Housing and Urban Development as it prepares a report. Adoption of the technology in the federal government has prompted concerns about privacy and civil liberties, including from lawmakers and academics.

Neither the DOJ nor HUD participated in the hearing, and DOJ’s lack of participation, in particular, prompted two commissioners to indicate they were willing to use subpoena power to compel production of information. At the time of the briefing, a DOJ spokesperson told FedScoop it was communicating with the commission about a response. 

A Government Accountability Office review of facial recognition systems in the government found that agencies, including the DOJ, didn’t have policies specific to the use of the technology and initially didn’t require training. That report found that the DOJ had “taken steps to issue a department-wide policy” but “faced delays.” The GAO ultimately recommended, among other things, that the attorney general develop a plan for issuing a policy that addresses civil rights and civil liberties.

In testimony to the commission at its briefing, GAO’s Gretta Goodwin said the department informed the government watchdog that it had issued an interim policy but the GAO hadn’t yet seen that policy. Goodwin, who directs the watchdog’s Homeland Security and Justice team, said the GAO plans to review the interim policy as part of its follow-up process on the recommendation.

The description of the interim policy in the department’s testimony appears to address some of GAO’s findings. For example, the DOJ said that the policy mandates that employees using those systems receive training that includes information about privacy, civil rights and civil liberties laws relevant to the use of facial recognition technology. 

While the department acknowledged potential equity and fairness implications of the technology, it also underscored the potential benefits. According to the testimony, facial recognition technology was used by the FBI over the last year to combat crime, find missing children, and address threats on the border. The U.S. Marshals Service also uses the technology for investigations and protective security missions, DOJ said. 

“When employed correctly, FRT affirmatively strengthens our public safety system,” the DOJ said. 

The interim policy was created by a working group within the department that met throughout 2022 and 2023. That group included legal experts and subject matter experts throughout the DOJ. The interim policy will be updated after the department completes an interagency report on best practices required under President Joe Biden’s executive order on policing, the DOJ said.

Tech issues are part of the problem — and solution — for FOIA backlog, GAO finds
https://fedscoop.com/foia-backlog-technical-problems-gao-report/
Fri, 15 Mar 2024 15:35:24 +0000

A new report from the congressional watchdog finds a host of technical problems plaguing FOIA officers, who want standardized tech upgrades to help reduce a backlog that rose from 14% to 22% over nearly a decade.

The ever-increasing backlog of Freedom of Information Act requests for federal agencies is due in part to technological issues facing the workers charged with fulfilling them, a new Government Accountability Office report found. 

The congressional watchdog, tasked by a bicameral, bipartisan group of lawmakers to investigate FOIA response delays, found that the backlog jumped from 14% in 2013 to 22% in 2022, with the growing complexity of requests, staffing shortages and increasing threats of litigation also cited as impediments to the work.

A host of tech-related problems, including with FOIA request management systems and other processing tools, came up regularly during the January 2023 to March 2024 performance audit by the GAO, which conducted four virtual focus groups with senior officials representing 23 Chief Financial Officers Act agencies. 

“As technology has developed, we are able to store so many more records than in the past,” a senior FOIA agency official told the GAO. “When we search large volumes of data, we receive tons of records back that are potentially relevant. Unfortunately, we don’t have the budget to invest in sophisticated software that would help us to review the volume of records that we receive. So at the end of the day, it’s one person that’s having to review tens of thousands of records for potential responsiveness, which is a huge issue. It takes a lot of time.”
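
To make the review problem concrete, here is a minimal illustration, not any agency’s actual tooling, of how software could triage that volume: rank retrieved documents by textual similarity to the request so a lone reviewer starts with the likeliest-responsive records. It assumes scikit-learn, and the request and document texts are invented.

```python
# Illustrative triage for a FOIA responsiveness review: rank retrieved
# documents by cosine similarity to the request text so a single
# reviewer sees the likeliest-responsive records first. Sample texts
# are invented; this is a sketch, not production software.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

request = "emails discussing the 2022 grant award decision"
documents = [
    "re: grant award decision timeline for fiscal 2022",
    "cafeteria menu for the week of March 7",
    "draft memo on 2022 grant scoring criteria",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([request] + documents)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Review queue, most likely responsive first.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```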

Another focus group participant called out requests related to email records or internal communications, such as chat and text, as their agency’s “most complex and time-intensive responses.” The large volume of records combined with coordination challenges with other agencies presents a legitimate problem, the official said.

When pressed by the GAO on how agencies could overcome technical challenges and pare down backlogs, some officials pointed to governmentwide adoption of technology upgrades, ensuring that FOIA offices use identical systems to streamline document review and general coordination. Standardization in tech upgrades will become increasingly important as agencies deal with a proliferation of agency records in electronic formats, officials added.

While standardized tech upgrades to support FOIA requests haven’t materialized yet, agency officials who have leveraged new technologies have seen a marked difference in their efficiencies. 

“We now have the ability in our FOIA office to search our agency’s email system,” one focus group participant said. “Before we were using our information technology staff to provide that service, where we would request that they run searches. They would use key terms, then come back to us, and we would have to go back and forth with different search terms, until we got it right. The ability for us to do the search has enabled us to finish these in a more timely and efficient way, which we couldn’t do previously.” 

Another official said their agency uses technology to remove “duplicative entries in extremely large volumes of responsive documents,” while another spoke of going from five different systems to process requests across multiple bureaus and offices to just one, “and that’s streamlined and automated a lot of our processes,” they said.
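
Exact-duplicate removal of the sort that official describes can be as simple as hashing normalized text. A minimal sketch with invented documents follows; real deduplication tools also catch near-duplicates, which this deliberately simple version does not.

```python
# Sketch of exact-duplicate removal across a responsive document set
# by hashing whitespace- and case-normalized text. Documents here are
# invented; near-duplicate detection is out of scope for this sketch.
import hashlib

def dedupe(documents):
    seen, unique = set(), []
    for doc in documents:
        # Normalize whitespace and case so trivial variants collapse.
        normalized = " ".join(doc.lower().split())
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

batch = [
    "Meeting notes, Jan 4",
    "meeting   notes, jan 4",  # trivial variant of the first
    "Budget summary FY24",
]
print(dedupe(batch))  # -> ['Meeting notes, Jan 4', 'Budget summary FY24']
```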

The FOIA office within Customs and Border Protection and the agency’s IT leaders were cited specifically by the GAO for their use of robotic process automation, a new technology that allowed staff to more “quickly search for records with specific criteria, and complete simple, routine FOIA processing tasks.” Per the Department of Homeland Security, CBP closed more than 12,400 simple requests thanks to the RPA tool, saving FOIA staffers over 1,500 hours of work. 

The GAO offered four recommendations to the Department of Justice, whose Office of Information Policy, along with the Office of Government Information Services, provides support and resources to agency FOIA offices. Those recommendations ask the Attorney General to direct OIP to issue guidance to agencies on effective backlog reduction plans, advise agencies to identify staff- and skill-related support efforts, develop a process to examine data reporting, and update training materials and related agency reporting standards.

The OIP has previously recommended that agencies “use enhanced technology to process requests,” per the GAO, pushing FOIA staff to partner with agency IT leaders to assess the efficacy and cost of new tools and to acquire “proper FOIA case management systems to automate the request intake process.”

On Capitol Hill, lawmakers applauded the release of the GAO’s report, which comes in the aftermath of the 2015 bipartisan FOIA Oversight and Implementation Act and was timed for release during Sunshine Week.

“An increasing backlog of FOIA requests compromises the already dwindling trust the American people have in their government,” House Oversight and Accountability ranking member Jamie Raskin, D-Md., said in a statement. The GAO’s report “not only outlines the key challenges that agencies face in their efforts to process FOIA requests in a timely manner, but it also highlights a pathway to transparency and regaining public trust in our institutions. Congress must ensure agencies have the resources needed to live up to FOIA’s promise.”

Sen. John Cornyn, R-Texas, added that FOIA serves as “a cornerstone” for the country’s “belief in open and transparent government.” The GAO report “should serve as a starting point to reduce the backlog of FOIA requests so Americans can continue to hold those who represent them accountable.”

Civil rights commissioner slams DOJ, HUD absence at facial recognition briefing
https://fedscoop.com/commissioner-slams-doj-hud-facial-recognition-briefing-absence/
Fri, 08 Mar 2024 21:57:41 +0000

Mondaire Jones, a Democratic appointee on the U.S. Commission on Civil Rights, said he would have urged the use of subpoenas if the panel was given “adequate notice” of the agencies’ refusal to cooperate.

The Department of Justice and the Department of Housing and Urban Development allegedly declined to testify at a U.S. Commission on Civil Rights briefing on government use of facial recognition technology Friday, drawing the ire of panel members.

Mondaire Jones, a commissioner on the panel and former Democratic member of the U.S. House of Representatives, skewered both departments for declining the commission’s invitation to appear and not providing written testimony at the Friday briefing. Jones called their absence “offensive” and alleged the departments are “embarrassed by their failures and are seeking to avoid public accountability.”

“I have not seen anything like this from this administration,” Jones said. “And had the commission been given adequate notice of the failure of these departments to cooperate, I would have urged this commission to exercise its statutory authority to issue subpoenas, which is something that we have rarely had to do in the course of this commission’s existence.”

Commissioner J. Christian Adams, a Republican appointee, said that he shared Jones’ concern about the DOJ’s absence and would support efforts to obtain information from them “even if it extends to exercising subpoena power.” 

In an emailed statement, a HUD spokesperson said the department “does not use any facial recognition technology and urges its program participants to find the right balance between addressing security concerns and respecting residents’ right to privacy.”

They added: “HUD is cooperating with the U.S. Commission on Civil Rights and provided answers to the Commission’s extensive interrogatories and document request in advance of the Commission’s briefing on facial recognition technology. HUD plans to submit written testimony and welcomes future opportunities to collaborate where appropriate.”

In an emailed statement, a spokesman for the DOJ told FedScoop the department is “in communication with the Commission about the Department’s response.” 

Lack of testimony from the departments was particularly notable as the hearing set out to specifically focus on the civil rights implications of facial recognition technology use by the DOJ, HUD and the Department of Homeland Security. 

Unlike the other departments, however, DHS did provide in-person testimony, which Jones praised as the department taking “its statutory obligations and the work of this commission seriously.”

Following Jones’ remarks, Adams underscored the impact of the DOJ’s absence, noting that DOJ’s Office of Legal Counsel and Civil Rights Division would be the “primary drivers of any federal policy related to facial recognition technology,” and DHS’s civil rights office is “effectively subservient” to whatever those offices say about this policy, he said.

“Not having them here takes away the central organizing component of the federal government to answer these questions, so I support you and your concern and whatever steps you think are appropriate going forward,” said Adams, who is the president and general counsel of the Public Interest Legal Foundation and a former DOJ Voting Section attorney.

The Friday hearing comes as use of facial recognition technology in the federal government has prompted concerns about privacy and civil liberties, including from lawmakers and academics. A Government Accountability Office report last year found that seven agencies had initially used the technology without requiring staff to take training, and that some agencies didn’t have policies addressing civil rights and civil liberties protections.

Just last month, the National AI Advisory Committee’s Law Enforcement Subcommittee sought to improve agency disclosures of such technologies in their AI use case inventories by approving proposed edits to clarify exclusions.

In opening remarks, Chair Rochelle Garza, a Democratic appointee, noted both the potential benefits of the technology and threats it poses to fundamental rights. She said the briefing marked the commission’s “first step towards investigating the breadth of the challenges that FRT may pose.”

The meeting featured testimony from representatives of the GAO, the White House, Clearview AI, subject matter experts, and federal and state law enforcement, and covered topics such as the capabilities and harms of facial recognition technology and guidance for federal oversight. In addition to the meeting, the commission is accepting public comments until April 8 as it prepares its report.

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions
https://fedscoop.com/ai-advisory-law-enforcement-use-case-recommendations/
Wed, 28 Feb 2024 18:12:01 +0000

The National AI Advisory Committee’s Law Enforcement Subcommittee voted unanimously to edit CIO Council recommendations on sensitive use case and common commercial product exclusions, moves intended to broaden law enforcement agency inventories.

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omissions of those technologies in the Department of Justice’s AI use case inventory late last year were a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a Law Enforcement Subcommittee member on the National AI Advisory Committee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” resulted in technologies such as automated license plate readers and voice spoofing often being left on the cutting-room floor. 

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”

DOJ picks Princeton computer scientist as its chief AI officer
https://fedscoop.com/doj-chief-ai-officer-jonathan-mayer/
Thu, 22 Feb 2024 21:57:27 +0000

Jonathan Mayer, a former FCC technologist and policy adviser to Kamala Harris, will lead the Justice Department’s artificial intelligence work, including its Emerging Technology Board.

The Department of Justice has tapped Princeton University professor Jonathan Mayer as its first chief artificial intelligence officer and chief science and technology adviser, the agency announced Thursday.

The DOJ’s appointment of Mayer — who teaches in Princeton’s computer science department and in its school of public and international affairs — satisfies the White House AI executive order requirement that each of the Chief Financial Officers Act agencies designate a permanent CAIO. FedScoop has tracked the appointments of those AI officials across agencies.

In a statement announcing Mayer’s selection, Attorney General Merrick Garland said that the DOJ’s mission depends on its ability to “keep pace with rapidly evolving scientific and technological developments.”

“Jonathan’s expertise will be invaluable in ensuring that the entire Justice Department — including our law enforcement components, litigating components, grantmaking entities, and U.S. Attorneys’ Offices — is prepared for both the challenges and opportunities that new technologies present,” Garland added.

As the DOJ’s CAIO, Mayer will oversee the department’s Emerging Technology Board, which is tasked with coordinating and governing AI and other types of emerging tech throughout the agency. More broadly, Mayer — who holds a Ph.D. in computer science and a law degree from Stanford — will lead cross-agency and intra-department efforts on AI and related issues.

The DOJ currently has 15 AI use cases listed in its inventory, including a disclosure covered by FedScoop last month that the FBI is in the “initiation” phase of using Amazon Rekognition, an image and video analysis software. 

Neither the DOJ nor Amazon, which previously issued a moratorium on police use of Rekognition, would confirm to FedScoop at the time if Rekognition’s facial recognition capabilities were accessible to or in use by the FBI.

Other AI use cases revealed by the DOJ in its inventory include a machine translation service for the FBI, a voice transcription to text system for the agency’s Office of the Inspector General, and gunshot detection and identification software for the Bureau of Alcohol, Tobacco, Firearms and Explosives, among others.

Mayer’s move from Princeton — where his tech-, policy- and law-focused research has centered on criminal procedure, national security, and consumer protection — to the DOJ represents a return to public service. From November 2015 to March 2017, he served as chief technologist in the Federal Communications Commission’s Enforcement Bureau. After that, Mayer spent a year in then-Sen. Kamala Harris’s office as a technology law and policy adviser to the California Democrat.   

Department of Justice announces new AI initiative
https://fedscoop.com/justice-ai-doj-new-ai-initiative/
Thu, 15 Feb 2024 21:55:25 +0000

Justice AI will bring together experts to speak about the relationship between artificial intelligence and the criminal justice system.

The Department of Justice is launching an artificial intelligence initiative that will host various interdisciplinary experts for discussions about the technology over the next half-year, and relay those findings in a report to President Joe Biden about AI and the criminal justice system. 

In a speech at the University of Oxford about the opportunities and risks of AI, DOJ Deputy Attorney General Lisa Monaco said that Justice AI will collect information from “individuals from across civil society, academia, science and industry” to help the agency prepare for and understand how the technology will “affect the department’s mission,” ensuring that Justice harnesses AI’s potential while “guarding against its risks.”

Justice AI will include foreign counterparts who are “grappling with many of the same questions,” Monaco said in her remarks, adding that the report to Biden will be delivered at the end of the year. 

“Technological advancements have and always will fundamentally challenge the department’s mission,” Monaco said. “Because at its core, technology impacts how we protect people and how we ensure equal treatment under the law. Our work at the Department of Justice is to make sure that whatever comes now or next adheres to the law and is consistent with our values.”

During CyberScoop’s Zero Trust Summit in Washington, D.C., on Thursday, the department’s CIO, Melinda Rogers, did not speak about the Justice AI announcement, but did address how DOJ is using AI in security applications. 

“It’s actually applying, potentially, artificial intelligence where appropriate with consultation from our private attorneys,” Rogers said. The DOJ wants to “make sure that as we look at this material, how can we leverage information from all this log data that we’ve been collecting? Reality is, historically, we’ve used log data for incident response. If something happens, we go back and look at this, but I think there’s a real opportunity there to see what could be applied so that it’s more instant, more predictive.”
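
For a sense of the shift Rogers describes, from retrospective log review toward something “more instant, more predictive,” here is a toy illustration: flag an event source the moment its activity jumps well past its rolling baseline. The window size, threshold and log values below are invented for the example and do not reflect any DOJ system.

```python
# Toy illustration of "more instant, more predictive" log analysis:
# flag an event source as soon as its rate deviates sharply from its
# rolling baseline, rather than inspecting logs only after an incident.
# Window size, z-score threshold and sample rates are arbitrary.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 20, 3.0
history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(source, events_per_minute):
    """Return True if this reading looks anomalous for the source."""
    past = history[source]
    anomalous = False
    if len(past) >= 5 and stdev(past) > 0:
        z = (events_per_minute - mean(past)) / stdev(past)
        anomalous = z > THRESHOLD
    past.append(events_per_minute)
    return anomalous

for minute, rate in enumerate([12, 11, 13, 12, 10, 11, 12, 90]):
    if observe("auth-server", rate):
        print(f"minute {minute}: anomalous spike of {rate} events")
```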

DOJ currently lists 15 AI use cases in its inventory, as required by a Trump-era executive order.

In January, FedScoop reported on a DOJ AI disclosure regarding an FBI project with Amazon Rekognition, an Amazon Web Services product. The FBI’s Rekognition project is currently in its initiation phase to “customize and identify items containing nudity, weapons, explosives and other identifying information.” 

In response to a request for comment and additional information, DOJ said that it didn’t have anything further to share at this time.

How risky is ChatGPT? Depends which federal agency you ask
https://fedscoop.com/how-risky-is-chatgpt-depends-which-federal-agency-you-ask/
Mon, 05 Feb 2024 17:20:57 +0000

A majority of civilian CFO Act agencies have come up with generative AI strategies, according to a FedScoop analysis.

From exploratory pilots to temporary bans on the technology, most major federal agencies have now taken some kind of action on the use of tools like ChatGPT. 

While many of these actions are still preliminary, growing focus on the technology signals that federal officials expect to not only govern but eventually use generative AI. 

A majority of the civilian federal agencies that fall under the Chief Financial Officers Act have either created guidance, implemented a policy, or temporarily blocked the technology, according to a FedScoop analysis based on public records requests and inquiries to officials. The approaches vary, highlighting that different sectors of the federal government face unique risks — and unique opportunities — when it comes to generative AI. 

As of now, several agencies, including the Social Security Administration, the Department of Energy, and Veterans Affairs, have taken steps to block the technology on their systems. Some, including NASA, have established or are working on secure testing environments to evaluate generative AI systems. The Agriculture Department has even set up a board to review potential generative AI use cases within the agency. 

Some agencies, including the U.S. Agency for International Development, have discouraged employees from inputting private information into generative AI systems. Meanwhile, several agencies, including Energy and the Department of Homeland Security, are working on generative AI projects. 

The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury did not respond to requests for comment, so their approach to the technology remains unclear. Other agencies, including the Small Business Administration, referenced their work on AI but did not specifically address FedScoop’s questions about guidance, while the Office of Personnel Management said it was still working on guidance. The Department of Labor didn’t respond to FedScoop’s questions about generative AI. FedScoop obtained details about the policies of Agriculture, USAID, and Interior through public records requests. 

The Biden administration’s recent executive order on artificial intelligence discourages agencies from outright banning the technology. Instead, agencies are encouraged to limit access to the tools as necessary and create guidelines for various use cases. Federal agencies are also supposed to focus on developing “appropriate terms of service with vendors,” protecting data, and “deploying other measures to prevent misuse of Federal Government information in generative AI.”

Agency policies on generative AI differ

USAID
- Policy or guidance: Neither banned nor approved, but employees discouraged from using private data in a memo sent in April.
- Notes: Didn’t respond to a request for comment. Document was obtained via FOIA.

Agriculture
- Policy or guidance: Interim guidance distributed in October 2023 prohibits employee or contractor use in an official capacity and on government equipment. Established a review board for approving generative AI use cases.
- Risk assessment: A March risk determination by the agency rated ChatGPT’s risk as “high.”
- Notes: OpenAI disputed the relevance of a vulnerability cited in USDA’s risk assessment, as FedScoop first reported.

Education
- Policy or guidance: Distributed initial guidance to employees and contractors in October 2023. Developing comprehensive guidance and policy. Conditionally approved use of public generative AI tools.
- Sandbox: Working with vendors to establish an enterprise platform for generative AI.
- Relationship with generative AI provider: None at the time of inquiry.
- Notes: Agency isn’t aware of generative AI uses in the department and is establishing a review mechanism for future proposed uses.

Energy
- Policy or guidance: Issued a temporary block of ChatGPT but said it’s making exceptions based on needs.
- Sandbox: Sandbox enabled.
- Relationship with generative AI provider: Microsoft Azure and Google Cloud.

Health and Human Services
- Policy or guidance: No specific vendor or technology is excluded, though subagencies, like the National Institutes of Health, prevent use of generative AI in certain circumstances.
- Sandbox: “The Department is continually working on developing and testing a variety of secure technologies and methods, such as advanced algorithmic approaches, to carry out federal missions,” Chief AI Officer Greg Singleton told FedScoop.

Homeland Security
- Policy or guidance: For public, commercial tools, employees must seek approval and attend training. Four systems, ChatGPT, Bing Chat, Claude 2 and DALL-E 2, are conditionally approved.
- Sandbox: Only for use with public information.
- Relationship with generative AI provider: In conversations.
- Notes: DHS is taking a separate approach to generative AI systems integrated directly into its IT assets, CIO and CAIO Eric Hysen told FedScoop.

Interior
- Policy or guidance: Employees “may not disclose non-public data” in a generative AI system “unless or until” the system is authorized by the agency. Generative AI systems “are subject to the Department’s prohibition on installing unauthorized software on agency devices.”
- Notes: Didn’t respond to a request for comment. Document was obtained via FOIA.

Justice
- Policy or guidance: The DOJ’s existing IT policies cover artificial intelligence, but there is no separate guidance for AI. No use cases have been ruled out.
- Sandbox: No plans to develop an environment for testing currently.
- Relationship with generative AI provider: No formal agreements beyond existing contracts with companies that now offer generative AI.
- Notes: DOJ spokesperson Wyn Hornbuckle said the department’s recently established Emerging Technologies Board will ensure that DOJ “remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies.”

State
- Policy or guidance: Initial guidance doesn’t automatically exclude use cases. No software type is outright forbidden, and generative AI tools can be used with unclassified information.
- Sandbox: Currently developing a tailored sandbox.
- Relationship with generative AI provider: Currently modifying terms of service with AI service providers to support State’s mission and security standards.
- Notes: A chapter in the Foreign Affairs Manual, as well as State’s Enterprise AI strategy, apply to generative AI, according to the department.

Veterans Affairs
- Policy or guidance: Developed internal guidance in July 2023 based on the agency’s existing ban on using sensitive data on unapproved systems. ChatGPT and similar software are not available on the VA network.
- Sandbox: Didn’t directly address, but said the agency is pursuing low-risk pilots.
- Relationship with generative AI provider: VA has contracts with cloud companies offering generative AI services.

Environmental Protection Agency
- Policy or guidance: Released a memo in May 2023 saying personnel were prohibited from using generative AI tools while the agency reviewed “legal, information security and privacy concerns.” Employees with “compelling” uses are directed to work with the information security officer on an exception.
- Risk assessment: Conducting a risk assessment.
- Sandbox: No testbed currently.
- Relationship with generative AI provider: EPA is “considering several vendors and options in accordance with government acquisition policy,” and is “also considering open-source options,” a spokesperson said.
- Notes: The department intends to create a more formal policy in line with Biden’s AI order.

General Services Administration
- Policy or guidance: Publicly released policy in June 2023 saying it blocked third-party generative AI tools on government devices. According to a spokesperson, employees and contractors can only use public large language models for “research or experimental purposes and non-sensitive uses involving data inputs already in the public domain or generalized queries. LLM responses may not be used in production workflows.”
- Sandbox: Agency has “developed a secured virtualized data analysis solution that can be used for generative AI systems,” a spokesperson said.

NASA
- Policy or guidance: May 2023 policy says public generative AI tools are not cleared for widespread use on sensitive data. Large language models can’t be used in production workflows.
- Risk assessment: Cited security challenges and limited accuracy as risks.
- Sandbox: Currently testing the technology in a secure environment.

National Science Foundation
- Policy or guidance: Guidance for generative AI use in proposal reviews expected soon; also released guidance for the technology’s use in merit review. A set of acceptable use cases is being developed.
- Sandbox: “NSF is exploring options for safely implementing GAI technologies within NSF’s data ecosystem,” a spokesperson said.
- Relationship with generative AI provider: No formal relationships.

Nuclear Regulatory Commission
- Policy or guidance: In July 2023, the agency issued an internal policy statement to all employees on generative AI use.
- Risk assessment: Conducted “some limited risk assessments of publicly available gen-AI tools” to develop the policy statement, a spokesperson said. NRC plans to continue working with government partners on risk management and will work on security and risk mitigation for internal implementation.
- Sandbox: NRC is “talking about starting with testing use cases without enabling for the entire agency, and we would leverage our development and test environments as we develop solutions,” a spokesperson said.
- Relationship with generative AI provider: Has a Microsoft Azure AI license. NRC is also exploring the implementation of Microsoft Copilot when it’s added to the Government Community Cloud.
- Notes: “The NRC is in the early stages with generative AI. We see potential for these tools to be powerful time savers to help make our regulatory reviews more efficient,” said Basia Sall, deputy director of the NRC’s IT Services Development & Operations Division.

Office of Personnel Management
- Policy or guidance: The agency is currently working on generative AI guidance.
- Notes: “OPM will also conduct a review process with our team for testing, piloting, and adopting generative AI in our operations,” a spokesperson said.

Small Business Administration
- Policy or guidance: SBA didn’t address whether it had a specific generative AI policy.
- Notes: A spokesperson said the agency “follows strict internal and external communication practices to safeguard the privacy and personal data of small businesses.”

Social Security Administration
- Policy or guidance: Issued a temporary block on the technology on agency devices, according to a 2023 agency report.
- Notes: Didn’t respond to a request for comment.

Sources: U.S. agency responses to FedScoop inquiries and public records.
Note: This overview displays information obtained through records requests and responses from agencies. The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury didn’t respond to requests for comment. The Department of Labor didn’t respond to FedScoop’s questions about generative AI.
