Federal Bureau of Investigation (FBI) Archives | FedScoop https://fedscoop.com/tag/federal-bureau-of-investigation-fbi/

FBI’s AI work includes ‘Shark Tank’-style idea exploration, tip line use case https://fedscoop.com/fbis-ai-work-includes-shark-tank-style-idea-exploration-tip-line-use-case/ Wed, 05 Jun 2024 18:44:25 +0000 https://fedscoop.com/?p=78689 Adopting AI for its own work — such as the FBI’s tip line — and identifying how adversaries could be using the technology are both in focus for the agency, officials said.

The FBI’s approach to artificial intelligence ranges from figuring out how bad actors are harnessing the growing technology to adopting its own uses internally, officials said Tuesday, including through a “Shark Tank”-style model aimed at exploring ideas.

Four FBI technology officials who spoke at a GDIT event in Washington detailed the agency’s focus on promoting AI innovations where those tools are merited — such as in its tip line — and on ensuring those uses meet the law enforcement agency’s need for technology that can later be defended in court.

In the generative AI space, the pace of change in models and use cases is a concern when the agency’s “work has to be defensible in court,” David Miller, the FBI’s interim chief technology officer, said during the Scoop News Group-produced event. “That means that when we deploy and build something, it has to be sustainable.”

That Shark Tank format, which the agency has noted it’s used previously, allows the FBI to educate its organization about its efforts to explore the technology in a “safe and secure way,” centralize use cases, and get outcomes it can explain to leadership.

Under the model, which is presumably named after the popular ABC show “Shark Tank,” Miller said the agency has put in place a 90-day limit to prove a concept; at the end, the agency has “validated learnings” about cost, the skill sets it is missing, and any concerns about integrating the technology into the organization.

“By establishing that director’s innovation Shark Tank model, it allows us to have really strategic innovation in doing outcomes,” Miller said. 

Some AI uses are already being deployed at the agency.

Cynthia Kaiser, deputy assistant director of the FBI’s Cyber Division, pointed to the agency’s use of AI to help manage the FBI tip line. That phone number serves as a way for the public to provide information to the agency. While Kaiser said there will always be a person taking down concerns or tips through that line, she also said people can miss things. 

Kaiser said the FBI is using natural language processing models to go over the synopsis of calls and online tips to see if anything was missed. That AI is trained using the expertise of people who have been taking in the tips for years and know what to flag, she said, adding that the technology helps the agency “fill in the cracks.” 
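The FBI has not published details of that system, but a minimal sketch can illustrate the general pattern Kaiser describes: a classifier trained on historical tip synopses that reviewers have already labeled, used to flag new synopses that may deserve a second look. Everything below is hypothetical, including the sample data and the threshold.

```python
# Hypothetical sketch of NLP-based tip triage; not the FBI's actual system.
# It trains a simple classifier on historical tip synopses that reviewers
# have already labeled, then flags new synopses that look actionable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data: (synopsis, 1 = needs follow-up, 0 = routine).
historical_tips = [
    ("caller reported suspicious purchases of chemicals", 1),
    ("caller asked a general question about office hours", 0),
    ("online tip describing threats made against a school", 1),
    ("duplicate of an earlier tip, no new information", 0),
]

texts, labels = zip(*historical_tips)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(synopsis: str, threshold: float = 0.5) -> bool:
    """Return True if the synopsis should get a second look."""
    prob_needs_followup = model.predict_proba([synopsis])[0][1]
    return prob_needs_followup >= threshold

print(flag_for_review("tipster mentioned plans to acquire explosives"))
```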

According to the Justice Department’s use case inventory for AI, that tool has been used since 2019, and is also used to “screen social media posts directed to the FBI.” It is one of five uses listed for the FBI. Other disclosed uses include translation software and Amazon’s Rekognition tool, which has attracted controversy in the past for its use as a facial recognition tool.

To assess AI uses and whether they’re needed, the officials also said the agency is looking to its AI Ethics Council, which has been around for several years.

Miller, who leads that body, said the council includes membership from across the agency, including the science and technology and human resources branches, as well as the offices for integrity and compliance and for diversity, equity and inclusion. Currently, the council is going through what Miller called “version two,” in which it’s tackling scale and doing more “experimental activities.”

At the time it was created, Miller said, the panel established a number of ethical controls similar to that of the National Institute of Standards and Technology’s Risk Management Framework. But he added that it can’t spend “weeks reviewing a model or reviewing one use case” and has to look at how it can “enable the organization to innovate” while still taking inequities and constraints into account. 

Officials also noted that important criteria for the agency’s own use of the technology are transparency and consistency. 

Kathleen Noyes, the FBI’s section chief of Next Generation Technology and Lawful Access, said on Tuesday that one of the agency’s requests for industry is that systems “can’t be a black box.”

“We need some transparency and accountability for knowing when we’re invoking an AI capability and when we’re not,” Noyes said.

She said the FBI started with a risk assessment in which it analyzed its needs and use cases to assist with acquisition and evaluation. “We had to start strategic — I think everyone does,” she said, adding that the first question to answer is “are we already doing this?”

At the same event, Justin Williams, deputy assistant director for the FBI’s Information Management Division, also noted that an important question when they’re using AI is whether they can explain the interface.

“I personally have used a variety of different AI tools, and I can ask the same question and get very similar but different answers,” Williams said. But, he added, it wouldn’t be good for the FBI if it can’t defend the consistency in the outputs it’s getting. That’s a “big consideration” for the agency as it slowly adopts emerging technologies, Williams said.

AI fuels rise in attacks from ‘unsophisticated threat actors,’ federal cyber leaders say https://fedscoop.com/ai-cyberattacks-federal-agencies-fbi-treasury-state-department/ Wed, 05 Jun 2024 15:07:46 +0000 https://fedscoop.com/?p=78674 Officials from Treasury, State and the FBI say information-sharing is increasingly important as AI enables so-so hackers to level up.

A day in the life of the Treasury Department’s top cybersecurity official is an unrelenting game of Whac-a-Mole that has only grown more intense in the age of artificial intelligence and the corresponding rise of inexperienced-yet-prolific attackers. 

For Sarah Nur, Treasury’s chief information security officer and associate CIO for cyber, that arcade-style battle to protect federal networks from adversarial threats is “nonstop.”

AI has made it “a lot easier” for “unsophisticated threat actors … to create these attack scenarios,” Nur said, “so that they can go ahead and launch and play around in our current infrastructure.”

Speaking Tuesday at a Scoop News Group-produced GDIT event in Washington, Nur and other federal cyber officials spoke of the proliferation of AI-fueled cyberattacks and how much more critical coordination and information-sharing has become as use of the technology among amateur hackers has surged.     

Cynthia Kaiser, deputy assistant director of the FBI’s cyber division, said she’s seen “a crop of adversaries who are becoming at least mildly better” at their craft due to AI. The technology eases hackers’ ability to perform basic scripting tasks and identify coding errors, Kaiser said, while deepfakes are leveraged in social engineering campaigns and increasingly refined spearphishing messages.

“A beginner hacker can go to the intermediate level,” she said, “and even the most sophisticated adversaries can be more efficient.”

Gharun Lacy has also observed a leveling up among threat actors in his role as deputy assistant secretary for cyber and technology security in the State Department’s Bureau of Diplomatic Security. Those adversaries are “using AI as an amplifier,” he said, sharpening their strongest skills as a result.

“Do you have a threat actor that is extremely proficient in human engineering? Then they’re going to get better at human engineering,” Lacy said. “That phishing email will now call you by a nickname that you had in high school.” 

The Treasury Department is especially susceptible to this onslaught of new-age threats given its role as the federal government’s sanctions arm, Nur said, not to mention the fact that the financial industry is one of the most targeted critical infrastructure sectors. Hackers today can simply look up a CVE, plug it into an AI system and ask it to provide “an undetected attack scenario that I can utilize,” Nur said, noting that packages of this kind on the dark web are “ready to go.”

“I heard someone say ‘fight AI with AI.’ I get what that means,” Nur said, “and I think that’s a very key concept. We really have to look at leveraging AI to quickly detect these anomalies and any kind of fraud or unusual suspicious activity.”

The silver lining for federal security officials is that AI still provides defenders with a decided advantage over attackers in cyberspace. The key to maintaining that advantage, they say, is doubling down on coordination with public and private-sector partners.

Kaiser said the use of large language models to “more rapidly draft text” for interagency memos and private-sector alerts represents “a huge win for everybody” in the battle against threat actors. 

At the State Department, the chief AI officer, chief data officer and members of the agency’s Center for Analytics have successfully leveraged AI in “reducing the noise in terms of threat intelligence,” Lacy said, sifting through “massive amounts of data” to make it “more actionable directly for us.” Streamlining data and threat intel leads to more valuable insights that State can provide to its partners, he added. 
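Lacy did not describe how that filtering works, so the following is only a rough, rule-based stand-in for the kind of AI-assisted triage he outlines: de-duplicating indicators across feeds and keeping only those corroborated by multiple sources or rated high severity. All field names and values are invented for illustration.

```python
# Hypothetical sketch of "reducing the noise" in threat intelligence feeds;
# not the State Department's actual pipeline. It de-duplicates indicators
# across feeds and keeps only those seen by multiple sources or marked high
# severity, so analysts review a shorter, more actionable list.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str        # e.g., an IP address or file hash
    source: str       # which feed reported it
    severity: int     # 1 (low) to 5 (critical), as assigned by the feed

def prioritize(indicators: list[Indicator], min_sources: int = 2) -> list[str]:
    seen = defaultdict(lambda: {"sources": set(), "max_severity": 0})
    for ind in indicators:
        entry = seen[ind.value]
        entry["sources"].add(ind.source)
        entry["max_severity"] = max(entry["max_severity"], ind.severity)
    # Keep indicators corroborated by several feeds or rated high severity.
    return [
        value for value, entry in seen.items()
        if len(entry["sources"]) >= min_sources or entry["max_severity"] >= 4
    ]

feed = [
    Indicator("203.0.113.7", "feed_a", 2),
    Indicator("203.0.113.7", "feed_b", 3),
    Indicator("198.51.100.9", "feed_a", 5),
    Indicator("192.0.2.1", "feed_c", 1),
]
print(prioritize(feed))  # ['203.0.113.7', '198.51.100.9']
```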

“If I know this piece of information is not useful for me, but it may very well be useful to one of my private industry partners, I need to know how to get that information to them quickly,” Lacy said, noting that the White House has provided a quality blueprint for sharing intelligence and has encouraged agencies to be “very forthcoming now in terms of naming, blaming [and] shaming when incidents happen — and doing it quickly.”

Lacy pointed to a State Department collaboration with foreign ministries from the United Kingdom, Australia, Canada and New Zealand that brings together those countries’ cyber defenders in a quarterly meeting to “share a lot of information.” 

“I think we’re past the sharing; we’re on to collaborating,” Lacy said. “I think that’s … the phase we’re in right now. But the collaboration has to yield collective action.”

Treasury’s in a similarly collaborative mode at the moment, fresh off its launch last month of Project Fortress, a public-private partnership aimed at protecting the financial sector from cyber threats. Nur said the agency has been active in onboarding companies and organizations to the group, ensuring that participating financial institutions have access to top tools and are practicing good cyber hygiene before truly “aggressive AI attacks” become the norm.

Whether it’s meeting regularly with other CISOs, coordinating with international partners or establishing communication channels with industry, agency cyber officials across the board agree that mitigating AI-fueled threats will only be possible with more collaboration and better sharing of information.

“In the past, what really prevented us from sharing that information is that embarrassment, that reputational impact,” Nur said. “We can no longer think in those ways. We need to shift our mindset to say, ‘hey, look, we’re going to expect at least two to three a year, maybe even more, and that’s OK.’” 

FBI’s $8 billion information technology services contract is its largest ever https://fedscoop.com/fbis-8-billion-information-technology-services-contract-is-its-largest-ever/ Tue, 28 May 2024 21:19:41 +0000 https://fedscoop.com/?p=78535 The contract vehicle for IT services and supplies is the largest such agreement the FBI has ever established, the bureau said.

The FBI announced awards for the second iteration of a blanket purchase agreement for IT services and supplies Friday, estimating the spend will be $8 billion.

A total of 95 entities — 31 large businesses and 64 small businesses — received awards under the sequel to the Information Technology Supplies and Support Services contract, also known as ITSSS, the agency said in an update on SAM.gov. The new agreement will serve as the primary vehicle for the agency’s IT services for the next eight years.

The award marks the largest contract vehicle for IT services ever established by the FBI, according to the agency. Investments for the previous ITSSS totaled over $2 billion. 

“ITSSS-2 will provide the FBI with streamlined acquisition procedures and a vetted Vendor Pool to establish call orders more efficiently,” the agency said in the update. 

The FBI also noted that it will establish “a forecasting tool to identify upcoming requirements on a timely basis and to allow ITSSS-2 vendors to appropriately plan their proposals.” The bureau said it will be holding informational meetings with stakeholders in coming weeks.

Efforts to create the vehicle began in December 2021, when the FBI partnered with the General Services Administration on the blanket purchase agreement, according to the agency’s updates on SAM.gov. In February 2024, the bureau said it was in the last phase of evaluation but that an award wouldn’t be made until bid protests against the contract filed with the Government Accountability Office were resolved.

FBI, DHS lack information-sharing strategies for domestic extremist threats online, GAO says https://fedscoop.com/fbi-dhs-domestic-extremist-violent-threats-social-media-gaming/ Thu, 29 Feb 2024 21:02:31 +0000 https://fedscoop.com/?p=76255 A new watchdog report finds that an absence of “strategy or goals” from the agencies in how they engage with social media and gaming companies on violent threats calls into question the effectiveness of their communications with those platforms.

The FBI and Department of Homeland Security’s information-sharing efforts on domestic extremist threats with social media and gaming companies lack an overarching strategy, a Government Accountability Office report found, raising questions about the effectiveness of the agencies’ communications to address violent warnings online.

In response to the proliferation in recent years of content on social media and gaming platforms that promotes domestic violent extremism, the FBI and DHS have taken steps to increase the flow of information with those platforms. But “without a strategy or goals, the agencies may not be fully aware of how effective their communications are with companies, or how effectively their information-sharing mechanisms serve the agencies’ overall missions,” the GAO said.

For its report, the GAO requested interviews with 10 social media and gaming companies whose platforms were most frequently associated with domestic violent extremism terms in article and report searches. Discord, Reddit and Roblox agreed to participate, as did a social media company and a game publisher, both of which asked to remain anonymous.

The platforms reported using a variety of measures to identify content that promotes domestic violent extremism, including machine learning tools to flag posts for review or automatic removal, reporting by users and trusted flaggers, reviews by human trust and safety teams, and design elements that discourage users from committing violations.
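The platforms’ actual systems are proprietary, but a minimal sketch of the flag-versus-remove pattern described above might look like the following, where a scoring model (here a trivial stand-in) routes a post to automatic removal, human review, or no action depending on confidence thresholds.

```python
# Hypothetical sketch of the moderation pattern described above: a machine
# learning model scores a post, high-confidence violations are removed
# automatically, and borderline posts are queued for human trust-and-safety
# review. The scoring function is a trivial stand-in, not a real model.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def score_post(text: str) -> float:
    """Stand-in for a trained classifier returning P(violates policy)."""
    violent_terms = ("attack", "bomb", "kill")
    hits = sum(term in text.lower() for term in violent_terms)
    return min(1.0, 0.4 * hits)

def route_post(text: str) -> str:
    score = score_post(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"            # automatic removal
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"      # queue for trust-and-safety team
    return "allow"

print(route_post("planning an attack with a bomb, going to kill"))  # remove
print(route_post("they should attack that problem head on"))        # allow
```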

Once those companies have identified a violent threat, there are reporting mechanisms in place with both DHS and the FBI. “However, neither agency has a cohesive strategy that encompasses these mechanisms, nor overarching goals for its information-sharing efforts with companies about online content that promotes domestic violent extremism,” the GAO noted.

The agencies are engaged in multiple other efforts to stem the tide of domestic extremist threat content. The FBI, for example, is a participant in the Global Internet Forum to Counter Terrorism, and in the United Nations’ Tech Against Terrorism initiative. The agency also employs a program manager dedicated to communications with social media companies, conducts yearly meetings with private sector partners and operates the National Threat Operations Center, a centralized entity that processes tips.

DHS, meanwhile, has participated in a variety of non-governmental organizations aimed at bolstering information-sharing, in addition to providing briefings to social media and gaming companies through the agency’s Office of Intelligence and Analysis. 

There are also joint FBI-DHS efforts in progress, including the issuing of products tied to the online threat landscape, and a partnership in which the FBI delivers briefings, conducts webinars and distributes informational materials on various threats to Domestic Security Alliance Council member companies. 

Though the FBI and DHS are clearly engaged in myriad efforts to stem domestic extremist violent threats made on social media and gaming platforms, the GAO noted that implementing strategies and setting specific goals should be considered “a best practice” across agencies.

With that in mind, the GAO recommended that the FBI director and the I&A undersecretary both develop a strategy and goals for information-sharing on domestic violent extremism with social media and gaming companies. DHS said it expects to complete the strategy by June.

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions https://fedscoop.com/ai-advisory-law-enforcement-use-case-recommendations/ Wed, 28 Feb 2024 18:12:01 +0000 https://fedscoop.com/?p=76246 The National AI Advisory Committee’s Law Enforcement Subcommittee voted unanimously to edit CIO Council recommendations on sensitive use case and common commercial product exclusions, moves intended to broaden law enforcement agency inventories.

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omissions of those technologies in the Department of Justice’s AI use case inventory late last year were a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a Law Enforcement Subcommittee member on the National AI Advisory Committee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” has meant that technologies such as automated license plate readers and voice spoofing were often left on the cutting-room floor.

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”

Amazon says DOJ disclosure doesn’t indicate violation of facial recognition moratorium https://fedscoop.com/amazon-response-doj-fbi-use-rekognition-software/ Sat, 27 Jan 2024 02:43:20 +0000 https://fedscoop.com/?p=75755 The statement came after FedScoop reporting noting that, according to the DOJ, the FBI is in the “initiation” phase of using Rekognition.

A Department of Justice disclosure that the FBI is in the “initiation” phase of using Amazon’s Rekognition tool for a project doesn’t run afoul of the company’s moratorium on police use of the software, an Amazon spokesperson said in response to FedScoop questions Friday.

The statement comes after FedScoop reported Thursday that the DOJ disclosed in its public inventory of AI use cases that the FBI was initiating use of Rekognition as part of something called “Project Tyr.” The disclosure is significant because Amazon had previously extended a moratorium on police use of Rekognition, though the company did not originally clarify how that moratorium might apply to federal law enforcement. 

In an emailed response to FedScoop, Amazon spokesperson Duncan Neasham said: “We imposed a moratorium on police departments’ use of Amazon Rekognition’s face comparison feature in connection with criminal investigations in June 2020, and to suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.”

According to Amazon’s terms of service, the company placed a moratorium on the “use of Amazon Rekognition’s face comparison feature by police departments in connection with criminal investigations. This moratorium does not apply to use of Amazon Rekognition’s face comparison feature to help identify or locate missing persons.”

The company’s public statement about its one-year moratorium in 2020, which was reportedly extended indefinitely, stated that it applied to “police use of Rekognition.” That statement did not specifically call out the “face comparison feature” or use of the tool related to criminal investigations.

Neasham further stated on Friday that Amazon believes “governments should put in place regulations to govern the ethical use of facial recognition technology, and we are ready to help them design appropriate rules, if requested.”

The description of the use case in DOJ’s AI inventory doesn’t mention the term “facial recognition,” but it states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.” Neither Amazon nor the DOJ has answered FedScoop’s questions about whether the FBI had access to facial recognition technology through this work.

Civil liberties advocates told FedScoop that the use case surprised them, given Amazon’s previous statements on facial recognition, Rekognition, and police.

“After immense public pressure, Amazon committed to not providing a face recognition product to law enforcement, and so any provision of Rekognition to DOJ would raise serious questions about whether Amazon has broken that promise and engaged in deception,” American Civil Liberties Union of Northern California attorney Matt Cagle said in a Thursday statement to FedScoop.  

DOJ spokesperson Wyn Hornbuckle did not address several aspects of the project but provided a statement pointing to the agency’s creation of an Emerging Technologies Board to “coordinate and govern AI and other emerging technology issues across the Department.” The FBI declined to comment through the DOJ.

Justice Department discloses FBI project with Amazon Rekognition tool https://fedscoop.com/doj-fbi-amazon-rekognition-technology-ai-use-case/ Thu, 25 Jan 2024 23:36:25 +0000 https://fedscoop.com/?p=75733 The disclosure comes after Amazon said in 2020 that it would institute a moratorium on police use of Rekognition.

The Department of Justice has disclosed that the FBI is in the “initiation” phase of using Amazon Rekognition, an image and video analysis software that has sparked controversy for its facial recognition capabilities, according to an update to the agency’s AI inventory.

In response to questions from FedScoop, neither Amazon nor the DOJ clarified whether the FBI had access to or is using facial recognition technology, specifically, through this work. But the disclosure is notable, given that Amazon had previously announced a moratorium on police use of Rekognition.

An AI inventory released on the DOJ website discloses that the FBI has a project named “Amazon Rekognition – AWS – Project Tyr.” The description does not mention the term “facial recognition” but states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.” 

“Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from lawfully acquired images and videos,” states a summary of the use case that echoes the Amazon website’s description of the product. In regard to developer information, the disclosure says the system was commercial and off-the-shelf, and that it was purchased pre-built from a third party.
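Project Tyr’s customizations have not been made public, but Rekognition’s off-the-shelf APIs for this kind of analysis are documented. A minimal boto3 sketch of those general-purpose calls, detecting moderation labels such as nudity and object labels such as weapons, is shown below; the bucket and file names are made up, and this is not the FBI’s workflow.

```python
# Minimal boto3 sketch of the kind of image analysis Rekognition offers
# out of the box (moderation labels such as nudity, plus general object
# labels such as weapons). This is illustrative only and is not the FBI's
# customized "Project Tyr" workflow; bucket and key names are made up.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
image = {"S3Object": {"Bucket": "example-evidence-bucket", "Name": "photo.jpg"}}

# Content-moderation labels (e.g., "Explicit Nudity") above 80% confidence.
moderation = rekognition.detect_moderation_labels(Image=image, MinConfidence=80)
for label in moderation["ModerationLabels"]:
    print(label["Name"], round(label["Confidence"], 1))

# General object and scene labels (e.g., "Weapon") above 80% confidence.
labels = rekognition.detect_labels(Image=image, MinConfidence=80, MaxLabels=20)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```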

Other aspects of the project have not yet been finalized, according to the inventory. The disclosure says that in collaboration with Amazon Web Services, the agency will determine where the training data originates, whether the source code is made publicly available, and what specific AI techniques were used. The DOJ states that the agency is not able to conduct ongoing testing of the code but can perform audits. The justice agency also claims the use case is consistent with Executive Order 13960, a Trump-era order on artificial intelligence. 

“To ensure the Department remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies, the Deputy Attorney General recently established the Emerging Technologies Board to coordinate and govern AI and other emerging technology issues across the Department,” DOJ spokesperson Wyn Hornbuckle said in response to a series of questions from FedScoop about the use case.

He added: “The board will advance the use of AI and other emerging technologies in a manner that is lawful and respectful of our nation’s values, performance-driven, reliable and effective, safe and resilient, and that will promote information sharing and best practices, monitor taskings and progress on the department’s AI strategy, support interagency coordination, and provide regular updates to leadership.” 

The DOJ did not address several aspects of the work with Amazon, including questions about whether the FBI had put any limits on the use of its technology, the purpose of nudity detection, or the extent to which the law enforcement agency could access facial recognition through the work discussed in the disclosure. Through the DOJ, the FBI declined to comment. 

Amazon was given 24 hours to comment on a series of questions sent from FedScoop but did not respond by the time of publication. A day later, Amazon spokesperson Duncan Neasham emailed FedScoop the following statement:

“We imposed a moratorium on police departments’ use of Amazon Rekognition’s face comparison feature in connection with criminal investigations in June 2020, and to suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.”

The tool was not disclosed in an earlier version of the DOJ’s AI inventory. While it’s not clear when the inventory was updated, a consolidated list of federal AI uses posted to AI.gov in September didn’t include the disclosure. The source date of the DOJ page appears to be incorrect and tags the page to October 2013, though the executive order requiring inventories wasn’t signed by President Donald Trump until late 2020. 

A page on Amazon’s website featuring the Rekognition technology highlights the tool’s applications in “face liveness,” “face compare and search,” and “face detection and analysis,” as well as applications such as “content moderation,” “custom labels,” and “celebrity recognition.” 

Beyond the application examples listed in the inventory, the DOJ did not explain the extent to which the FBI could or would use facial recognition as part of this work. Amazon previously told other media outlets that its moratorium on providing facial recognition to police had been extended indefinitely, though it’s not clear how Amazon interprets that moratorium for federal law enforcement. Notably, Amazon’s website has guidance for public safety uses.

But others have raised concerns about the technology. In 2019, a group of researchers called on Amazon to stop selling Rekognition to law enforcement following the release of a study by AI experts Inioluwa Deborah Raji and Joy Buolamwini that found that an August 2018 version of the technology had “much higher error rates while classifying the gender of darker skinned women than lighter skinned men,” according to the letter. 

Amazon had previously pushed back on those findings and has defended its technology. The National Institute of Standards and Technology confirmed that Amazon has not voluntarily submitted its algorithms for study by the agency. 

“Often times companies like Amazon provide AI services that analyze faces in a number of ways offering features like labeling the gender or providing identification services,” Buolamwini wrote in an early 2019 blog post. “All of these systems regardless of what you call them need to be continuously checked for harmful bias.”

The company has argued in a corporate blog defending its technology that the “mere existence of false positives doesn’t mean facial recognition is flawed. Rather, it emphasizes the need to follow best practices, such as setting a reasonable similarity threshold that correlates with the given use case.” 
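The similarity threshold Amazon refers to corresponds to the SimilarityThreshold parameter of Rekognition’s CompareFaces API, which only returns face matches at or above the value the caller sets. A minimal, illustrative sketch follows; the image locations are invented.

```python
# Minimal sketch of the "similarity threshold" Amazon refers to: Rekognition's
# CompareFaces API only returns matches at or above the threshold you set.
# Illustrative only; bucket and key names are made up.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "gallery.jpg"}},
    SimilarityThreshold=99,  # a high threshold returns fewer, stronger matches
)

for match in response["FaceMatches"]:
    print("match similarity:", round(match["Similarity"], 1))
if not response["FaceMatches"]:
    print("no faces matched at the configured threshold")
```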

The DOJ’s disclosure is also notable because, in the wake of George Floyd’s murder in 2020 — and following an extensive and pre-existing movement against the technology — Amazon said it would implement a one-year pause on providing Rekognition to police. In 2021, the company extended that moratorium indefinitely, according to multiple reports. Originally, Amazon said the moratorium was meant to give Congress time to pass regulation of the technology. 

“It would be a potential civil rights nightmare if the Department of Justice was indeed using Amazon’s facial recognition technology ‘Rekognition,’” Matt Cagle, a senior staff attorney at the American Civil Liberties Union of Northern California, said in a written statement to FedScoop, pointing to the racial bias issues with facial recognition. “After immense public pressure, Amazon committed to not providing a face recognition product to law enforcement, and so any provision of Rekognition to DOJ would raise serious questions about whether Amazon has broken that promise and engaged in deception.”

A 2018 test of Rekognition’s facial recognition capabilities by the ACLU incorrectly matched 28 members of Congress with mugshots. Those members were “disproportionately people of color,” according to the ACLU. 

The DOJ inventory update noting the use of the Amazon tool was “informative, but in some ways surprising,” said Caitlin Seeley George, the director of campaigns and operations at the digital rights group Fight for the Future, because “we haven’t seen specific examples of FBI using Amazon Rekognition in recent years and because Amazon has said and has continued to say that they will not sell their facial recognition technology to law enforcement.” 

“This is the problem with trusting a company like Amazon — or honestly any company — that’s selling this technology,” she added. “Not only could they change their mind at any point, but they can decide the barriers of what their word means and if and how they’re willing to make adjustments to what they have said that they would or wouldn’t do with their product and who they will or won’t sell it to.”

Ben Winters, senior counsel for the Electronic Privacy Information Center, said that “it feels like a weird time to be adopting this big, sensitive type system,” noting that once the technology is there, it’s “more entrenched.” He pointed to the recent executive order on AI and draft guidance for rights-impacting AI that’s due to be finalized by the Office of Management and Budget. 

A NextGov story from 2019 reported that the FBI was piloting Rekognition facial matching software for the purpose of mining through video surveillance footage. According to that story, the pilot started in 2018, though the DOJ did not address a FedScoop question about what happened to it or if it’s the same project discussed in the updated AI inventory.

A record available on the FBI’s Vault, the agency’s electronic Freedom of Information Act library, appears to show that the agency took issue with some reporting on that pilot at the time, but much of the document is redacted. 

Homeland Security to launch explosives research database to help combat threats https://fedscoop.com/dhs-plans-explosives-database-to-combat-threats/ Thu, 10 Aug 2023 16:39:29 +0000 https://fedscoop.com/?p=71758 The database is undergoing internal DHS testing and will later be assessed by the Department of Energy and the Federal Bureau of Investigation.

The Department of Homeland Security plans to launch a database of explosives research, testing and evaluation data this fall to assist personnel in mitigating threats.

DHS previewed the rollout of the Explosives Planning and Research Tool (ExPRT) in a Thursday blog post. The tool is currently undergoing testing and, when launched, will be a “secure, web-based one-stop-shop” for subject matter experts, first responders, and others in the explosives research community, the agency said.

“ExPRT provides critical explosives research to our [subject matter experts] who need it the most,” Anna Tedeschi, who manages the Explosives Threat Assessment program at DHS’s Science and Technology Directorate, said in the blog post.

“Once implemented it will vastly improve our collaborative efforts to continue protecting the nation from any future explosive threats, and also serve as a resource for ensuring best practices,” Tedeschi said.

ExPRT is intended to address the challenges of organizing information and research about explosive threats, sharing that information, avoiding duplicative studies, preventing the loss of institutional knowledge, and planning investments in future research, Tedeschi said.

The database will contain technical information, reports on screening and mitigation technology, completed and ongoing studies, and contact information for organizations involved in the research, according to the blog post. The information will span from the early 2000s to present day.

After internal testing is complete, the database will be independently assessed by the Department of Energy (DOE) and the Federal Bureau of Investigation (FBI), according to the blog post. The agencies will “perform functional use-case and other tests to validate how well it will work in the field.”

FBI finance team working on first software bot https://fedscoop.com/fbi-finance-team-to-roll-out-first-software-bot/ Tue, 09 May 2023 20:09:39 +0000 https://fedscoop.com/?p=68205 The Federal Bureau of Investigation’s finance modernization team said Tuesday it will soon roll out a bot for automatically paying invoices and updating budget line items that could act as a pilot for the future automation of back-office systems at the agency. The launch of the bot comes amid a push across federal government to use […]

The Federal Bureau of Investigation’s finance modernization team said Tuesday it will soon roll out a bot for automatically paying invoices and updating budget line items that could act as a pilot for the future automation of back-office systems at the agency.

The launch of the bot comes amid a push across the federal government to use robotic process automation to streamline agency processes. It will automate the currently manual monthly process of paying invoices and updating the budget line items needed to pay invoices to customers or vendors.

“It’s the first time we’re actually automating something through robotic process automation. So that’s what makes it so innovative for us is because the bureau doesn’t have bots right now, we were just sort of like putting our toes in that world,” Peter Sursi, head of finance modernization, accounts payable and relocation services said at the Adobe Government Forum in Washington on Tuesday. “So for us to get one on the finance side for us is pretty exciting. It’ll save us a lot of labor hours.” 

The tool would affect all 56 FBI field offices and the approximately 250 task force officers who process financial payments within those offices, as well as FBI customers who are paid through invoices that were previously processed manually, a time-intensive task.
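The bureau has not described how the bot is implemented or which automation platform it runs on, so the sketch below is only a hypothetical illustration of the pattern the article describes: matching each approved invoice to a budget line item, recording the payment and decrementing the remaining balance. Every name and figure is made up.

```python
# Hypothetical sketch of the invoice-automation pattern described above;
# the FBI has not published how its bot is built, so every name here is
# made up. The bot matches each approved invoice to its budget line item,
# records a payment, and decrements the remaining balance.
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: str
    vendor: str
    amount: float
    budget_line: str

budget_lines = {"IT-OPS-2024": 250_000.00, "FACILITIES-2024": 90_000.00}

def pay_invoices(invoices: list[Invoice]) -> list[str]:
    paid = []
    for inv in invoices:
        remaining = budget_lines.get(inv.budget_line, 0.0)
        if inv.amount > remaining:
            # In a real workflow this would be routed to a human for review.
            print(f"skipping {inv.invoice_id}: insufficient budget")
            continue
        budget_lines[inv.budget_line] = remaining - inv.amount
        paid.append(inv.invoice_id)
        print(f"paid {inv.invoice_id} ({inv.vendor}) from {inv.budget_line}")
    return paid

pay_invoices([
    Invoice("INV-001", "Acme Networking", 12_500.00, "IT-OPS-2024"),
    Invoice("INV-002", "Example Facilities Co", 95_000.00, "FACILITIES-2024"),
])
```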

Sursi said the new finance bot was created in the past two months and is in the final stages of testing, which has energized his team to draw up a longer list of FBI finance projects that could be made much faster through automation.

In March, State Department CIO Kelly Fletcher revealed that her agency had used robotic process automation to cut the processing time for its monthly financial statement from two months to two days.

Speaking at FedScoop’s ITModTalks, Fletcher said financial reporting was one of several areas where the agency is using AI to improve the efficiency of back-office operations, which has the ability to substantially improve reporting processes because of State’s federated structure and global operations.

DHS launches website with research, grants, and tools to combat spike in domestic terrorism https://fedscoop.com/dhs-launches-counterextremism-website/ Fri, 24 Mar 2023 19:13:43 +0000 https://fedscoop.com/?p=67051 PreventionResourceFinder.gov will be routinely updated with new evidence-based research and information about grants and additional resources.

The Department of Homeland Security on Thursday launched a new interagency website to help tackle a rise in targeted violence and domestic terrorism by providing resources like federal grants, training opportunities, and community support experts.

DHS said the new website, PreventionResourceFinder.gov, was developed in collaboration with 17 different federal agencies including the U.S. Agency for International Development (USAID), State Department, and the Cybersecurity and Infrastructure Security Agency (CISA) to provide visitors with a “one-stop shop for federal resources to prevent targeted violence and terrorism.”

DHS Secretary Alejandro Mayorkas described targeted violence and terrorism as “grave threats” to homeland security in a statement announcing the launch of the new site.

“The website we are launching today equips our partners throughout the country with helpful resources to better prevent, prepare for and respond to acts of violence,” said Mayorkas. “From first responders to non-profit organizations, a whole-of-society approach is needed to keep our communities safe and secure.” 

A Government Accountability Office (GAO) report published earlier this month indicated that “domestic terrorism is on the rise” in the U.S., with 231 domestic terrorism incidents between 2010 and 2021. Of these incidents, approximately 35% were classified as racially or ethnically motivated, the largest category, while anti-government or anti-authority motivated violent extremism was the second-largest category.

The GAO report also found that DHS and the FBI did not share data on domestic terrorism incidents with each other due to miscommunications and did not submit comprehensive data on domestic terrorism incidents to Congress in required reports.

According to DHS, the new site will be routinely updated with new evidence-based research, grants and additional resources that community partners can use to reduce potential domestic terrorism threats.

“These departments and agencies cooperate on programs, tools, grant funding, victim support, training, and technical expertise to help communities and groups prevent, detect, and mitigate acts of targeted violence and terrorism,” DHS said in a statement.
