Department of Homeland Security (DHS) Archives | FedScoop https://fedscoop.com/tag/department-of-homeland-security-dhs/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals. Tue, 21 May 2024 18:55:50 +0000

GSA taps seven federal tech experts for new FedRAMP advisory group https://fedscoop.com/gsa-taps-seven-federal-tech-experts-for-new-fedramp-advisory-group/ Tue, 21 May 2024 18:55:50 +0000 Officials from the GSA, CMS, CISA, DHS and other agencies will make up the inaugural Technical Advisory Group.

Officials from the General Services Administration, the Department of Homeland Security, the Centers for Medicare & Medicaid Services and other agencies will serve as inaugural members of a new advisory group to the Federal Risk and Authorization Management Program.

The Technical Advisory Group, part of a broader effort to engage stakeholders and help FedRAMP deliver emerging technology solutions to agencies, will inform decision-making on the technical, strategic and operational direction of the government-wide compliance program, according to a GSA press release.

“This group will help make FedRAMP a smarter and more technology-forward operation that better meets its goals of making it safe and easy for federal agencies to take full advantage of cloud services,” Eric Mill, GSA’s executive director for cloud strategy in Technology Transformation Services, said in the statement. 

Members of the inaugural group are: Laura Beaufort, technical lead with the Federal Election Commission; Paul Hirsch, technical lead with TTS; Michael Boyce, director of DHS’s AI Corps; Elizabeth Schweinsberg, senior technical adviser at CMS; Grant Dasher, architecture branch chief in the Cybersecurity and Infrastructure Security Agency’s Office of the Technical Director; Nicole Thompson, cybersecurity engineer with the Department of Defense’s Defense Digital Service; and Brian Turnau, cloud authorization program manager with GSA’s Office of the Chief Information Officer.

Laura Gerhardt, director of technology modernization and data in the Office of Management and Budget, said in a statement that “the TAG is well-positioned to provide valuable insights into streamlining processes, enhancing security postures and adapting to novel technology implementations so that agencies can leverage the full potential of FedRAMP.” 

GSA released a new roadmap for modernization efforts through the FedRAMP program in March and has since made a slew of other FedRAMP-related announcements.

DHS official: AI could exacerbate chemical and biological threats https://fedscoop.com/dhs-official-ai-could-exacerbate-chemical-and-biological-threats/ Mon, 20 May 2024 10:00:00 +0000 The assistant secretary for DHS’s Countering Weapons of Mass Destruction office warned in an interview that AI could supercharge biological research — and invent new pathogens.

A Department of Homeland Security team dedicated to deterring the use of weapons of mass destruction is now studying how artificial intelligence could exacerbate these kinds of threats. In the wake of a report announced last month, one of the top officials with that office is pointing to a series of potential strategies to confront the ways AI tools could be deployed — even inadvertently — to synthesize dangerous chemical and biological materials.  

In an interview with FedScoop, Mary Ellen Callahan, the assistant secretary for the DHS Countering Weapons of Mass Destruction (CWMD) office, outlined how the U.S. government could deal with this kind of challenge, including looking at intellectual property and copyright enforcement and encouraging journals with large stores of biological and chemical research to introduce more stringent access requirements. The effort needs to be whole-of-government and international, she argued. 

“Both the [DHS] secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem,” she said. “We’re going to look to see if we can leverage the currently existing models.”

The interview comes after DHS submitted a report to the president looking at the intersection of Chemical, Biological, Radiological, and Nuclear (CBRN) threats and artificial intelligence. The president’s advisers have recommended making that report public, Callahan said, though only a fact sheet is available right now. AI labs were consulted, along with representatives of the Energy Department, think tanks, and model evaluators. The DOE is also working on a separate, classified report focused specifically on AI and nuclear threats. “The effort to produce the report regarding nuclear threats and AI is ongoing,” a spokesperson for the agency told FedScoop. 

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: Can you start by explaining what the threat actually is, here? 

Assistant Secretary Mary Ellen Callahan: Artificial intelligence and generative artificial intelligence is the processing of a lot of different data to try to find novel or new content. Let’s talk about biology, specifically: It is using artificial intelligence to update, enhance, and improve research. … We really want to maximize artificial intelligence for good for research while minimizing the malign actors’ ability to leverage artificial intelligence for bad. 

FS: Is the idea that someone could use something like OpenAI to just come up with something really bad instead of something really good?

MEC: We don’t want to give people ideas. But what we want to do is allow the really novel research — the important biological and chemical research breakthroughs — to happen, while still providing hurdles for bad guys to try to get their hands on, say, known recipes for pathogens [and] to make sure that we are leveraging the promise of AI while minimizing the peril. 

FS: Are you making a distinction between chemical, biological, and nuclear? And is there a reason why one would be more relevant than another in terms of AI threats?

MEC: The Countering Weapons of Mass Destruction office here at DHS has been around for about five and a half years. It is intended to be the prevention and detection of all weapons of mass destruction threats. That is usually summarized as chemical, biological, radiological, and nuclear (CBRN). It’s all on the prevention and detection side. We’re really focused on how we deter people before they get to actually triggering something. … The executive order asked us to talk about CBRN threats. We do in the report that is before the president right now generally talk about CBRN threats, and the fact sheet that is out publicly does talk about that. 

We focus primarily on chemical and biological threats for two reasons: One is the access to chemical equations and bio-recipes is higher and it’s more advanced. Both the bio and the chemical [information] are pretty available in the common parlance and the common internet where they could be indexed by artificial intelligence models or frontier models.

… With regard to radiological and nuclear, the research associated with that is often on closed networks and maybe classified systems. The Department of Energy was asked to do a parallel report on nuclear threats specifically. Therefore, we’ve ceded that specific question about radiological or nuclear threats to the classified report the Department of Energy is working on right now.

FS: One of the points that’s made in the public fact sheet is the concern about companies taking heterogeneous approaches in terms of evaluation and red-teaming. Can you talk a little bit more about that?

MEC: All the frontier models have made voluntary commitments to the president from last year. Those [are] promises [like] safety and security, including focusing on high-risk threats, like CBRN. They all want to do a good job. They’re not quite sure exactly how to do that job. 

… We have to develop guidelines and procedures in collaboration with the U.S. government, the private sector, and academia to make sure that we understand how we try to approach these highly sensitive, high-risk areas of information. That we create a culture of responsibility for the AI developers — those voluntary commitments are the first step in that. … But [we need] to make sure that all the folks that are within the ecosystem are all looking at ways to deter bad actors from leveraging either new information or newly visible information that was distilled as a mosaic coming out of generative AI-identifying elements. So we’ve really got to look at the whole universe on how to respond to this.

FS: Another thing that was both interesting and worrisome to me was the concern that’s highlighted about limitations in regulation and enforcement and where holes might be in terms of AI.

MEC: I am more sanguine now than I was when I started looking at that. So hopefully, that will give you some comfort. Really, we’re looking at a variety of different laws and factors. … We want to look at existing laws to see if there can be impacts taken, like, for example, export controls, intellectual property, tech transfer, foreign investments. [We want to] look at things that already exist that we could already leverage to go and try to make it be successful and useful.

Some of the authorities are spread throughout the federal government, but that actually could make it stronger because then you have an ability to attack these issues in a couple of different ways, like the misuse of intellectual property [or] if somebody is using something that is copyrighted in order to leverage and create a technical or biological threat. 

The international coordination piece is very important. There’s a really significant interest in leaning in together and working on this whole-of-community effort to establish their appropriate guidelines and really to look to provide additional restraints on models, but also to amplify that culture of responsibility. 

We could look at updating regulatory requirements as the opportunity presents, but we’re not leading with regulations for a couple of reasons: Both the secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem. 

FS: I’m curious about how you see the risks with open-source AI versus things that are not open source. I know that’s a big discussion in the AI community. 

MEC: There are pros and cons to open-source AI. From a CBRN perspective, understanding some of the weights may be helpful, but they also may reveal more information. … There’s a lot of information that’s on the internet and it’s going to be very hard to protect that existing content right now from AI. 

There are also a lot of high-end bio and chem databases that are behind firewalls, that are not on the internet, that are subscription-based, that are really very valuable for biologists. One of the things we’re recommending doing [for] data that isn’t on the internet — or that isn’t readily available to use for models — is to actually have a higher standard, a higher customer standard, like a know-your-customer procedure. That benefits the promise of AI for good while detracting from bad actors and trying to get access to it. 

FS: Have you had conversations with some of the academic organizations and what are those conversations like? Are they open to this?

MEC: We spoke to a lot of academic organizations, a lot of think tanks, and all the major models. I don’t want to answer the question specifically about high-end databases, but I can say that across the board, people were very supportive of having appropriate controls around sensitive data. 

FS: How do we deal with companies that would not want to help with this or countries that would not want to help with this — like what’s the strategy there? 

MEC: That’s the whole idea. Everyone has to work collaboratively on this whole-of-community effort. Right now, there is a real appetite for that. All of this is early, but I think that people understand that [this is] the year and the time to try to build a governance framework in which to think about these issues.

FS: I’m curious if you would call this like a present threat or something that we should be worried about for the future, whether this is something we’re thinking about, like, this could happen tomorrow, or this could happen in a few years from now?

MEC: We tried to write the report to talk about present risk and near-term future risk. We can look at the speed and the rapidity in which AI models are developing and we can extrapolate kind of what the impact is. I want to highlight a couple of things with regard to the present-day risk to the near future. Right now, they say ChatGPT is like having an undergraduate biology student on your shoulder. There’s some discussion, as these models developed, that it would be like a graduate student on your shoulder. 

I also want to note that we’re talking about CBRN harms that are created by AI, but there also could be unintentional harm. We very much want to put in what I’m calling hurdles, or obstacles for people, who want to do harm, malign actors. But we also have to recognize that there could be unintentional harm that’s created by well-intending actors.

The other thing that we want to do with this whole-of-community effort with these guidelines and procedures that we’re encouraging to be created between international government, private sector, and academia, is to safeguard the digital to physical frontier. Right now, there’s a possibility that as I said, you could have an undergraduate student on your shoulder helping you to search to try to create a new chemical compound, but that — right now —  is mostly on the computer and then the screen and is not yet able to do it in real life. 

We’re really trying to make sure that the border between digital and physical remains as strong as it can be. That’s probably the … three-to-five-year risk: something happens and is capable of being translated into real life. It’s still going to be hard though, hopefully. 

DHS launches safety and security board focused on AI and critical infrastructure https://fedscoop.com/dhs-launches-safety-and-security-board-focused-on-ai-and-critical-infrastructure/ Fri, 26 Apr 2024 17:07:23 +0000 Executives from OpenAI, NVIDIA and Alphabet are among those taking part.

The Department of Homeland Security on Friday announced the creation of its new Artificial Intelligence Safety and Security Board. The formation of the group comes as the department ramps up its focus on AI and as concern grows about the technology’s impacts on critical services. 

The board includes representatives of major technology companies, including OpenAI CEO Sam Altman and Alphabet CEO Sundar Pichai, as well as experts focused on artificial intelligence and civil rights. Also represented are leaders of companies focused on computer chips, like Lisa Su of Advanced Micro Devices and Jensen Huang, president and CEO of NVIDIA. Government leaders in the group include Seattle Mayor Bruce Harrell and Arati Prabhakar, director of the White House Office of Science and Technology Policy. 

On a call with reporters Friday, DHS Secretary Alejandro Mayorkas gestured to a range of opportunities and concerns related to AI and critical infrastructure, which covers 16 sectors including defense, agriculture, energy, and internet technology. The department also expects to release guidelines related to critical infrastructure and artificial intelligence next week, he added.

The group will meet for the first time in May and will eventually form new recommendations for integrating artificial intelligence into critical infrastructure and protecting against any risks the technology might present. Mayorkas said he was personally involved in selecting members of the board, and addressed specific criticisms of OpenAI’s Altman by saying he had “no hesitation” about tapping the executive. 

“Ultimately AI is a tool, a potent tool, and it must be developed and applied with an understanding of how it will impact the individual, community, and society at large,” board member Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, said in a statement. 

Maya Wiley, another board member and the president and CEO of the Leadership Conference on Civil and Human Rights, added that “it is critical to have a civil rights perspective on any board with the mission to responsibly deploy artificial intelligence in our nation’s infrastructure. Critical infrastructure plays a key role ensuring everyone has equal access to information, goods, and services. It also poses great threats, including the spread of bias and hate speech online, stoking fear, distrust, and hate in our communities of color.”

Social media companies, which are not technically part of the 16 critical infrastructure sectors, are not represented on the board. “That’s a discrete line of endeavor that I did not feel is really within the center of what we are focused upon,” Mayorkas said in response to a FedScoop question. 

The department also pointed to progress in its push to beef up internal staff focused on artificial intelligence. The agency said it’s received 4,000 applications for its AI Corps, a group of 50 experts in the technology it hopes to hire over the course of this year. Michael Boyce, a former OMB official, will lead that group. DHS has also launched an AI roadmap and is developing several generative AI pilots.

DHS picks OMB official to lead its new AI Corps https://fedscoop.com/dhs-picks-omb-official-to-lead-its-new-ai-corps/ Thu, 25 Apr 2024 20:10:08 +0000 The agency is leaning heavily into the new technology, with a goal of hiring 50 AI experts throughout the year.

The Department of Homeland Security has named Michael Boyce as the new director of its AI Corps, an initiative to hire 50 artificial intelligence experts for the agency throughout the year. The announcement comes as DHS continues to ramp up its focus on the technology, and follows the agency’s release of a new AI roadmap and the establishment of an AI task force. 

“The Department of Homeland Security is by most accounts the largest federal civilian agency, the largest federal law enforcement agency, and the federal agency with [the] most daily interactions with the public,” said Boyce in a LinkedIn post Thursday. “As a former refugee officer who used to travel all over the world to determine which refugees can receive protection in the United States, I am keenly aware of the life-and-death importance of that mission.” 

The AI Corps, part of the department’s office of the chief information officer, will be expected to provide guidance for topic areas including software engineering, machine learning, and data science. DHS CIO Eric Hysen currently serves as the agency’s chief AI officer. Notably, the department’s components are already using myriad forms of AI, according to the agency’s AI Inventory.

Boyce was previously a senior policy analyst at the Office of Management and Budget, where he focused on FedRAMP and AI policy, according to DHS. He also worked as the chief of innovation and design for the Refugee, Asylum and International Operations Directorate at U.S. Citizenship and Immigration Services, and as a product and strategy lead at the U.S. Digital Service. 

DOJ seeks public input on AI use in criminal justice system https://fedscoop.com/doj-seeks-input-on-criminal-justice-ai/ Wed, 24 Apr 2024 21:36:41 +0000 The department’s research, development and evaluation arm will use the information as it puts together a report on AI in the criminal justice system due later this year.

The Justice Department’s National Institute of Justice is looking for public input on the use of artificial intelligence in the criminal justice system.

In a document posted for public inspection on the Federal Register Wednesday, the research, development and evaluation arm of the department said it’s seeking feedback to “inform a report that addresses the use of artificial intelligence (AI) in the criminal justice system.” Those comments are due 30 days after the document is published.

That report is among the actions intended to strengthen AI and civil rights that President Joe Biden included in his October 2023 executive order on the technology. According to the order, its aim is to “promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all.”

Ultimately, the report is required to address the use of the technology throughout the criminal justice system — from sentencing and parole to police surveillance and crime forecasting — as well as identify areas where AI could benefit law enforcement, outline recommended best practices, and make recommendations to the White House on additional actions. 

The DOJ must also work with the Homeland Security secretary and the director of the Office of Science and Technology Policy on that report, and it’s due 365 days after the order was issued.

Generative AI could raise questions for federal records laws https://fedscoop.com/generative-ai-could-raise-questions-for-federal-records-laws/ Mon, 22 Apr 2024 21:02:22 +0000 A clause in a DHS agreement with OpenAI opens the door to some debate on transparency issues.

The Department of Homeland Security has been eager to experiment with generative artificial intelligence, raising questions about what aspects of interactions with those tools might be subject to public records laws. 

In March, the agency announced several initiatives that aim to use the technology, including a pilot project that the Federal Emergency Management Agency will deploy to address hazard mitigation planning, and a training project involving U.S. Citizenship and Immigration Services staff. Last November, the department released a memo meant to guide its use of the technology. A month later, Eric Hysen, the department’s chief information officer and chief AI officer, told FedScoop that there’s been “good interest” in using generative AI within the agency. 

But the agency’s provisional approval of a few generative AI products — which include ChatGPT, Bing Chat, Claude 2, DALL-E 2, and Grammarly, per a privacy impact assessment — calls for closer examination with regard to federal transparency. Specifically, an amendment to OpenAI’s terms of service uploaded to the DHS website established that outputs from the model may constitute federal records and referenced freedom of information laws. 

“DHS processes all requests for records in accordance with the law and the Attorney General’s guidelines to ensure maximum transparency while protecting FOIA’s specified protected interests,” a DHS spokesperson told FedScoop in response to several questions related to DHS and FOIA. DHS tracks its FOIAs in a public log. OpenAI did not respond to a request for comment. 

“Agency acknowledges that use of Company’s Site and Services may require management of Federal records. Agency and user-generated content may meet the definition of Federal records as determined by the agency,” reads the agreement. “For clarity, any Federal Records-related obligations are Agency’s, not Company’s. Company will work with Agency in good faith to ensure that Company’s record management and data storage processes meet or exceed the thresholds required for Agency’s compliance with applicable records management laws and regulations.” 

Generative AI may introduce new questions related to the Freedom of Information Act, according to Enid Zhou, senior counsel at the Electronic Privacy Information Center, a digital rights group. She pointed to nuances related to “agency and user-generated content,” since the DHS-OpenAI clause doesn’t make clear whether the covered records are only the inputs or user prompts, the outputs produced by the AI system, or both. Zhou also flagged record management and data storage as a potential issue. 

“The mention of ‘Company’s record management and data storage processes’ could raise an issue of whether an agency has the capacity to access and search for these records when fulfilling a FOIA request,” she said in an email to FedScoop. “It’s one thing for OpenAI to work with the agency to ensure that they are complying with federal records management obligations but it’s another when FOIA officers cannot or will not search these records management systems for responsive records.”

She added that agencies could also try shielding certain outputs of generative AI systems by citing an exemption related to deliberative process privilege. “Knowing how agencies are incorporating generative AI in their work, and whether or not they’re making decisions based off of these outputs, is critical for government oversight,” she said. “Agencies already abuse the deliberative process privilege to shield information that’s in the public interest, and I wouldn’t be surprised if some generative AI material falls within this category.”

Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation, argued that generative AI outputs should be subject to FOIA and that agencies need a plan to “document and archive its use so that agencies can continue to comply properly with their FOIA responsibilities.”

“When FOIA officers conduct a search and review of records responsive to a FOIA request, there generally need to be notes on how the request was processed, including, for example, the files and databases the officer searched for records,” Lipton said. “If AI is being used in some of these processes, then this is important to cover in the processing notes, because requesters are entitled to a search and review conducted with integrity.”

Cybersecurity executive order requirements are nearly complete, GAO says https://fedscoop.com/cybersecurity-executive-order-requirements-gao-omb-cisa/ Mon, 22 Apr 2024 20:20:47 +0000 CISA and OMB have just a handful of outstanding tasks to finish as part of the president’s 2021 order.

Just a half-dozen leadership and oversight requirements from the 2021 executive order on improving the nation’s cybersecurity remain unfinished by the agencies charged with implementing them, according to a new Government Accountability Office report.

Between the Cybersecurity and Infrastructure Security Agency, the National Institute of Standards and Technology and the Office of Management and Budget, 49 of the 55 requirements in President Joe Biden’s order aimed at safeguarding federal IT systems from cyberattacks have been fully completed. Another five have been partially finished and one was deemed to be “not applicable” because of “its timing with respect to other requirements,” per the GAO.

“Completing these requirements would provide the federal government with greater assurance that its systems and data are adequately protected,” the GAO stated.

Under the order’s section on “removing barriers to threat information,” OMB has only partially incorporated a required cost analysis into its annual budget process.

“OMB could not demonstrate that its communications with pertinent federal agencies included a cost analysis for implementation of recommendations made by CISA related to the sharing of cyber threat information,” the GAO said. “Documenting the results of communications between federal agencies and OMB would increase the likelihood that agency budgets are sufficient to implement these recommendations.”

OMB also was unable to demonstrate to GAO that it had “worked with agencies to ensure they had adequate resources to implement” approaches for the deployment of endpoint detection and response, an initiative to proactively detect cyber incidents within federal infrastructure. 

“An OMB staff member stated that, due to the large number of and decentralized nature of the conversations involved, it would not have been feasible for OMB to document the results of all EDR-related communications with agencies,” the GAO said.

OMB still has work to do on logging as well. The agency shared guidance with other agencies on how best to improve log retention, log management practices and logging capabilities but did not demonstrate to the GAO that agencies had proper resources for implementation. 

CISA, meanwhile, has fallen a bit short on identifying and making available to agencies a list of “critical software” in use or in the acquisition process. OMB and NIST fully completed that requirement, but a CISA official told the GAO that the agency “was concerned about how agencies and private industry would interpret the list and planned to review existing criteria needed to validate categories of software.” A new version of the category list and a companion document with clearer explanations is forthcoming, the official added. 

CISA also has some work to do concerning the Cyber Safety Review Board. The multi-agency board, made up of representatives from the public and private sectors, has felt the heat from members of Congress and industry leaders over what they say is a lack of authority and independence. According to the GAO, CISA hasn’t fully taken steps to implement recommendations on how to improve the board’s operations. 

“CISA officials stated that it has made progress in implementing the board’s recommendations and is planning further steps to improve the board’s operational policies and procedures,” the GAO wrote. “However, CISA has not provided evidence that it is implementing these recommendations. Without CISA’s implementation of the board’s recommendations, the board may be at risk of not effectively conducting its future incident reviews.”

Federal agencies have, however, checked off the vast majority of boxes in the EO’s list. “For example, they have developed procedures for improving the sharing of cyber threat information, guidance on security measures for critical software, and a playbook for conducting incident response,” the GAO wrote. Additionally, the Office of the National Cyber Director, “in its role as overall coordinator of the order, collaborated with agencies regarding specific implementations and tracked implementation of the order.”

The GAO issued two recommendations to the Department of Homeland Security, CISA’s parent agency, and three to OMB on full implementation of the EO’s requirements. OMB did not respond with comments, while DHS agreed with GAO recommendations on defining critical software and improving the Cyber Safety Review Board’s operations.

The post Cybersecurity executive order requirements are nearly complete, GAO says appeared first on FedScoop.

ICE pursuing privacy approvals related to controversial phone location data https://fedscoop.com/ice-pursuing-privacy-approvals-related-to-controversial-phone-location-data/ Mon, 15 Apr 2024 21:48:14 +0000 https://fedscoop.com/?p=77255 The DHS component has had no new requests for commercial telemetry services since December 2022, but a privacy impact assessment filed by ICE is currently in review.

The post ICE pursuing privacy approvals related to controversial phone location data appeared first on FedScoop.

Back in January, Immigration and Customs Enforcement said it had stopped using commercial telemetry data that government agencies buy from private companies. That practice has been frequently criticized by civil rights groups, which argue that by purchasing phone location data from third parties, the government is essentially side-stepping the Fourth Amendment and violating people's privacy. 

But even though ICE says it has received no new requests for the use of commercial telemetry services since December 2022, the agency has filed a privacy impact assessment with the Department of Homeland Security’s privacy office for review. This kind of documentation is supposed to be released when agencies deploy a technology that could involve someone’s personal information. If and when that assessment is finalized, ICE plans to develop procedures focused on guiding the use of commercial telemetry services.

ICE did not address a series of questions posed by FedScoop regarding the draft PIA and the subcomponent’s use of commercial telemetry data, or CTD.

The move comes as civil rights advocates have raised repeated concerns about the use of commercial telemetry data. Relatedly, Sens. Ron Wyden, D-Ore., and Rand Paul, R-Ky., have introduced the Fourth Amendment is Not for Sale Act, which seeks to rein in the government’s use of this information, among other measures.

Adam Schwartz, the privacy litigation director at the Electronic Frontier Foundation, previously told FedScoop that the digital rights nonprofit’s “view is that the Fourth Amendment to our Constitution ought to be interpreted by courts to bar the government from purchasing this kind of data given that — in order to acquire this data directly from a person or from their service provider — they would have to get a warrant.”

Within DHS, the use of this data has raised alarm bells. Last September, the department’s office of inspector general released a redacted report highlighting that agency subcomponents had not followed the law nor complied with policies surrounding privacy. ICE, in particular, was flagged for using this data without an approved privacy impact assessment. The OIG’s office identified nine contracts covering access to two different databases for this data between fiscal years 2019 and 2020. 

ICE appears to have sent mixed messages on its approach to telemetry data. The subcomponent originally rebuffed a recommendation by the OIG to stop using this data until it obtained an approved PIA. At that time, DHS argued that telemetry data was “an important mission contributor to the ICE investigative process” and “can fill knowledge gaps and produce investigative leads.” In response to the OIG report, the agency said ICE was working to finalize a draft geolocation services PIA and taking steps to mitigate privacy risk concerns. 

Then, in January, ICE told FedScoop it was in compliance with the OIG recommendation and that it had stopped using telemetry data, but the agency did not provide an update on whether any related PIAs surrounding the technology had been approved. In March, the nonprofit journalism outlet NOTUS reported that DHS expected to stop buying access to this data, citing three people familiar with the matter. 

ICE says it has a PIA under review at the department’s privacy office, with procedures focused on the use of this data pending finalization. At the same time, it notes that it hasn’t received any new telemetry data requests since December 2022 and that no operational units have received approval to buy the data since 2021. That combination has raised questions among civil rights groups.

“We’re very concerned about, again, to what extent DHS is really able to provide accountability and oversight over ICE and whether ICE’s words or actions can be trusted,” said Julie Mao, co-founder and deputy director of Just Futures Law, a legal organization that focuses on immigrant rights. “Because they’re saying one thing about not wanting to use CTD — but then also clearly confirming that they’re moving forward with trying to use the same data.” 

The status of other recommendations made in the OIG report also remains unclear. Customs and Border Protection, for instance, was also instructed to discontinue use of phone location data until it obtained a privacy impact assessment. CBP told FedScoop that it didn’t have a need for telemetry data after the expiration of its contracts in fiscal year 2023, following an evaluation of the technology that began with a small group of staff in 2018 overseen by the CBP Privacy and Diversity Office and the CBP Office of Chief Counsel. 

The agency also said it discontinued a contract for the technology following FY2023. Contracts with its Field Operations’ National Targeting Center expired in September, and if the agency wants to use the data again, CBP said it will incorporate the OIG’s recommendations. 

But CBP did not respond to a question about the status of a privacy impact assessment for its past use of commercial telemetry data, which the DHS subcomponent originally said it expected to complete at the end of March. There doesn’t appear to be a PIA listed on the DHS website for this data.

DHS headquarters did not answer FedScoop’s question about the status of a department-wide commercial telemetry data policy, which the agency previously said it expected to complete at the end of June. The agency told FedScoop that its privacy policy and compliance instructions apply to personal information and that it has taken steps to implement recommendations from the OIG while continuing to address recommendations related to this data.

“The Department of Homeland Security (DHS) is committed to protecting individuals’ privacy, civil rights, and civil liberties,” an agency spokesperson told FedScoop. “The DHS Privacy Office coordinates with Components to embed and enforce privacy safeguards in DHS systems, technology, forms, and programs that collect personally identifiable information or have a privacy impact.”

Bipartisan House bill calls on DHS to leverage AI for border security https://fedscoop.com/dhs-cbp-house-ai-bill-border-security/ Tue, 02 Apr 2024 16:11:11 +0000 https://fedscoop.com/?p=76939 The Emerging Innovative Border Technologies Act asks the agency to deliver a report to Congress on how it can use the technology operationally at the border.

The post Bipartisan House bill calls on DHS to leverage AI for border security appeared first on FedScoop.

The Department of Homeland Security would be charged with figuring out how artificial intelligence and other emerging technologies could be used to secure the border under a bipartisan bill introduced Tuesday in the House.

The Emerging Innovative Border Technologies Act from Reps. Lou Correa, D-Calif., and Morgan Luttrell, R-Texas, calls on DHS to deliver a plan to Congress for how the agency can leverage AI, machine learning and nanotechnology to “enhance, or address capability gaps in, border security operations.”

“Border security means keeping drug and human traffickers away from our communities — and new, bleeding-edge technology that is already available for commercial use would give our hard-working officers the tools they need to keep us safe,” Correa, ranking member on the House Border Security and Enforcement Subcommittee, said in a press release. “Through this bipartisan effort, Congress will better understand how our officers can use new technology to stop smugglers, as well as identify and respond when migrants are crossing in remote and deadly conditions, and hopefully deliver them the resources they so desperately need.”

Luttrell, who serves with Correa on the House Border Security and Enforcement Subcommittee, said in the press release that the legislation “aims to combat and neutralize threats at our borders.”

“As cartels and foreign adversary operations become more sophisticated amidst the ongoing border crisis, the United States must deploy the latest and most advanced technologies available to our borders to disrupt these threats,” Luttrell said. “I’ll continue to push for effective measures to safeguard our country and enforce our laws.”

The legislation tasks at least one innovation team member from DHS’s Customs and Border Protection with researching new, innovative and disruptive commercial technologies that could be adapted into border security operations, in an attempt to “address both capability gaps and urgent mission needs and assess their potential outcomes,” per the press release. 

The innovation team will coordinate with the agency’s acquisition program office and others within CBP on identifying technologies, analyzing procurement methods, assessing privacy and security implications on border communities, and looking into legacy CBP technologies. 

In conjunction with DHS’s Science and Technology Directorate, CBP would additionally be responsible for incentivizing the private sector “to develop technologies that maybe help CBP meet mission needs” on border security, while also exploring partnership opportunities with small and disadvantaged businesses, intra-governmental entities, university centers of excellence and federal laboratories. 

The report to Congress — specifically the House and Senate Homeland Security committees — from the commissioner of CBP is due within 180 days of the enactment of the legislation. It should include operating procedures and protocols for reaching agreements on the use of technologies for border security, as well as planning and strategic goals, such as projected costs and performance indicators.  

The legislation from Correa and Luttrell comes less than a month after DHS released its first-ever AI roadmap, outlining the agency’s current uses and future plans for the technology. The roadmap includes several callouts to border security, including the agency’s use of “non-intrusive inspection technology to make border screenings more efficient and to combat the risks associated with smuggling fentanyl and other illicit goods.”

As TSA PreCheck enrollments surge, data shows complaints have followed https://fedscoop.com/tsa-precheck-complaints-data/ Fri, 22 Mar 2024 17:59:06 +0000 https://fedscoop.com/?p=76659 The TSA says increased customer service options are responsible for the jump in complaints, which is documented in newly available data.

The post As TSA PreCheck enrollments surge, data shows complaints have followed appeared first on FedScoop.

Complaints about the Transportation Security Administration’s PreCheck program, the popular expedited passenger screening system, have more than tripled since 2015, according to newly released data from an organization focused on obtaining government datasets and making them more accessible. 

That surge — from more than 2,500 in January 2015 to more than 8,500 in January of this year — is a notable increase, given that the number of passengers with TSA PreCheck who passed through the security checkpoint has only grown by 25% from 2015 to last year, according to separate data provided by the TSA to FedScoop. The metrics reflect a broader surge in complaints about the agency that were assembled from PDFs into data files by the Data Liberation Project, which recently digitized documents related to traveler complaints uploaded by the TSA to its online repository of Freedom of Information Act documents. 

“The TSA PreCheck program continues to be the premier trusted traveler program and has experienced significant growth since 2015. Total enrollments have surged by around 800%, from less than 2 million at the end of 2015 to 18 million at the end of 2023,” a spokesperson for the agency told FedScoop in response to questions about the data. “As the program continues to grow and new travelers begin using the program, we expect to experience an increased volume in queries. To enhance options for TSA PreCheck members, TSA has increased promotion of available customer service alternatives.”

The spokesperson said that changes to several platforms and customer service tools are responsible for the rise in complaints. In May 2021, the agency created a new TSA PreCheck webform, after which complaints increased around 79% over the following four months. That August, the agency deployed messaging enhancements that, in combination with the new online form, saw complaints grow by 62% in the subsequent four months. (Switching to Salesforce for the TSA Contact Center at the end of 2020 also meant that the airport field in the data started to populate.) 

After publication of this story, a different TSA spokesperson said the jump in complaints can also be attributed to the fact that in 2015, many travelers that received expedited screening were “rules-based” as opposed to being enrolled in or paying for PreCheck. With “little expectation they would receive” the benefit, those “rules-based passengers were not likely to contact TSA to complain,” the spokesperson said. Now, there are roughly 36 million travelers eligible to use TSA PreCheck and the agency has taken steps to make registering complaints easier, the spokesperson added.

The data collected by the Data Liberation Project, and then analyzed by FedScoop, reflect complaints about the expedited passenger screening program, which the TSA refers to as PreCheck. Complaints within this category can also include specific concerns raised by active-duty military members, complaints about TSA PreCheck applications, and issues with expedited screening for Department of Homeland Security employees. The second TSA spokesperson said 85.1% of individuals “contact TSA because they do not receive TSA PreCheck on their boarding pass.”

In the past, the TSA has suggested that complaints, which are aggregated by the Department of Transportation’s Office of Aviation Consumer Protection, stem from preventable passenger issues, such as people not checking their documentation correctly. 

Meanwhile, the Data Liberation Project is also calling on the DHS component to make the dataset more accessible. The project’s director, Jeremy Singer-Vine, said that among other formatting challenges, the PDF form in which the TSA data is published can prevent deeper analysis. To deal with the problem, the DLP developed custom software to read through those documents. 

“All the data were trapped in PDFs, a format that (unlike standard spreadsheets) allows for no sorting, filtering, or analysis,” Singer-Vine told FedScoop in an email. “Worse, the reports use a deeply idiosyncratic layout, which meant the public couldn’t accurately copy-paste the text in the PDF into a spreadsheet. Without the ability to analyze the complaint counts, what value do the PDFs actually provide to the public?”
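The DLP has not published the details of its custom parser in this story, but the general approach it describes — walking the text extracted from a report line by line, tracking the current section header, and emitting one spreadsheet row per data line — can be sketched as follows. This is a minimal illustration only: the sample layout, regexes, and field names below are hypothetical stand-ins, not the TSA reports' actual (and far messier) format.

```python
import csv
import io
import re

# Hypothetical example of the kind of idiosyncratic fixed-layout text that
# character extraction pulls out of a PDF report. The real TSA reports use a
# different, messier layout; this only illustrates the general technique.
SAMPLE_EXTRACTED_TEXT = """\
Category: TSA PreCheck            Month: 2024-01
  LAX   Los Angeles Intl            312
  ATL   Hartsfield-Jackson          287
Category: Checked Baggage          Month: 2024-01
  JFK   John F. Kennedy Intl        154
"""

CATEGORY_RE = re.compile(r"^Category:\s+(.+?)\s+Month:\s+(\S+)")
ROW_RE = re.compile(r"^\s+([A-Z]{3})\s+(.+?)\s{2,}(\d+)\s*$")

def parse_report(text):
    """Walk the extracted text line by line, tracking the current
    category/month header and emitting one record per airport row."""
    records = []
    category = month = None
    for line in text.splitlines():
        header = CATEGORY_RE.match(line)
        if header:
            category, month = header.groups()
            continue
        row = ROW_RE.match(line)
        if row and category:
            code, name, count = row.groups()
            records.append({
                "category": category,
                "month": month,
                "airport": code,
                "airport_name": name,
                "complaints": int(count),
            })
    return records

def to_csv(records):
    """Serialize parsed records to CSV so they can be sorted, filtered and
    analyzed in any spreadsheet tool -- the capability the PDFs lack."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

records = parse_report(SAMPLE_EXTRACTED_TEXT)
print(len(records))           # 3 rows parsed from the sample text
print(records[0]["airport"])  # LAX
```

The payoff of this kind of conversion is exactly what Singer-Vine describes: once the rows are in CSV rather than a PDF's visual layout, standard sorting, filtering and aggregation become possible.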

This story was updated March 28, 2024, with comments from a second TSA spokesperson.
