Department of Health and Human Services (HHS) Archives | FedScoop
https://fedscoop.com/tag/department-of-health-and-human-services-hhs/
FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

HHS names acting chief AI officer as it searches for permanent official
https://fedscoop.com/hhs-names-acting-chief-ai-officer/
Wed, 29 May 2024 15:58:57 +0000
Micky Tripathi will serve as acting CAIO in addition to his role as national coordinator for health IT, a spokesperson said.

The Department of Health and Human Services has designated Micky Tripathi, its national coordinator for health IT, as acting chief artificial intelligence officer while it searches for a permanent replacement, a department spokesperson confirmed to FedScoop.

“Micky has been a leading expert in our AI work and will provide tremendous expertise and relationships across HHS and externally to guide our efforts in the coming months,” the spokesperson said. “Micky already serves as co-chair of the HHS AI Task Force. He will continue in his role as National Coordinator for Health IT during the search for a permanent Chief AI Officer.”  

Greg Singleton, the previous CAIO, is still part of the agency’s IT workforce, the spokesperson confirmed. But they also noted that the Office of Management and Budget required agencies to designate CAIOs at the executive level in an effort to improve accountability for AI issues. 

HHS didn’t say when the department had named Tripathi as acting CAIO, but the change appears to have been made recently on the agency’s website. Singleton was still listed as CAIO as of at least May 14, per a copy of HHS’s Office of the CAIO webpage archived in the Wayback Machine. According to the webpage at the time of this story, the content was last reviewed on May 24.

Under President Joe Biden’s AI executive order, CAIOs serve as the official in charge of promoting the use of the technology within an agency and managing its risks. The requirement to have such an official went into effect 60 days after OMB’s memo on AI governance, which would have been May 27.

Many agencies moved quickly to designate CAIOs after the order, tapping officials such as chief information, data and technology officers to carry out the role. Other agencies already had a CAIO, including HHS and the Department of Homeland Security. In fact, the position at HHS has been around since 2021 when the agency named Oki Mek as its first CAIO. Singleton replaced Mek as the department’s top AI official in March 2022.

HHS IT draft strategy aims to connect health data with systems
https://fedscoop.com/hhs-it-draft-strategy-aims-to-connect-health-data-with-systems/
Thu, 28 Mar 2024 19:24:10 +0000
The health agency's Office of the National Coordinator for Health Information Technology released a draft framework for health IT policies that leans into data, delivery and innovation.

The Department of Health and Human Services is seeking comments on a draft federal health IT strategic plan.

The draft, released Wednesday by HHS's Office of the National Coordinator for Health Information Technology (ONC), establishes the following four goals: "promote health and wellness, enhance the delivery and experience of care, accelerate research and innovation and connect the health system with health data." ONC said in a press release that it collaborated on the plan with 25 federal agencies that interact with health IT by purchasing, developing and regulating it, in order to improve health outcomes.

Among the overarching goals, the agency emphasized the need for a focus on “the policy and technology components needed to support” health IT users in connecting the system with data. 

Significantly, ONC states that the federal government plans to encourage “education, outreach and transparency” about artificial intelligence use so that both individuals and health care providers are informed about the performance and privacy practices of the technology. 

“The role of health IT and readily available access to health data have become increasingly essential to the administration of public health activities,” Jim Jirjis, director of the Centers for Disease Control and Prevention’s data policy and standards division, said in the release. “CDC appreciates how the draft 2024-2030 Federal Health IT Strategic Plan addresses the need to continue to advance the nation’s public health data infrastructure, while making sure that it is benefiting the communities that need it most.”

Additionally, Meg Marshall, director of informatics regulatory affairs at the Veterans Health Administration, said in the release that the VA is also seeking comments “so that veterans too can benefit from the goals of a coordinated federal health IT strategy.”

Marshall's statement follows a litany of problems with the VA's Oracle Cerner-run electronic health record, including patient safety issues with EHR pharmacy software and a watchdog report about a veteran's death tied to a scheduling error. The system was originally launched in 2020 in an effort to "establish interoperability of records between the VA and [Department of Defense] health care systems." The rollout was suspended in 2023 as part of a reset, and the department said it was working toward holding Oracle Cerner accountable for delivering high-quality services.

Marshall said in the statement that as the VA works to modernize its EHR system, ONC’s draft plan “provides direction towards a seamless health care experience” for patients and providers. 

“Not only that, the draft Federal Health IT Strategic Plan serves as an actionable roadmap for the federal government to align and coordinate health IT efforts in a transparent and accountable manner,” Marshall said.

How risky is ChatGPT? Depends which federal agency you ask
https://fedscoop.com/how-risky-is-chatgpt-depends-which-federal-agency-you-ask/
Mon, 05 Feb 2024 17:20:57 +0000
A majority of civilian CFO Act agencies have come up with generative AI strategies, according to a FedScoop analysis.

From exploratory pilots to temporary bans on the technology, most major federal agencies have now taken some kind of action on the use of tools like ChatGPT. 

While many of these actions are still preliminary, the growing focus on the technology signals that federal officials expect not only to govern generative AI but eventually to use it.

A majority of the civilian federal agencies that fall under the Chief Financial Officers Act have either created guidance, implemented a policy, or temporarily blocked the technology, according to a FedScoop analysis based on public records requests and inquiries to officials. The approaches vary, highlighting that different sectors of the federal government face unique risks — and unique opportunities — when it comes to generative AI. 

As of now, several agencies, including the Social Security Administration, the Department of Energy and Veterans Affairs, have taken steps to block the technology on their systems. Some, including NASA, have established or are working to establish secure testing environments to evaluate generative AI systems. The Agriculture Department has even set up a board to review potential generative AI use cases within the agency.

Some agencies, including the U.S. Agency for International Development, have discouraged employees from inputting private information into generative AI systems. Meanwhile, several agencies, including Energy and the Department of Homeland Security, are working on generative AI projects. 

The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury did not respond to requests for comment, so their approach to the technology remains unclear. Other agencies, including the Small Business Administration, referenced their work on AI but did not specifically address FedScoop’s questions about guidance, while the Office of Personnel Management said it was still working on guidance. The Department of Labor didn’t respond to FedScoop’s questions about generative AI. FedScoop obtained details about the policies of Agriculture, USAID, and Interior through public records requests. 

The Biden administration’s recent executive order on artificial intelligence discourages agencies from outright banning the technology. Instead, agencies are encouraged to limit access to the tools as necessary and create guidelines for various use cases. Federal agencies are also supposed to focus on developing “appropriate terms of service with vendors,” protecting data, and “deploying other measures to prevent misuse of Federal Government information in generative AI.”

Agency policies on generative AI differ
USAID
  Policy or guidance: Neither banned nor approved, but employees discouraged from using private data in a memo sent in April.
  Notes: Didn't respond to a request for comment. Document was obtained via FOIA.

Agriculture
  Policy or guidance: Interim guidance distributed in October 2023 prohibits employee or contractor use in an official capacity and on government equipment. Established a review board for approving generative AI use cases.
  Risk assessment: A March risk determination by the agency rated ChatGPT's risk as "high."
  Notes: OpenAI disputed the relevance of a vulnerability cited in USDA's risk assessment, as FedScoop first reported.

Education
  Policy or guidance: Distributed initial guidance to employees and contractors in October 2023. Developing comprehensive guidance and policy. Conditionally approved use of public generative AI tools.
  Sandbox: Is working with vendors to establish an enterprise platform for generative AI.
  Relationship with generative AI provider: Not at the time of inquiry.
  Notes: Agency isn't aware of generative AI uses in the department and is establishing a review mechanism for future proposed uses.

Energy
  Policy or guidance: Issued a temporary block of ChatGPT but said it's making exceptions based on needs.
  Sandbox: Sandbox enabled.
  Relationship with generative AI provider: Microsoft Azure and Google Cloud.

Health and Human Services
  Policy or guidance: No specific vendor or technology is excluded, though subagencies, like the National Institutes of Health, prevent use of generative AI in certain circumstances.
  Notes: "The Department is continually working on developing and testing a variety of secure technologies and methods, such as advanced algorithmic approaches, to carry out federal missions," Chief AI Officer Greg Singleton told FedScoop.

Homeland Security
  Policy or guidance: For public, commercial tools, employees must seek approval and attend training. Four systems, ChatGPT, Bing Chat, Claude 2 and DALL-E 2, are conditionally approved.
  Sandbox: Only for use with public information.
  Relationship with generative AI provider: In conversations.
  Notes: DHS is taking a separate approach to generative AI systems integrated directly into its IT assets, CIO and CAIO Eric Hysen told FedScoop.

Interior
  Policy or guidance: Employees "may not disclose non-public data" in a generative AI system "unless or until" the system is authorized by the agency. Generative AI systems "are subject to the Department's prohibition on installing unauthorized software on agency devices."
  Notes: Didn't respond to a request for comment. Document was obtained via FOIA.

Justice
  Policy or guidance: The DOJ's existing IT policies cover artificial intelligence, but there is no separate guidance for AI. No use cases have been ruled out.
  Sandbox: No plans to develop an environment for testing currently.
  Relationship with generative AI provider: No formal agreements beyond existing contracts with companies that now offer generative AI.
  Notes: DOJ spokesperson Wyn Hornbuckle said the department's recently established Emerging Technologies Board will ensure that DOJ "remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies."

State
  Policy or guidance: Initial guidance doesn't automatically exclude use cases. No software type is outright forbidden, and generative AI tools can be used with unclassified information.
  Sandbox: Currently developing a tailored sandbox.
  Relationship with generative AI provider: Currently modifying terms of service with AI service providers to support State's mission and security standards.
  Notes: A chapter in the Foreign Affairs Manual, as well as State's Enterprise AI strategy, apply to generative AI, according to the department.

Veterans Affairs
  Policy or guidance: Developed internal guidance in July 2023 based on the agency's existing ban on using sensitive data on unapproved systems. ChatGPT and similar software are not available on the VA network.
  Risk assessment: Didn't directly address, but said the agency is pursuing low-risk pilots.
  Relationship with generative AI provider: VA has contracts with cloud companies offering generative AI services.

Environmental Protection Agency
  Policy or guidance: Released a memo in May 2023 saying personnel were prohibited from using generative AI tools while the agency reviewed "legal, information security and privacy concerns." Employees with "compelling" uses are directed to work with the information security officer on an exception.
  Risk assessment: Conducting a risk assessment.
  Sandbox: No testbed currently.
  Relationship with generative AI provider: EPA is "considering several vendors and options in accordance with government acquisition policy," and is "also considering open-source options," a spokesperson said.
  Notes: The agency intends to create a more formal policy in line with Biden's AI order.

General Services Administration
  Policy or guidance: Publicly released a policy in June 2023 saying it blocked third-party generative AI tools on government devices. According to a spokesperson, employees and contractors can only use public large language models for "research or experimental purposes and non-sensitive uses involving data inputs already in the public domain or generalized queries. LLM responses may not be used in production workflows."
  Sandbox: Agency has "developed a secured virtualized data analysis solution that can be used for generative AI systems," a spokesperson said.

NASA
  Policy or guidance: May 2023 policy says public generative AI tools are not cleared for widespread use on sensitive data. Large language models can't be used in production workflows.
  Risk assessment: Cited security challenges and limited accuracy as risks.
  Sandbox: Currently testing the technology in a secure environment.

National Science Foundation
  Policy or guidance: Guidance for generative AI use in proposal reviews expected soon; also released guidance for the technology's use in merit review. A set of acceptable use cases is being developed.
  Sandbox: "NSF is exploring options for safely implementing GAI technologies within NSF's data ecosystem," a spokesperson said.
  Relationship with generative AI provider: No formal relationships.

Nuclear Regulatory Commission
  Policy or guidance: In July 2023, the agency issued an internal policy statement to all employees on generative AI use.
  Risk assessment: Conducted "some limited risk assessments of publicly available gen-AI tools" to develop the policy statement, a spokesperson said. NRC plans to continue working with government partners on risk management, and will work on security and risk mitigation for internal implementation.
  Sandbox: NRC is "talking about starting with testing use cases without enabling for the entire agency, and we would leverage our development and test environments as we develop solutions," a spokesperson said.
  Relationship with generative AI provider: Has a Microsoft license for Azure AI. NRC is also exploring the implementation of Microsoft Copilot when it's added to the Government Community Cloud.
  Notes: "The NRC is in the early stages with generative AI. We see potential for these tools to be powerful time savers to help make our regulatory reviews more efficient," said Basia Sall, deputy director of the NRC's IT Services Development & Operations Division.

Office of Personnel Management
  Policy or guidance: The agency is currently working on generative AI guidance.
  Notes: "OPM will also conduct a review process with our team for testing, piloting, and adopting generative AI in our operations," a spokesperson said.

Small Business Administration
  Policy or guidance: SBA didn't address whether it had a specific generative AI policy.
  Notes: A spokesperson said the agency "follows strict internal and external communication practices to safeguard the privacy and personal data of small businesses."

Social Security Administration
  Policy or guidance: Issued a temporary block on the technology on agency devices, according to a 2023 agency report.
  Notes: Didn't respond to a request for comment.
Sources: U.S. agency responses to FedScoop inquiries and public records.
Note: Chart displays information obtained through records requests and responses from agencies. The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury didn’t respond to requests for comment. The Department of Labor didn’t respond to FedScoop’s questions about generative AI.

CDC eyeing 'model cards' to detail generative AI tool information
https://fedscoop.com/cdc-eyeing-model-cards-to-detail-generative-ai-tool-information/
Wed, 31 Jan 2024 22:36:27 +0000
"Model cards" could provide context for using generative AI tools, an official told attendees at AFCEA Bethesda's Health IT Summit 2024.

The Centers for Disease Control and Prevention is weighing the use of so-called “model cards” to detail key information about generative AI models it deploys, an agency data official said.

As part of its broader approach to AI governance, the CDC is considering “at least as a minimum” having model cards — which contain information like what’s in a model and how it’s made — deployed alongside its generative AI tools, Travis Hoppe, associate director for data science and analytics at the agency’s National Center for Health Statistics, said Tuesday.

“There’s always a risk when running a model, and you need that context for use,” Hoppe said at AFCEA Bethesda’s Health IT Summit 2024. “You need all of the quantitative metrics … but you also need this kind of qualitative sense, and the model card does capture that.” That information could be useful for evaluating potential risks when someone is considering new uses for a system years after it was initially deployed, Hoppe explained.
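In practice, a model card is just structured metadata that ships alongside a model, pairing quantitative metrics with the qualitative context Hoppe describes. The sketch below is purely illustrative: the `ModelCard` class, its field names and the example values are assumptions for this article, not the CDC's actual format.

```python
# Minimal illustrative "model card" for a deployed generative AI tool.
# Hypothetical sketch of the concept only; field names are assumed.

from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str           # the context the model was approved for
    training_data: str          # qualitative note on what's in the model
    quantitative_metrics: dict  # e.g. accuracy or calibration scores
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="summarizer-demo",
    version="0.1",
    intended_use="Drafting internal summaries; not for clinical decisions.",
    training_data="Publicly available text; no patient records.",
    quantitative_metrics={"rougeL": 0.41},
    known_limitations=["May fabricate citations", "English-only evaluation"],
)

# The card serializes to plain data, so it can travel with the model.
print(asdict(card)["version"])  # -> 0.1
```

Because the card travels with the model, a reviewer weighing a new use case years after deployment can check the recorded intended use and limitations first, which is the risk-evaluation scenario Hoppe describes.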

Considering model cards comes as the CDC, along with many other federal agencies, is exploring its own approach to governing generative AI use. The guardrails that agencies develop will ultimately play an important role in how the government interacts with the rapidly growing technology that it’s already using.

The CDC, for example, has started 15 generative AI pilots, Hoppe said, though he noted that those projects “are not particularly focused on public health impact.” Hoppe said the agency wanted to “tease out” things like security, how its IT infrastructure worked, and how employees interact with the tools before thinking about expanding uses in the rest of the agency. 

Meanwhile, Hoppe said the agency is in the process of developing guidance for generative AI. While the CDC is looking to executive orders, NIST’s AI Risk Management Framework, and the Department of Health and Human Services’ Trustworthy AI Playbook, he said much of what already exists isn’t “fully prescriptive” of what agencies should do.

“So we’re starting to write out some of these very prescriptive things that we should be doing, and kind of adapting it for our specific mission, which is obviously focused on public health,” Hoppe said. 

The panel discussion about generative AI featured several other HHS officials and was moderated by Paul Brubaker, deputy chief information officer for strategic integration of emerging concepts at the Department of Veterans Affairs Office of Information Technology.


Kevin Duvall, the Administration for Children and Families’ chief technology officer, said during the panel that his agency’s approach to generative AI is detailed in an interim policy that permits employee use of those tools with some constraints. That approach contrasted with other agencies’ prohibitions of third-party generative AI tools. 

Duvall said he doesn’t find it useful for the “government to artificially constrain something,” though he said there needs to be “checks and balances.” 

“I really make a comparison to probably discussions we were having 20, 25 years ago about search engines. You know, search engines can give unreliable results, so can gen AI,” Duvall said. 

One use case the agency has looked into for the technology is in the grants-making area, much of which is done through text, Duvall said, adding that the agency sees it as a “decision-assisting tool” and “not a decision-making tool.” 

ARPA-H launches mobile health program aimed at rural communities
https://fedscoop.com/arpa-h-launches-mobile-health-program/
Tue, 16 Jan 2024 17:29:37 +0000
The new Platform Accelerating Rural Access to Distributed & InteGrated Medical care, or PARADIGM, sets out to develop a "rugged electric vehicle platform" to access rural communities.

A new program launched Tuesday by the Advanced Research Projects Agency for Health aims to build an advanced mobile health option that can close the care gap for rural communities.

The ARPA-H program, called the Platform Accelerating Rural Access to Distributed & InteGrated Medical care, or PARADIGM, is intended to be “a multi-functional, rugged electric vehicle platform” that will employ medical devices capable of conducting screenings and testing for people in rural areas, the agency said in a release. 

That may include things like a mini CT scanner and digital training tools for health care workers, which the program is exploring. According to the release, the program will also seek to build software to connect devices both on the vehicles and remotely with electronic health records systems.

“What we aim to do is to develop a mobile health vehicle unit platform that essentially acts as a unit of a hospital,” Bon Ku, PARADIGM’s program manager, said during a call with reporters Tuesday. “This platform will allow patients not only to have virtual visits, but also to obtain imaging tests, advanced imaging tests, like CT scans, MRIs, ultrasounds, lab testing, and also interventions ranging from maternal health appointments to sophisticated appointments like obtaining dialysis.”

The program will issue a solicitation covering five technical areas, including a ruggedized CT scanner, a medical Internet of Things platform and intelligent task guidance, according to the release. The program expects to make multiple awards and said resources "will depend on the quality of the proposals received and the availability of funds."

HHS maintains deadline for AI transparency requirements in new tech certification rule
https://fedscoop.com/hhs-maintains-ai-transparency-requirement-deadline/
Mon, 18 Dec 2023 22:06:23 +0000
Already certified health IT will need to comply with new artificial intelligence and algorithm requirements by the end of next year under an HHS final rule, underscoring the administration's focus on the technology.

A final Department of Health and Human Services rule will require developers seeking certification for health IT that employs artificial intelligence or other algorithms to meet certain transparency criteria by the end of 2024, despite calls in comments to push that deadline back.

While the final rule from HHS’s Office of the National Coordinator for Health Information Technology extended deadlines for other requirements between the proposed and final versions — such as requirements related to a new baseline standard for the health IT certification program — it maintained the end-of-next-year deadline for the AI and algorithms portion, underscoring the Biden administration’s focus on regulating the nascent and growing technology.

Under the rule — called Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, or HTI-1 — developers will need to update health IT currently certified under ONC’s old requirements by Dec. 31, 2024. Those new requirements mandate that tools used to aid decision-making that use AI and algorithms must share information about how the technology works as part of the agency’s certification process.

“I think it will be very interesting, you know, with that deadline coming a year out to see how the vendor community responds, and … can they make these new requirements work?” Jonathan French, senior director of public policy and content development at the Healthcare Information and Management Systems Society (HIMSS), said in an interview. 

In July, HIMSS recommended the agency delay the deadline to 2026. 

ONC Deputy National Coordinator Steven Posnack acknowledged the change in several deadlines in a call with reporters after the final rule was released last week, saying the agency “sought to space and pace many of the different requirements over time” to give industry time to make incremental adjustments. 

But Posnack added that “the algorithm-related transparency was a high priority for our office, the secretary, and administration. That’s one of the ones that we required straight out of the gate within a one-year time period.”

The algorithmic and AI requirements for decision support interventions (DSI) in the final rule come as the Biden administration has intensified its focus on regulating the technology. For example, the administration last week also announced voluntary commitments from health care companies to harness AI while managing its risks.

Generally, the algorithm requirements in the rule are aimed at promoting transparency and responsible AI use in health IT, such as electronic health records systems. In an interview with FedScoop in June, Micky Tripathi, the national coordinator for health IT, described the requirements as a “nutrition label” for algorithms.
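Tripathi's "nutrition label" analogy can be pictured as a structured set of disclosures attached to a decision support tool. The snippet below is a hypothetical sketch only: HTI-1 enumerates its own specific source attributes, and the field names and values here are illustrative assumptions, not the rule's actual schema.

```python
# Hypothetical "nutrition label" for a predictive decision support
# intervention (DSI). Field names and values are illustrative only;
# the HTI-1 rule defines its own list of source attributes.

dsi_label = {
    "intervention": "sepsis-risk-score",           # hypothetical tool
    "developer": "ExampleVendor (hypothetical)",
    "purpose": "Flag patients at elevated risk of sepsis",
    "input_features": ["heart rate", "temperature", "lab results"],
    "training_population": "Adult inpatients, 2015-2020 (illustrative)",
    "performance": {"auroc": 0.82},
    "last_updated": "2024-01-01",
}

def missing_fields(label, required):
    """Return the required label fields that are absent or empty."""
    return [k for k in required if not label.get(k)]

required = ["purpose", "input_features", "training_population", "performance"]
print(missing_fields(dsi_label, required))  # -> []
```

A completeness check like `missing_fields` illustrates how a certification process could mechanically flag tools whose labels omit required disclosures, which is the transparency goal the rule is after.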

In addition to the new AI and algorithm certification requirements, the rule also makes data interoperability updates to its health IT certification process and implements provisions under the 21st Century Cures Act.

ONC’s certification program is voluntary, but it’s incentivized by requirements that hospitals and physicians use certified systems when participating in certain Centers for Medicare and Medicaid Services payment programs. In a press release about the rule last week, ONC said health IT that it has certified “supports the care delivered by more than 96% of hospitals and 78% of office-based physicians around the country.”

While the deadline remains, industry experts analyzing the 916-page document said the final rule so far appears to address other concerns around the algorithm transparency portion. French, for example, pointed to more detailed information on the “source attribution” requirements and more detail on testing information. 

During the public comment period, the ONC received responses that said the rule’s AI and algorithmic requirements went too far, and others that said they didn’t go far enough. The American College of Cardiology called the proposal “overly broad,” whereas Ron Wyatt, the chief scientist and medical officer at the Society to Improve Diagnosis in Medicine, argued that the rule should go further, requiring information provided about the algorithms to be made publicly available. 

The final rule does include changes, according to ONC. “In response to public comments, the final DSI criterion includes clearer [and] more precisely scoped requirements for health IT developers,” ONC said in a fact sheet accompanying the rule. “In particular, the final criterion requires that health IT developers are responsible for only the predictive DSIs that they supply as part of their certified health IT.”

Joseph Cody, associate director for health IT and digital health policy at the American College of Cardiology, said ONC “made steps in the right direction” with some of the changes. 

In particular, he said "they did a much better job of delineating the difference between" evidence-based and predictive DSIs, though he said some of the definitions are "still overly broad."

“If you’re expecting clinicians to spend a lot of time to go through and look at all the different components that are required to be publicly transparent and available to them, it becomes very hard for that clinician to be able to spend that time,” Cody said. He added that ACC is looking forward to having conversations with federal agencies about additional steps.

The final rule also includes ongoing maintenance and risk management requirements for health IT developers to keep “source attributable” information in the DSIs up-to-date, according to ONC. Health IT will be required to comply with the maintenance certification starting January 2025.

Rebecca Heilweil contributed to this article.

HHS exploring program management office support for departmentwide zero trust implementation
https://fedscoop.com/hhs-exploring-zero-trust-program-management-office/
Mon, 20 Nov 2023 23:31:18 +0000
Achieving zero trust will require HHS to "significantly upgrade governance and Information Technology (IT) management," the department said in a request for information about establishing a program management office.

The Department of Health and Human Services is exploring establishing a “program management office support” focused on assisting with zero-trust security implementation across the department, according to a Monday contracting solicitation.

As part of that process, HHS's Office of the Chief Information Officer is looking for potential contractors that could identify capabilities and gaps related to zero trust in each operating division, develop and maintain a zero-trust scorecard, and establish a zero-trust roadmap, among other things, according to the request for information posted to federal contracting website SAM.gov.

The information security office within the OCIO is currently conducting market research on the establishment and maintenance of a program management office support for zero trust, according to the solicitation, and is looking to get information from interested parties by Dec. 6.

“While a few [operating divisions] within HHS have Zero Trust Maturity (ZTM) plans in place, HHS is just beginning to align resources to a department wide Zero Trust Strategy,” according to the solicitation.

HHS didn’t respond to a request for comment.

The solicitation comes as agencies work to achieve the Biden administration’s standards to improve cybersecurity through governmentwide zero-trust security architecture by the end of fiscal year 2024. 

While the Biden administration issued a strategy for achieving those goals, efforts can vary by agency. For example, Department of Commerce CIO Andre Mendes told FedScoop in July that the agency elected to take a department-wide approach rather than letting bureaus chart their own course.

Although the department already has many of the skills and technologies required by Biden’s zero-trust architecture strategy, the solicitation said that “putting all the components together requires HHS to significantly upgrade governance and Information Technology (IT) management, and more deeply integrate teams and technologies.”

At least one agency is already establishing a zero-trust program management office. The Department of Education is getting funding under the General Services Administration’s Technology Modernization Fund to establish an “enterprise-wide program management office dedicated to zero trust,” according to the TMF website. 

The Department of Education awarded a contract to ShorePoint Inc. to provide program management office support.

ARPA-H announces up to $50 million for six health data security projects https://fedscoop.com/arpa-h-invests-50-million-health-data-security/ Mon, 02 Oct 2023 19:54:25 +0000 https://fedscoop.com/?p=73280 The investment will fund contracts to advance technology that addresses security vulnerabilities for health data through the agency’s Digital Health Security (DIGIHEALS) project.

The post ARPA-H announces up to $50 million for six health data security projects appeared first on FedScoop.

The latest investment from the Advanced Research Projects Agency for Health (ARPA-H) will direct millions toward projects looking to make advancements in technologies that protect the security of health data.

ARPA-H, which sits within the U.S. Department of Health and Human Services, on Friday announced up to $50 million in funding for six contracts through its Digital Health Security (DIGIHEALS) project, which is focused on the electronic infrastructure of the U.S. healthcare system.

That investment will go toward projects operated by a combination of universities and companies that are focused on automated patching for medical devices, ransomware intervention, cognitive health assistants, cyber reasoning techniques, and electronic health record consolidation. 

“Together, these six contract awards represent a step forward in funding cutting-edge data security technologies to address pressing vulnerabilities in our health systems that are currently not addressed through existing national security efforts or the public and private sectors,” Andrew Carney, a program manager at ARPA-H, said in a release.

The agency, which is aimed at supporting biomedical and health breakthroughs, launched DIGIHEALS this summer and began soliciting project proposals.

Among the awardees, Systems & Technology Research LLC will receive up to $16 million for a project that aims to leverage technology originally developed for the Defense Advanced Research Projects Agency (DARPA) to develop automated medical device patching (AMdP2) technology.

“If successful, AMdP2 will provide medical device manufacturers and cybersecurity firms with an automated firmware vulnerability detection and remediation capability,” the announcement said.

The University of California San Diego, meanwhile, will receive up to $9.4 million for a project to develop a healthcare ransomware resiliency and response program (H-R3P) that will mitigate the impact of cyberattacks on healthcare delivery organizations.

ARPA-H looks to strengthen US hospital infrastructure in face of continued cyberattacks https://fedscoop.com/arpa-h-looks-to-strengthen-us-hospital-infrastructure-in-face-of-continued-cyberattacks/ Mon, 28 Aug 2023 19:50:18 +0000 https://fedscoop.com/?p=72364 "DIGIHEALS aims to ensure patients continue to receive care in the wake of a widespread cyberattack on a medical facility — like those that have caused hospitals to close their doors permanently," ARPA-H said in an announcement.

The post ARPA-H looks to strengthen US hospital infrastructure in face of continued cyberattacks appeared first on FedScoop.

The Department of Health and Human Services’ cutting-edge research agency announced a new initiative to better protect the nation’s hospitals from mounting cyberattacks that can put patients’ lives at risk.

The Advanced Research Projects Agency for Health (ARPA-H) last week launched its Digital Health Security (DIGIHEALS) project, looking to contract for “proven technologies developed for national security and apply them to civilian health systems, clinical care facilities, and personal health devices,” according to an agency announcement.

“DIGIHEALS aims to ensure patients continue to receive care in the wake of a widespread cyberattack on a medical facility — like those that have caused hospitals to close their doors permanently,” said an agency release.

Earlier this month, a medical system with hospitals in Connecticut, Pennsylvania, Rhode Island and Southern California was disrupted by ransomware attacks that forced it to close some facilities. The attack was the latest in a growing trend of hospitals being targeted by bad actors — a scenario that can become a matter of life and death if patients are unable to receive the care they need.

A similar “IT security incident” occurred late last year affecting hospitals in Iowa, Nebraska and Washington. That came shortly after the release of a report that found that 90% of IT professionals working in health care said their facilities suffered a cyberattack in the past year, with ransomware in particular on the rise.

ARPA-H is soliciting proposals from industry through its Scaling Health Applications Research for Everyone (SHARE) broad agency announcement that it opened earlier this month.

“The DIGIHEALS project comes when the U.S. healthcare system urgently requires rigorous cybersecurity capabilities to protect patient privacy, safety, and lives,” ARPA-H Director Dr. Renee Wegrzyn said in a statement. “Currently, off-the-shelf software tools fall short in detecting emerging cyberthreats and protecting our medical facilities, resulting in a technical gap we seek to bridge with this initiative.”

According to the BAA, the DIGIHEALS project aims to accomplish three main objectives: find and patch flaws in mission-critical hospital systems; develop novel approaches to data and analytics; and improve the resiliency of digital health technology code.

“By adapting and extending security, usability, and software assurance technologies, this digital health security effort will play a crucial role in addressing vulnerabilities in health systems,” said ARPA-H Program Manager Andrew Carney. “This project will also help us identify technical limitations of future technology deployments and contribute to the development of new innovations in digital security to better keep our health systems and patients’ information secure.”

The opportunity to submit proposals under the BAA closes Sept. 7.

HHS’s artificial intelligence use cases more than triple from previous year https://fedscoop.com/hhs-ai-use-cases-more-than-triple/ Tue, 15 Aug 2023 17:09:31 +0000 https://fedscoop.com/?p=71917 The Department of Health and Human Services' annual AI use case inventory for fiscal 2023 includes 163 instances — up from 50 the previous year.

The post HHS’s artificial intelligence use cases more than triple from previous year appeared first on FedScoop.

The Department of Health and Human Services’ publicly reported artificial intelligence footprint more than tripled from the previous year, adding new current and planned uses to its AI inventory such as the classification of HIV-related grants and the removal of personally identifiable information from data.

The agency’s updated fiscal year 2023 AI use case inventory — which is required of agencies under a Trump-era executive order — shows 163 instances of the technology being operated, implemented, or developed and acquired by the agency. HHS’s public inventory for the previous fiscal year had 50 use cases.

“Artificial intelligence use cases tripling from FY22 to FY23 is indicative of HHS’s commitment to leverage trustworthy AI as a critical enabler of our mission,” HHS’s Chief Information Officer Karl S. Mathias told FedScoop in an email.

The increase in reported uses at the agency comes as the conversations about AI’s possible applications and risks have intensified with the rise in popularity of tools like ChatGPT. The Biden administration, which has made AI a focus, is crafting an executive order to address the budding technology and provide guidance to federal agencies on its use.

The National Institutes of Health manages the largest share of the AI tools used by HHS, 47 of them, according to FedScoop’s analysis of the data. The FDA manages 44, the second-highest number, and the Administration for Strategic Preparedness and Response follows with 25 AI tools.

Among the new instances reported in the inventory are tools used by NIH for classifying HIV-related grants and predicting stem cell research subcategories of applications, which were both implemented earlier this year. 

Meanwhile, the Centers for Disease Control and Prevention’s National Center for Health Statistics (NCHS) is exploring using an AI tool to transcribe cognitive interviews, which are used to evaluate survey questions and offer a detailed depiction of respondents’ meanings. According to the inventory, it plans to compare outputs from OpenAI’s automatic speech recognition system Whisper to those of VideoBank, a company that provides tools for managing digital assets such as recordings, and to manual transcription.

Also at NCHS, the agency is evaluating a tool from Private AI to identify, redact, and replace personally identifiable information in “free text data sets across platforms within the CDC network.” The database states that use is in the development and acquisition phase, though it also includes an implementation date of May 2, 2023.
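The identify-redact-replace pattern described above can be sketched generically. The code below is not the Private AI product and does not reflect its API; it is a minimal regex-based illustration of replacing detected PII in free text with typed placeholders (production tools typically use ML-based entity detection rather than regexes).

```python
import re

# Hypothetical pattern set: each label maps to a regex for one PII type.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.gov or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```

Typed placeholders, as opposed to blanking text out, preserve the sentence structure of the record, which matters when the redacted data is still meant to be analyzed downstream.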

AI use case inventories are required of federal agencies under a Trump-era executive order (EO 13960) aimed at promoting trustworthy AI in government. Under that order, agencies must review their current and planned AI uses annually, check for compliance with the order, share them with other agencies, and post them publicly.

A recent FedScoop review of large agencies’ handling of those inventories showed that efforts across the federal government have so far been inconsistent, varying in terms of process, what they include, and timelines for publication.

The new HHS inventory offers a more detailed look into the agency’s AI uses than its inventory last year and includes nearly every category required under the Chief Information Officers Council’s more expansive guidance for documenting uses in fiscal year 2023.

The agency’s inventory for fiscal 2022 included the name, agency, and description of each use. The fiscal 2023 inventory includes those categories plus the stage of every use case and whether it was contracted. Some entries also include the dates the use was initiated, began development and acquisition, and was implemented.

A little more than a third, 36%, of HHS’s reported AI uses are in the operation and maintenance phase, 28% are in development and acquisition, 20% are in initiation, and 16% are in implementation. 
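Those rounded percentages are consistent with a 163-use-case inventory. The per-phase counts in the tally below are assumptions chosen to reproduce the reported figures, since the article does not publish exact counts per phase.

```python
from collections import Counter

# Illustrative per-phase counts summing to the 163 reported use cases;
# the exact counts are an assumption, only the percentages come from the article.
phases = (
    ["Operation and Maintenance"] * 59
    + ["Development and Acquisition"] * 45
    + ["Initiation"] * 33
    + ["Implementation"] * 26
)
counts = Counter(phases)
total = len(phases)  # 163
for phase, n in counts.most_common():
    print(f"{phase}: {n}/{total} = {n/total:.0%}")
```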

One key requirement of the executive order was to bring into compliance or retire uses that didn’t comply with its framework for AI use in government.

In response to an inquiry about any use cases that were retired or abandoned by agencies since the last inventory, Mathias said: “Some artificial intelligence use cases, like other technology projects, have pivoted or are no longer pursued for various reasons, but none have been retired because of lack of consistency with principles of Executive Order 13960 of December 3, 2020.”
