risk assessment Archives | FedScoop
https://fedscoop.com/tag/risk-assessment/
FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, exchange best practices and identify how to achieve common goals.

House Dems call on White House to make agencies adopt NIST AI framework
https://fedscoop.com/dems-push-biden-admin-to-mandate-nist-ai-framework/
Fri, 21 Jul 2023
Reps. Lofgren, Lieu and Stevens say the Office of Management and Budget should require federal agencies to follow NIST's AI Risk Management Framework.

The post House Dems call on White House to make agencies adopt NIST AI framework appeared first on FedScoop.

House Democrats on Thursday pushed the White House’s Office of Management and Budget to mandate federal agencies adopt the National Institute of Standards and Technology’s AI Risk Management Framework, which could significantly affect how the government designs and develops AI systems.

House Science, Space and Technology Committee Ranking Member Zoe Lofgren, D-Calif., along with Reps. Ted Lieu, D-Calif., and Haley Stevens, D-Mich., sent a letter to OMB urging that federal agencies and vendors be required to follow the currently voluntary NIST AI guidance to analyze and mitigate the risks associated with the technology.

“We ask that you also consider utilizing the NIST AI RMF and subsequent risk management guidance specifically tailored for the federal government, to ensure agencies and vendors meet baseline standards in mitigating risk,” the three Democratic members said in their letter to OMB.

The Democrats said that the federal government must take a coordinated approach to ensure cutting-edge technologies like AI are used responsibly and that the NIST AI framework served as a “great starting point for agencies and vendors to analyze the risks associated with AI and how their systems can be designed and developed with these risks in mind.”

The Biden administration in recent months has worked to hold organizations accountable for addressing bias that may be embedded within AI systems while also promoting innovation. In October, it published an AI ‘Bill of Rights’ blueprint document, which was followed by NIST’s voluntary risk management framework in January.

The NIST AI framework document sets out four functions it describes as central to building responsible AI systems: govern, map, measure and manage.

The document is a “rules of the road” that senior technical advisers at NIST hope will provide a starting point for government departments and private sector companies big and small in deciding how to regulate their use of the technology. Organizations can currently adopt the framework on a voluntary basis.

Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and has so far received a warm reception from industry.

Republicans also support federal agencies adopting the NIST AI framework when designing and developing AI systems going forward.

A Republican aide on the House Science, Space and Technology Committee told FedScoop that Chairman Frank Lucas first raised federal agency adoption of the NIST AI framework in May, and that Republicans are now drafting legislation on the issue.

Survey highlights need for agencies to find, remediate IT risks more quickly https://fedscoop.com/survey-highlights-need-for-agencies-find-remediate-it-risks-more-quickly/ Tue, 07 Mar 2023 20:30:00 +0000 https://fedscoop.com/?p=66479 A majority of federal and state agency executives believe their organizations can detect vulnerabilities in their networks, but many face challenges remediating them.

The post Survey highlights need for agencies to find, remediate IT risks more quickly appeared first on FedScoop.

Despite the progress federal, state and local governments have made discerning what devices and applications are running on their networks, new survey results show fewer than half of agency leaders are strongly confident their organizations can identify critical risks across their IT environment within a five-day business cycle.

Asked about the ability to determine various IT risks, 54% said their agencies could determine the number of outstanding vulnerabilities in their IT environment — and 56% could determine the specifics of those vulnerabilities.

While nearly three in four (72%) respondents said their organizations can determine if software updates and patches have been implemented, only 56% can determine why those updates and patches were not implemented.

Those and other findings are part of a newly released study by Scoop News Group for FedScoop and StateScoop, which sought to gauge the ability of federal and state agencies to assess and manage risks across their IT environments. The study, which was underwritten by Tanium, is based on the responses of 193 prequalified government leaders, IT and security directors and managers, procurement staff and IT influencers.

Confronting complexity

The relative ability to assess and mitigate IT risk has consistently challenged government agencies. However, the increasing complexity of multi-cloud IT environments — and the growing reliance on out-of-network devices and applications — have made it more difficult for agencies to assess and address their IT risks.

Fewer than half (47%) of those surveyed said their agency is able to determine the security status of endpoints that are “off-network,” such as personal devices used by employees working from home.

A key factor hindering agencies’ ability to minimize risks is their reliance on several security solutions installed over multiple years. Two-thirds of respondents at state and local agencies — and more than one-third at federal agencies — reported using six or more tools specifically for managing IT risks.

Another factor is the extent to which “agencies still don’t know what they don’t know when it comes to devices, applications and APIs interacting dynamically on their networks,” said Wyatt Kash, senior vice president at Scoop News Group, who headed up the research.

Consequently, Kash cautioned that though the results suggest agencies have made significant headway in monitoring and responding to endpoints on their networks, many executives may still be overestimating their ability to identify and mitigate looming threats.

The study also shines a spotlight on the following:

  • How satisfied federal and state IT executives are with their risk management tools.
  • Where their organizations are prioritizing their risk management investments.
  • The working relationships between agency IT operations and security teams.
  • Their confidence in the accuracy of their endpoint data.
  • Their confidence in third-party vendors to meet federal security compliance standards.

“These results show that federal, state and local agencies have made much progress in reducing their attack surface, which is encouraging,” said Matt Marsden, vice president of technical account management for public sector at Tanium.

“What stood out was that agency executives often say they are certain of the endpoints they know about on their networks,” he said. “But many of our customers discover 20% more endpoints they didn’t know were on their networks once Tanium was deployed to assess their environments.”

Marsden also pointed to a recent multi-industry study by Cybersecurity Insiders, which found that 55% of cybersecurity and risk management professionals estimate that more than 75% of endpoint attacks can’t be stopped with their current systems. Knowing what devices and applications are on the network isn’t enough, he believes; remediating vulnerabilities quickly is equally important.

The challenge for many federal, state and local agencies is that most endpoint management platforms cannot identify unknown or unmanaged assets. Marsden cited the need for certainty across all environments, not just the portions visible to legacy tools.

He also recommended that federal, state and local agencies consolidate network data as much as possible and reduce the number of monitoring tools they use, combining data sources to lower IT complexity.

Read the full findings in the report “Keeping Ahead of the Risk Curve.” Proactive risk management starts with a comprehensive view of risk posture. Learn more about Tanium’s no-cost, no-obligation risk assessment.

This article was produced by Scoop News Group for FedScoop and StateScoop, and sponsored by Tanium.

NOAA evaluating multi-factor authentication for apps and devices https://fedscoop.com/noaa-evaluating-multi-factor-authentication-solutions/ Wed, 17 Aug 2022 17:04:21 +0000 https://fedscoop.com/?p=58279 Chief information officer Zach Goldstein tells FedScoop the agency plans to launch a Cloud Program Management Office in fiscal 2023.

The post NOAA evaluating multi-factor authentication for apps and devices appeared first on FedScoop.

Editor’s note: This story has been updated to include additional information about the Open-Architecture Data Repository and NOAA’s supercomputing improvements.

The National Oceanic and Atmospheric Administration is exploring multi-factor authentication beyond its network as it looks to strengthen cybersecurity in accordance with the federal zero trust strategy, according to its chief information officer.

Zach Goldstein told FedScoop his agency already requires Common Access Cards (CACs) and personal identification numbers to authenticate to its network but continues to perform comparative analyses of multi-factor authentication (MFA) solutions for applications and devices.

“We’re looking at things other than CAC cards, things that are intelligent tokens — that know who I am, that can exchange certificates with a certificate server, that can be easily revoked, that can have multiple kinds of privileges,” Goldstein said.
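The token properties Goldstein lists — a verifiable identity, certificate exchange with a certificate server, easy revocation and scoped privileges — correspond to standard PKI-style checks. A minimal illustrative sketch follows; the data structures, serial numbers and privilege names are invented for illustration and are not NOAA's system:

```python
from dataclasses import dataclass

# Illustrative token validation against an issuing authority and a
# revocation list. All structures and values here are hypothetical.

@dataclass(frozen=True)
class Token:
    subject: str          # who the token identifies
    cert_serial: int      # certificate bound to the token
    privileges: frozenset # scoped permissions the token carries

TRUSTED_SERIALS = {1001, 1002, 1003}   # certs issued by the certificate server
REVOKED_SERIALS = {1002}               # revocation list (CRL analogue)

def validate(token: Token, needed_privilege: str) -> bool:
    """Accept only tokens whose certificate was issued by the trusted
    server, has not been revoked, and which carry the needed privilege."""
    if token.cert_serial not in TRUSTED_SERIALS:
        return False
    if token.cert_serial in REVOKED_SERIALS:
        return False
    return needed_privilege in token.privileges

alice = Token("alice", 1001, frozenset({"read", "admin"}))
bob = Token("bob", 1002, frozenset({"read"}))  # cert 1002 was revoked

print(validate(alice, "admin"))  # True
print(validate(bob, "read"))     # False: certificate revoked
```

Revocation here is a simple set lookup, which mirrors why revocable tokens appeal over static credentials: removing one serial number from the trusted population invalidates the token everywhere at once.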

Goldstein added that cybersecurity is his “first priority,” in keeping with the White House’s Cybersecurity Executive Order issued in May 2021, and that he hopes to select a token for app and device authentication by the second quarter of fiscal 2023.

NOAA is also increasing supply chain risk assessments of software-as-a-service offerings — looking not only at the vendor but also at what the vendor buys and uses to deliver its services — under Goldstein, who has been with the agency for 17-and-a-half years and its CIO since 2015.

Goldstein wants to expand NOAA’s use of the cloud in a way that further improves the agency’s cyber posture while shedding light on how migration is progressing.

“We have an initiative to create a Cloud Program Management Office (PMO), one of whose jobs will be to provide me and NOAA leadership with that answer,” he said.

Assuming the funding for the office within the president’s fiscal 2023 budget stands, Goldstein hopes to launch it by the end of that fiscal year.

According to Goldstein, NOAA was the second federal agency to move its email and calendar to a public cloud, Google Apps for Government, in 2011, and since then the agency has migrated websites, help desk ticketing and global device management.

“It became very clear that we needed to have more discipline going to the cloud and more efficiencies because people were duplicating each other by having to learn how to do a security evaluation of going to the cloud, learn how to authenticate to the cloud, figure out how to communicate and get my data to the cloud,” Goldstein said. “And they were also using different contract vehicles.”

The CIO agreed to authorize NOAA offices’ migrations with the expectation that once his team implemented centralized cloud services that streamlined and lowered the cost of the process, offices would use those instead.


NOAA now offers a standard way of getting to the cloud; authenticating using its identity, credential and access management (ICAM) service; and contracting with the three large service providers — Google, Amazon and Microsoft — and others. The Office of the CIO’s Cyber Division evaluates cloud offerings once for universal use across NOAA, accelerating offices’ migrations, but the Cloud PMO will make it so they don’t have to consult separate experts for each step in the process.

A Cloud PMO will also help offices take advantage of NOAA Open Data Dissemination (NODD), which allows for “extremely inexpensive” egress to the public, Goldstein said.

The White House proposed a large funding increase for the Office of Space Commerce in its fiscal 2023 budget, which if accepted by Congress would elevate it to a staff office receiving IT support from the OCIO. 

Goldstein expects to indirectly advise on, provide perimeter security for and oversee the cloud-native Open-Architecture Data Repository, which processes tracking data on space objects to predict and assess risk of collision. This information will improve space situational awareness for commercial and civil space operators. A requirements analysis is ongoing, so the operational cost hasn’t been calculated yet.

“Because the cloud is available and they know how to do it, we know how to do it — we’re going to help the Office of Space Commerce with this — they’ll be able to get that capability in the hands of the world faster,” Goldstein said.

The cloud is also freeing up NOAA’s IT professionals — previously stuck patching, scanning and performing domain controller work — to improve weather forecasting model accuracy and speed.

Supercomputing improvements NOAA continues to make have tripled its forecasting capacity and should produce 30% growth in research computing by the end of 2022, but research and development could benefit from even more, Goldstein said. The agency’s objective is to secure enough capacity to run all of NOAA’s research and to narrow those applications down to the ones that should be operationalized.

“We’re not there yet,” Goldstein said. “But we’re getting closer.”

NASA agrees to insider threat risk assessment of unclassified systems https://fedscoop.com/nasa-insider-threat-risk-assessment/ Fri, 18 Mar 2022 19:22:48 +0000 https://fedscoop.com/?p=49002 The agency plans to determine if the program needs to be expanded to protect more than just classified systems.

The post NASA agrees to insider threat risk assessment of unclassified systems appeared first on FedScoop.

NASA management has agreed to conduct a risk assessment of its unclassified systems to determine if its insider threat program should be expanded to include them, according to an Office of Inspector General report.

The agency plans to assemble a cross-discipline team with representatives from the offices of Protective Services and the Chief Information Officer, as well as the OIG Cyber Crimes Division by Dec. 1, 2023.

OIG recommended the move after finding that — while NASA appropriately implemented its insider threat program established in 2014 for classified IT systems — the agency’s unclassified systems still contained high-value assets and critical infrastructure facing “higher-than-necessary risk.”

“While NASA’s exclusion of unclassified systems from its insider threat program is common among federal agencies, adding those systems to a multi-faceted security program could provide an additional level of maturity to the program and better protect agency resources,” reads the report released Monday. “According to agency officials, expanding the insider threat program to unclassified systems would benefit the agency’s cybersecurity posture if incremental improvements, such as focusing on IT systems and people at the most risk, were implemented.”

NASA management further agreed to establish, by Dec. 1, 2023, an insider threat working group that includes the offices of Protective Services, the Chief Information Officer and Procurement, as well as human resources.

The working group will assess the resources needed to expand the insider threat program to protect unclassified systems from cybersecurity threats posed by employees and contractors. Limited staffing, technology resources and funding present challenges to expansion, as does the division of responsibilities: the offices of Protective Services and the Chief Information Officer share handling of unclassified systems, the Office of Procurement manages contracts, and the Office of the Chief Financial Officer manages grants and cooperative agreements.

The insider threat program resides within the Office of Procurement and currently consists of one full-time government employee and two contract employees, who use software to monitor user activity for anomalies. The program also provides agency-wide insider threat training and a reference website for identifying threats, risks and follow-up, and it is expanding contractor disclosure requirements to limit the risk of foreign influence during procurements.

“Nations such as Russia and Iran wage sophisticated cyber espionage campaigns directed at the acquisition of U.S. trade secrets in both the private and government sectors, while other countries like China attempt to blur the line between informal technology transfer and intellectual property theft by recruiting leading U.S. experts in high-tech fields,” reads the report. “Currently, China is by far the most prolific sponsor of such recruitment programs through what it calls ‘talent plans.'”

OIG found NASA’s risk is “significant” given its ties to academia, research institutes and international partners.

Accidental leaks through phishing or forwarding of sensitive emails are most common at NASA, followed by misuse of networks or databases to skirt the agency’s cyber policy and then data theft for sale or inappropriate release. Improper use of NASA IT systems increased from 249 incidents in 2017 to 1,103 in 2020, 343% growth, with the most prevalent error being the failure to protect sensitive but unclassified information by, say, sending an unencrypted email containing such data, according to an OIG report from May.

A comprehensive insider threat risk assessment is intended to identify gaps in administrative processes and cybersecurity.

“At a time when there is growing concern about the continuing threats of foreign influence, taking the proactive step to conduct a risk assessment to evaluate NASA’s unclassified systems ensures that gaps cannot be exploited in ways that undermine the agency’s ability to carry out its mission,” reads the new report.

CISA working group assessing cyber risks to space infrastructure https://fedscoop.com/cisa-space-infrastructure-risk-assessment/ https://fedscoop.com/cisa-space-infrastructure-risk-assessment/#respond Tue, 16 Nov 2021 22:59:58 +0000 https://fedscoop.com/?p=44725 CISA's working group will emulate the public-private partnership used to protect pipeline operators.

The post CISA working group assessing cyber risks to space infrastructure appeared first on FedScoop.

The Cybersecurity and Infrastructure Security Agency established a cross-sector space working group that is performing an assessment of risks to both federal and commercial space infrastructure, said Assistant Director Bob Kolasky.

CISA’s primary concern is mitigating cyber risks to position, navigation and timing (PNT) services and GPS, Kolasky said during an AFCEA Bethesda event on Tuesday.

The agency has already examined all 55 national critical functions — functions performed by government and the private sector that, if corrupted, would be detrimental to national security — as they relate to space.

“We have to think about space as a potential risk vector to national critical functions and space infrastructure as critical infrastructure,” Kolasky said.

CISA’s working group will leverage critical infrastructure sector partnerships, similar to how it’s done with pipeline operators in the aftermath of the Colonial Pipeline ransomware attack in May, he added.

The agency is also exploring ways to extend the benefits of investments in national security systems to commercial space missions working with the Department of Defense.

CISA continues to partner with DOD and the Department of Transportation to ensure that redundant, terrestrial backup systems for space systems are resilient. Meanwhile, the Department of Commerce’s Office of Space Commerce is teaming with DOD, DOT and NASA, industry, and academia to populate its Open-Architecture Data Repository of orbiting satellite and space-junk locations.

The challenge with any effort to harden space systems from Earth after launch is that the channels used are ones foreign adversaries like Russia and China can also exploit.

“Space systems have a unique characteristic of things up there, right?” Kolasky said. “You don’t get many opportunities to replace them.”

FBI awards $13.5M risk assessment contract in move to CIA clouds https://fedscoop.com/fbi-risk-assessment-clouds/ https://fedscoop.com/fbi-risk-assessment-clouds/#respond Thu, 04 Feb 2021 12:30:58 +0000 https://fedscoop.com/?p=39948 Telos Corporation has a $13.5 million contract from the bureau to integrate its Xacta solution — which is already used by the CIA — with the FBI's clouds.

The post FBI awards $13.5M risk assessment contract in move to CIA clouds appeared first on FedScoop.

The FBI is adopting the intelligence community’s real-time risk assessment practices for cloud computing.

Telos Corporation announced a $13.5 million contract from the bureau Wednesday to integrate its Xacta solution — which is already used by the CIA — with the FBI’s clouds. The bureau wants to shorten the time it takes to grant contractors permission to access its systems so its assessors can focus on more pressing security issues.

“They want to have a customized risk-management framework,” John Wood, CEO of Telos, told FedScoop. “They want to have a customized business process that provides workflows, and that ensures process efficiency and consistency across their enterprise.”

Telos has 12 months to add the risk assessment capability to the GovCloud the FBI uses, then to the FBI’s part of two CIA clouds: Commercial Cloud Services (C2S) and Secret Commercial Cloud Service (S-C2S). The FBI expects to hook up with those services this year.

Contractors seeking authorities to operate in the FBI’s systems, whether on premises or in the cloud, must test against about 11,000 security controls within the National Institute of Standards and Technology’s Cybersecurity Framework. Under the manual process, provisioning a server used to take the intelligence community nine months; in the cloud it takes 30 seconds, Wood said.

Xacta automates 85 percent of those controls and continuously updates them, ensuring “very solid” cyber hygiene such as good passwords, strong user access control and multi-factor authentication, Wood said.
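The kind of automated, continuously re-evaluated control checking described here can be illustrated with a minimal sketch. This is not Xacta's actual logic; the control names, thresholds and system-state fields below are invented for illustration:

```python
# Illustrative continuous-control assessment: each security control is
# a named predicate evaluated against a snapshot of system state.
# Control names and thresholds are hypothetical, not a real baseline.

CONTROLS = {
    "min_password_length": lambda s: s["password_min_len"] >= 12,
    "mfa_enabled": lambda s: s["mfa"] is True,
    "admin_accounts_reviewed": lambda s: s["days_since_admin_review"] <= 90,
}

def assess(state):
    """Return (passed, failed) control names for one state snapshot."""
    passed = [name for name, check in CONTROLS.items() if check(state)]
    failed = [name for name in CONTROLS if name not in passed]
    return passed, failed

# Re-running assess() on fresh snapshots gives the "continuous" view:
# any configuration drift shows up as a newly failing control.
snapshot = {"password_min_len": 14, "mfa": True, "days_since_admin_review": 120}
passed, failed = assess(snapshot)
print(failed)  # ['admin_accounts_reviewed']
```

Automating the checks this way is what collapses assessment time: the per-control work happens in code on every snapshot, leaving human assessors to handle only the controls that cannot be expressed as predicates.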

Gaining a better understanding of the bureau’s risk posture is especially important following the massive breach of software from government contractor SolarWinds, Wood said. The incident compromised at least eight agencies as of December. The FBI has not specified whether it was exposed to the breach.

CMMC looks to clear up questions about cybersecurity assessors https://fedscoop.com/cmmc-assessors-training-credentialing-answers/ https://fedscoop.com/cmmc-assessors-training-credentialing-answers/#respond Thu, 28 May 2020 18:04:47 +0000 https://fedscoop.com/?p=36760 The Accreditation Body for the Cybersecurity Maturity Model Certification (CMMC) has released more information about training and approval for people who want to be third-party assessors.

The post CMMC looks to clear up questions about cybersecurity assessors appeared first on FedScoop.

The Department of Defense’s new program for third-party cybersecurity assessments of contractors is starting to answer some of the important questions about who those third-party assessors will be.

The Accreditation Body (AB) that is overseeing the program — known as Cybersecurity Maturity Model Certification (CMMC) — has released new videos and requests for information that shed light on how assessors will be trained and credentialed.

The board acknowledges the process is “very complicated” and a massive undertaking, says Jeff Dalton, head of the AB’s Credentialing Committee. To cover the entire U.S. defense-industrial base, tens of thousands of assessors will likely need to be certified in the next few years. The training and credentialing videos show that the process to become an assessor is sequenced, meaning that each individual will need to go through each level of training to get to the next step, according to the board. Each level will require somewhere around 20-30 hours of coursework and exams to test a trainee’s grasp of the model and assessing methodology.

All 300,000 DOD contractors (except for providers of commercial-off-the-shelf goods) will need to get an in-person assessment and be certified to one of the five levels of cybersecurity maturity by an assessor in order to be awarded a contract from the DOD.

“People need to appreciate the volume here, the volume of CMMC assessors will … far exceed everything out on the market today,” Dalton says in a video detailing the latest update from the board. “We really want to make sure we have all the right roles.”

The assessors themselves will need to be certified by the AB and be a part of a Certified Third Party Assessment Organization (C3PAO) licensed by the AB, which will consist of experienced cybersecurity firms, assessment organizations and groups with industry expertise. 

This is the process for individual people, as outlined in the credentialing video:

  • Certified Professional (CP): Step one requires an individual to pass an exam demonstrating basic understanding of the CMMC controls and to take more than 20 hours of training. Being a CP allows an individual to be part of an assessment team and is the “gateway” to the next steps, Dalton said.
  • Certified Assessor (CA): CAs will be the foot soldiers of the new CMMC industry. They will be allowed to certify contractors after taking additional training, passing the CP exam and being part of a licensed C3PAO.
  • Certified Instructor (CI): Training the assessors will be the CIs. These individuals will be certified to train at specific levels of the maturity model and must first go through the CP and CA steps before earning the CI designation.
  • Certified Master Instructor (CMI): Trainers who train the assessors will need training of their own, right? The master instructors will sit at the top of the training pyramid and work for the AB, which has yet to staff up an organization under the board.
  • Certified Quality Auditor (CQA): Finally, CQAs will be the arbiters of quality in the assessment process. The first will likely be a board member.

The training to start moving individuals through the steps of certification will take place in two phases, Ben Tchoubineh, who leads the training committee, said in a separate video. Phase One will be a pilot program aimed at whisking potential assessors through training and credentialing to meet DOD’s “aggressive” timeline for CMMC implementation. The board will select 60 experienced cybersecurity organizations as “beta” learners to test-run the training and credentialing. Those assessors will handle the companies bidding in the fall on the contracts that will be the first to include CMMC requirements, Tchoubineh said.

The board will take in revenue from all of the levels of training. Final pricing is not yet published, but accidentally published drafts indicate the cohort in Phase One could see costs around $5,000 per organization and other training services from the board costing a few hundred dollars.

The second phase will be the long-term one, with the goal of bringing in enough organizations and individuals to meet the demand for CMMC assessments.

The training itself was developed by the DOD and is under review by the board, Tchoubineh said. The relationship between the government and the accreditation body was defined in a still-unpublished memorandum of understanding, so the exact way the DOD and AB will work together is still unclear.

New RFIs

The AB also published two requests for information for assistance in market research on educational tools. Both requests seek to inform the board on how to scale up the delivery of training and testing by partnering with outside groups.

The first RFI covers market research on working with third parties on training development and implementation. Given the large scale of training and assessment needed, the board knows it will need help overseeing much of the process, and the RFI seeks to better inform the board on how to work with other organizations.

The second RFI, in a similar vein, asks for information on delivering assessment exams — a challenge many organizations, in both the cyber world and academia, now face.

Why government is slow to endorse frameworks for quantifying cybersecurity risk https://fedscoop.com/cybersecurity-risk-management-doe-dot/ https://fedscoop.com/cybersecurity-risk-management-doe-dot/#respond Mon, 19 Aug 2019 11:30:50 +0000 https://fedscoop.com/?p=33433 Until individual agencies like the Department of Energy and Department of the Treasury see success quantifying risk, the practice won't likely be mandated.

The post Why government is slow to endorse frameworks for quantifying cybersecurity risk appeared first on FedScoop.

Some agencies have begun to quantify their cybersecurity risk — but don’t expect the government to make the practice mandatory anytime soon.

In April 2018, the National Institute of Standards and Technology (NIST) published the latest version of its Cybersecurity Framework for agencies, where risk is reduced to a qualitative one-to-four scale with traffic light color coding: red, yellow and green.

“Investment decisions are made that way,” a spokesperson for Rep. Jim Langevin, D-R.I., told FedScoop. Langevin wants to see agencies justify their cyber budgets with quantitative risk frameworks in time.

The international Factor Analysis of Information Risk (FAIR) standard is one of a handful in use and, in an agency first, the Department of Energy intends to implement the risk-assessment model before migrating data to the cloud.

First comes agencywide risk management training so everyone speaks the same language on cyber risk. At the same time, DOE is building a risk assessment program to quantify risk to information technology infrastructure before and after its cloud migration.

“What we’ve seen anecdotally in the industry is the risk in most scenarios of running data or applications in the cloud is less than if you do it on-premise — oftentimes with the same types of security tools — because cloud providers have typically been a bit more diligent in applying those security measures than IT folks within agencies,” said Nick Sanna, founder of the FAIR Institute and CEO of RiskLens.

Jack Jones created FAIR in 2001 while he was chief information security officer at Nationwide Insurance, hoping to answer his employer’s questions about how high the company’s cyber risk was and how much he could reduce it if given enough funding. On the heels of high-profile breaches like the 2014 Sony Pictures hack, many private-sector boards started asking the same questions, Sanna said.

The FAIR Institute formed as a nonprofit in 2016 and boasts about 6,000 members including 30% of Fortune 1000 companies like Bank of America, Cisco Systems and Fannie Mae. Its goal is to accelerate learning and share best practices around risk management — a goal echoed in President Trump’s May 2017 executive order pushing agencies to assess their cyber budgets based on the risk they face.

The Office of Management and Budget “is saying, ‘We need better data. [Agencies] are throwing us a bunch of technical data. We have no idea if that presents a lot of risk or very little risk,’” Sanna said.

But while OMB had staff trained on FAIR, it’s held off on issuing guidance absent agency success stories it can highlight — so as not to appear heavy-handed, Sanna added.

“While OMB does encourage agencies to adopt methods for better understanding and managing their cybersecurity risks, it does not endorse any one methodology over another and does not have plans to,” said a senior administration official.

Sanna wants to see the Government Accountability Office update Federal Information Security Management Act reporting rules to mandate agencies use the FAIR standard. He’s met with Langevin, chair of the Congressional Cybersecurity Caucus, along with Bank of America to encourage such an update.

It’s all about metrics

But frameworks are only as good as the metrics plugged into them.

“One of the things we challenged the FAIR Institute people with is the fact a lot of the assumptions that go into their models are squishy at best,” Langevin’s spokesperson said.

In the immediate term, Congress is in touch with DOE Assistant Secretary for Cybersecurity Karen Evans about the state of its framework and metrics being tested, the spokesperson added.

FAIR works by breaking problems down into their factors to determine what data is required for analysis.

“FAIR helps you understand that a control deficiency only makes sense when you have an asset attached to it, there is a threat affecting it and a resulting negative impact,” Sanna said.

He uses the analogy of a bald tire, which is only a risk if it’s actually on a car.

The framework considers things like the frequency of a threat, whether the threat actor is capable of overcoming a control, and whether a breach would result in loss of availability or data, with follow-on costs such as paying for credit monitoring for those affected.
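The factor breakdown Sanna describes lends itself to a small simulation. The sketch below is illustrative only, not the FAIR standard itself: the function names, the uniform loss range and the example inputs are all invented for this article, assuming annualized loss exposure is roughly threat-event frequency times the chance a control is overcome times per-event loss.

```python
import math
import random

def poisson(rng: random.Random, lam: float) -> int:
    """Knuth's algorithm for sampling a Poisson-distributed event count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def annualized_loss_exposure(
    threat_event_frequency: float,  # expected threat events per year
    vulnerability: float,           # chance an event overcomes controls
    loss_min: float,                # per-event loss range, dollars
    loss_max: float,
    trials: int = 20_000,
    seed: int = 1,
) -> float:
    """Monte Carlo estimate: simulate many years, count which threat
    events become loss events, and draw a loss magnitude for each."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        annual = 0.0
        for _ in range(poisson(rng, threat_event_frequency)):
            if rng.random() < vulnerability:  # loss event occurs
                annual += rng.uniform(loss_min, loss_max)
        total += annual
    return total / trials
```

With these made-up inputs (10 threat events a year, a 30 percent chance each overcomes controls, losses of $50,000 to $200,000 per event), the estimate lands near $375,000 a year, the kind of dollar figure proponents contrast with color-coded scales.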

Bad metrics don’t reflect poorly on the FAIR model so much as the readiness of government agencies to implement the framework, said Jones, who’s also co-founder and chief risk scientist of RiskLens.

In July, GAO released findings that only seven of the 23 civilian Chief Financial Officers Act agencies had a cyber risk management strategy in place, and a failure to hire and retain key personnel was primarily to blame.

“The FAIR model actually makes it easy to harness the power of the data already available,” Jones said. “That said, with respect to looking at the impact of cyber events — the right side of the FAIR model — while it may seem more difficult a hill to climb, there are already people in these agencies who do this on a day-to-day basis — that is, help to look at the impact of events that interrupt the mission.”

For instance, he explained, the EPA has people on staff who evaluate how business activity impacts the population’s health and apply models to that to help drive regulation.

While there is room for metrics to mature, the FAIR Institute believes they’re “highly effective,” and corporations are finding data on increases in attack frequency and potential losses helpful when allocating cyber resources, Jones said.

And increasingly federal agencies are joining industry.

The Department of the Treasury is the latest agency turning to metrics to quantify risk in terms of dollars rather than color-coding.

About 40 employees with the Office of the Comptroller of the Currency, Treasury’s regulatory arm, have trained in FAIR. Now the department is looking for the right contract vehicle to start a risk management project, Sanna said.

NIST said it isn’t currently seeking public comment on or evaluating FAIR risk assessments, but the Enterprivacy Consulting Group has posted a privacy risk assessment tool, FAIR Privacy, to NIST’s Privacy Engineering Collaboration Space. The online venue lets practitioners like Enterprivacy share and receive feedback on risk management solutions.

Sanna is optimistic the next iteration of the NIST Cybersecurity Framework will reference FAIR as a way to assess risk, thereby raising its profile among agencies.

“There’s a lag, but now the administration is pressuring the agencies to do better because otherwise what’s the alternative?” Sanna asked. “They’re going to cut their budget arbitrarily if you cannot demonstrate you need money.”

FedRAMP issues new continuous monitoring guidance and requirements https://fedscoop.com/fedramp-issues-new-continuous-monitoring-guidance-requirements/ Wed, 21 Mar 2018 19:08:42 +0000 FedRAMP issued new documents detailing the requirements needed for automated scanning.

The post FedRAMP issues new continuous monitoring guidance and requirements appeared first on FedScoop.

The Federal Risk and Authorization Management Program (FedRAMP) issued three new documents Tuesday outlining continuous monitoring guidance and requirements for cloud service providers.

The new documents include a draft version of the “Automated Vulnerability Risk Adjustment Framework Guidance,” the “Guide for Determining Eligibility and Requirements for the Use of Sampling for Vulnerability Scans” and another on “Vulnerability Scanning Requirements.”

FedRAMP officials have emphasized reducing the compliance costs of continuous monitoring requirements for providers offering cloud services to federal agencies and began in January slowly rolling out new documents to address the changes.

The new documents provide guidance for using automated tools based on the Common Vulnerability Scoring System and how CSPs can “scan representative samples of system components instead of the entire system,” as well as other vulnerability scanning requirements.
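The general shape of CVSS-based severity banding, with an adjustment step layered on top, can be sketched briefly. The thresholds below follow common CVSS banding, but the one-band downgrade for a mitigating control is an invented illustration, not FedRAMP’s actual adjustment rules:

```python
def severity(cvss_score: float) -> str:
    """Map a CVSS base score to a severity band (common banding;
    the FedRAMP guidance defines its own thresholds and process)."""
    if cvss_score >= 7.0:
        return "High"
    if cvss_score >= 4.0:
        return "Moderate"
    return "Low"

def adjusted_severity(cvss_score: float, mitigating_control: bool) -> str:
    """Hypothetical risk adjustment: a validated mitigating control
    lowers the effective score before banding."""
    effective = max(0.0, cvss_score - 3.0) if mitigating_control else cvss_score
    return severity(effective)

# A finding scored 8.1 bands as High when reported raw, but under this
# hypothetical adjustment a validated mitigating control moves it to Moderate.
```

The point of automating such adjustments is consistency: the same documented rule gets applied to every scan finding rather than being argued case by case.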

CSPs will have six months to apply the new guidance for the sampling of vulnerability scans, while FedRAMP officials said they would pilot the draft “Automated Vulnerability Risk Adjustment Framework Guidance” over the course of the year before issuing a final version.

EXCLUSIVE: VA downplays risk assessment report https://fedscoop.com/va-downplays-risk-assessment-report/ Sat, 10 Jan 2015 11:01:01 +0000 A 2013 internal security risk assessment of the Department of Veterans Affairs’ main electronic health record system that warned a data breach was “practically unavoidable” did not take into account various security mitigation actions the department had already taken to address a very specific vulnerability, according to VA officials. The heavily redacted assessment of the Veterans […]

The post EXCLUSIVE: VA downplays risk assessment report appeared first on FedScoop.

A 2013 internal security risk assessment of the Department of Veterans Affairs’ main electronic health record system that warned a data breach was “practically unavoidable” did not take into account various security mitigation actions the department had already taken to address a very specific vulnerability, according to VA officials.

The heavily redacted assessment of the Veterans Health Information Systems and Technology Architecture, or VistA, first reported last week by CNBC and obtained by FedScoop, warned it was “practically unavoidable that a data breach to financial, medical and personal veteran and employee protected information may occur within the next 12 to 18 months.”

But a VA official familiar with the risk assessment, speaking on background, told FedScoop the draft report was proposed last year by staff to address “a very specific but narrow risk, and its contentions were not intended to apply to all VA IT systems.”

The official said the report “did not take into account all of the defense-in-depth, mitigating security factors VA already has in place on its systems and network.” In addition, the report was reviewed by senior VA leadership, who “initiated actions to either validate existing mitigation controls or to put in place additional protections,” the official said.

Publicly, a VA spokesperson said the agency “has in place a strong, multilayered defense to combat evolving cybersecurity threats … [and] is committed to protecting veteran information, continuing its efforts to strengthen information security and putting in place the technology and processes to ensure veteran data at VA are secure.”

VA’s public response, however, has not settled the issue as far as some members of the House Committee on Veterans Affairs are concerned. Late Friday, Rep. Jackie Walorski, R-Ind., issued a statement pressing VA for answers on what portions of the risk assessment are no longer valid and what the agency has done to fix the vulnerabilities contained in the report.

“It’s incumbent upon VA to clarify what specific portions of this report were inaccurate and what changes have been made since the report has been finalized,” Walorski said. “Is a data breach to veterans’ financial, medical and personal information ‘practically unavoidable’ as the report states? If not, how likely is it? VA owes it to America’s veterans and American taxpayers to answer these questions in short order.”
