Office of Personnel Management (OPM) Archives | FedScoop https://fedscoop.com/tag/office-of-personnel-management-opm/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

How the Biden administration is tackling diversity in federal AI hiring https://fedscoop.com/how-the-biden-administration-is-tackling-diversity-in-federal-ai-hiring/ Thu, 16 May 2024 16:27:10 +0000 The pool of potential AI workers could pose a challenge to the administration’s efforts to build a diverse workforce to responsibly manage artificial intelligence.

The post How the Biden administration is tackling diversity in federal AI hiring appeared first on FedScoop.

The Biden administration’s plan to bolster the federal civilian workforce with more than 500 artificial intelligence professionals by the end of fiscal year 2025 could face a challenge when it comes to another one of its priorities: promoting a workforce that looks like America.

While data is limited, the broader AI workforce and pipeline appear to have the same demographic underrepresentation issues that STEM careers experience, lacking diversity in terms of gender, race and ethnicity. And just like the private sector, the government has historically struggled with diversity in STEM roles.

Aware of that landscape, the Biden administration says it’s making efforts to promote diversity in AI hiring, including encouraging agencies to target their outreach for open positions, underscoring the need for “AI-enabling” jobs, and engaging with groups aimed at diversifying technologists. Ultimately, what hangs in the balance of those actions is having a workforce that will bring a variety of experiences and perspectives to the table when managing the application of the booming technology — something the administration, experts, and advocates have stressed.

“If we don’t have a diverse group of people building something that needs to serve a larger group of people, we’re going to do ourselves a disservice, and there’s going to be a lot of unhappy people that can’t benefit from something that should be able to be accessible to all,” said Lisa Mae Brunson, founder of Wonder Women Tech, an organization aimed at helping advance women, people of color, and other underrepresented communities in tech and science fields. 

Already, the federal government is hiring for artificial intelligence positions and seeing interest in open roles. Since President Joe Biden’s October executive order on AI, more than 150 people have been hired in AI and AI-enabling roles, according to a report to the White House by the AI and Tech Talent Task Force. As of March, applications for AI and AI-enabling roles in 2024 had doubled compared to similar periods in the previous two years, the report said.

That report also underscored the need for diversity, noting that the task force has “prioritized recruiting from a diverse pool of qualified candidates,” consistent with previous Biden executive orders that established the White House Gender Policy Council and outlined actions to promote diversity in federal government hiring.

According to the task force, those efforts recognize “the need for technical experts who can work to mitigate bias in AI systems and the overall underrepresentation of women, people of color, first-generation professionals and immigrants, individuals with disabilities, and LGBTQI+ individuals in the STEM field as a whole.”

Active recruiting 

As AI hiring efforts move forward, officials are stressing the importance of recruitment. 

Kyleigh Russ, a senior adviser to the Office of Personnel Management’s deputy director, told FedScoop the administration is trying to get away from a passive “post and pray” method of hiring — meaning the job gets posted and agencies hope the right person applies. Instead, agencies are encouraged to shift to “active recruiting.” 

Often the volume of applications isn’t the problem for federal government positions, Russ said, but there is a desire to make sure the right people and a diverse group of people are applying.

Active recruiting could mean reaching out to someone on LinkedIn, recruiting directly from minority-serving institutions, or engaging in events like OPM’s recent virtual job fair.

Russ described the push for active recruiting as a “change in practice” and said OPM is working on a training program that will address it. That program, a collaboration with the U.S. Digital Service and the Office of Performance and Personnel Management, will be aimed at teaching human resources staff how to recruit and hire technologists, as it’s a specialized field with “unique challenges,” Russ said.

During a panel about women in AI last month at Scoop News Group’s AI Talks, USDS Administrator Mina Hsiang pointed to telling stories about use cases and the problems agencies are trying to solve as a tool for hiring.

“Different people want to solve different problems that they see in their communities or in their lives,” Hsiang said. “And so the more that we can connect this to problems that people care about, and show how these are relevant pieces of that, the more people will be motivated to sort of move into those fields.”

Hiring a diverse federal workforce across the government has been an important issue for the Biden administration since its early days. In June 2021, the White House issued an executive order to advance diversity, equity and inclusion in the federal workforce. That order directed agencies to assess the state of diversity, equity and inclusion in their workforces, and took steps to advance things like pay equity. 

AI-enabling

The administration is also highlighting the difference between AI and AI-enabling jobs; the latter category includes less technical roles and broadens the pool of candidates.

Roles that fall into the enabling category include data scientists, data analysts, and technical recruiters, Russ said. She noted that the administration has been stressing that it’s looking for both categories of roles in its recruiting campaign, and specifically with the recent Tech to Gov job fair.

That April 18 virtual fair, which is similar to others Tech to Gov has held before, yielded registrations from over 1,300 people representing all 50 states, according to numbers provided by the nonpartisan and nonprofit Tech Talent Project that coordinates the Tech to Gov coalition. The event focused on senior-level technologist roles at the roughly 15 federal agencies and four state agencies that participated. 

Jennifer Anastasoff, executive director of the Tech Talent Project, similarly underscored that both AI and AI-enabling roles are needed. For government hires, Anastasoff said it isn’t required “that every one of the folks who’s inside is someone who has deep expertise in the most technical of technical AI.”

“What we need are folks who can really help make sure that all of our systems — technically, data and otherwise — are really focused on the people who are supposed to be receiving those services,” said Anastasoff, who was a founding member of USDS. 

Anastasoff said the administration’s work with Tech to Gov shows a “level of commitment” to diversity in the technology workforce, as the coalition’s members are interested in that issue. Tech to Gov’s members include organizations like the U.S. Digital Response, Coding it Forward, and AnitaB.org.

There’s also more work planned with groups trying to diversify tech. Deputy Federal Chief Information Officer Drew Myklegard told FedScoop the administration is planning a hiring push at this year’s Grace Hopper Celebration, a conference for women and non-binary people in technology that’s organized by AnitaB.org.

“It’s 30,000 individuals that come together who are excited, young, extremely diverse,” Myklegard said, “and we think we have a very compelling pitch why they should come and work for the government in AI.”

Additionally, there’s action being taken to support a diverse pipeline of AI professionals outside government. The National Science Foundation, for example, has a program targeted at diversifying the AI research community, including funding research and education at minority-serving institutions. Biden’s AI executive order directed NSF to continue its support of AI-related education and workforce development in an effort to “foster a diverse AI-ready workforce.” 

“We know that the existing research institutions, and some of the other institutions, are building curriculum, but this curriculum has to be everywhere because talent and ideas are anxious to engage, and that’s a deep commitment from NSF,” Sethuraman Panchanathan, the agency’s director, told FedScoop.

Diversity data

The growth of the AI workforce comes as STEM careers more broadly have historically struggled with diversity — both in the private sector and the federal government.

The U.S. Equal Employment Opportunity Commission, for example, found that women made up less than 30% of federal STEM jobs in fiscal year 2019. A November 2023 report by the Union of Concerned Scientists found that while the shares of scientists in the federal government grew more racially and ethnically diverse between 2017 and 2022, there were decreases in certain groups and inequities were still present in the STEM workforce at specific agencies. 

When it comes to the federal AI workforce specifically, there aren’t yet figures. The government, through OPM, is still in the process of getting a head count of federal AI and AI-enabling employees. A recent publication from OPM that describes and defines those AI roles will aid efforts to get a sense of the workforce within the government. Russ said that count will likely include demographic data.

Data on the AI workforce is a challenge outside of government as well. Nestor Maslej, a research manager at Stanford’s Institute for Human-Centered Artificial Intelligence who manages the AI Index, said there isn’t as much data on diversity in AI compared to economic or technical performance data, and emphasized the need to address that.

“Although things are getting better, we really would want to kind of create a world in which there is more data — there is much more reporting,” Maslej said. “Because I think data is the first step in actually understanding what’s going on, what the landscape is like, and what kind of changes are required.” 

Stanford’s most recent AI Index report, for example, uses data on computer science graduates to paint a picture of the AI workforce pipeline. That data shows that men represent roughly 3 in 4 bachelor’s, master’s, and PhD computer science graduates in North America. Those students are predominantly white, though Asian graduates also make up a substantial portion in each degree type as well.

If people are able to see that the government has a diverse and representative AI workforce, Maslej said it could generate more confidence from the public in its regulation of that technology.

Looking forward

While the hiring push is still in its early stages, there are some suggestions on how to improve efforts.

Wonder Women Tech’s Brunson said she’d like to see the administration be more vocal about a commitment to diversity with its AI hires, especially as the tech industry has seen a rollback of some diversity, equity and inclusion initiatives.

Brunson said she now doesn’t have the resources to be able to tell people looking for jobs where to go, and many people who are interested are trying to teach themselves about AI. “Where is there an opportunity … to train up these diverse candidates so that the future of AI talent looks different than what it looks like today?” Brunson said. 

But there is also optimism that diverse hiring is achievable. Seth Dobrin, founder and CEO of Qantm AI and the author of a forthcoming book on AI strategy, talent and culture, said that while the talent pool of people building AI models isn’t particularly diverse, the pool that the Biden administration will likely hire from is separate from that. He said that in his experience “it’s not as bleak as some of these studies show.” 

Dobrin, who was IBM’s first global chief AI officer, emphasized the importance of intentionally crafting job postings and descriptions so they are more inclusive.

“It’s not hiring for a lowest common denominator,” he said. “It’s making sure that you craft your job descriptions appropriately, that you don’t interview until you have a diverse pool of candidates, and then you hire the best person from that pool.”

FedScoop reporter Caroline Nihill contributed to this story.

OPM’s new HR Marketplace aims to be ‘community center’ for federal services https://fedscoop.com/opm-launches-hr-marketplace/ Wed, 01 May 2024 20:44:46 +0000 https://fedscoop.com/?p=77867 The HR Marketplace aligns with the workforce agency’s pre-designation as the quality services management office (QSMO) for human resources solutions.

The post OPM’s new HR Marketplace aims to be ‘community center’ for federal services appeared first on FedScoop.

The Office of Personnel Management’s recent launch of an online marketplace for human resources solutions is just the beginning of what the agency hopes will be a one-stop shop for HR best practices, market intelligence, and a catalog of available services.

The HR Marketplace, which is hosted on the General Services Administration’s Acquisition Gateway, was announced publicly last week, and currently has a catalog of HR solutions from federal shared service providers like the Department of Agriculture’s National Finance Center and OPM itself.

But the agency is working on a strategy to also onboard commercial services, and looking to ensure that the information it provides stays fresh so people will want to return regularly, Steve Krauss, a senior adviser for OPM’s HR Quality Services Management Office, or QSMO, told FedScoop.

Before the creation of the website, OPM sought input from those in the federal government and industry to get an idea of what people wanted in such a tool, Krauss said. While people wanted a place to do market research and understand the solutions available, they also wanted more.

“They really want something more like a community center where they can go and learn more about the standards and the things they’re supposed to be implementing,” Krauss said. 

That includes learning about industry-recognized best practices, getting market intelligence, and other resources, in addition to the solutions catalog that serves as the foundation of the marketplace, he added. And that’s what OPM is working toward. 

“What you will find is the beginnings of a site that is going to progress over time to become that,” Krauss said.

The rollout of the website aligns with OPM’s pre-designation as the QSMO for civilian human resources services across the federal government. Established in 2019 by the Office of Management and Budget, QSMOs are intended to serve as “governmentwide storefronts” for technology solutions and services in various areas, according to GSA. Since then, several agencies designated as QSMOs have launched similar marketplaces for cybersecurity, financial management, and grants. 

While OPM is still technically pre-designated as the HR QSMO — which means that it hasn’t yet received its official designation from the Office of Management and Budget — OMB has advised the agency “to proceed to full operating capability as a QSMO while it completes its designation process,” according to an OPM spokesperson. They also noted that there isn’t a “practical distinction” between pre-designation and designation in the QSMO’s ability to operate.

OPM was given a pre-designation as the HR QSMO in March 2022 and received a positive vote from OMB’s Shared Services Governance Board, after it presented its implementation plan, the spokesperson said. 

Although the website was announced publicly last week, it quietly went live about a month before with a small number of users, Krauss said. So far, the feedback from that group has been positive — and nothing is broken, he added. 

As OPM continues to build upon the site, it’s also in the process of standing up a steering committee for the marketplace. While still in the early stages of building that group, Krauss said it will comprise customer stakeholders, such as federal shared service providers and OPM, and eventually the agency is interested in adding commercial participants as well.

Krauss said OPM has in the past fielded a lot of questions from agencies about what solutions are available for things like payroll and support for hiring, and the marketplace gives people a place to go explore and research the options on their own “for the first time.” 

It also brings more attention to OPM’s broader HR efforts. 

“This just creates a focal point to generate sort of more awareness and affinity for the HR QSMO, the HR line of business, and the things that OPM is trying to do in terms of leading the community forward,” Krauss said. 

For example, he said, OPM is interested in collaboration and figuring out how agencies can learn from each other. Over the next five to 10 years, federal agencies will be looking at the same few cloud-based software-as-a-service (SaaS) platforms that can meet federal government needs, and OPM is working with agencies on whether the community can agree on a common set of requirements and an efficient procurement approach for those platforms.

“Having this marketplace, having the site up as a focal point for that, is going to be really useful to us in terms of just sort of advancing that conversation,” Krauss said.

OPM issues generative AI guidance, competency model for AI roles required by Biden order https://fedscoop.com/opm-issued-generative-ai-guidance-ai-competency-model/ Mon, 29 Apr 2024 11:00:00 +0000 https://fedscoop.com/?p=77713 The guidance was among several actions required by the federal workforce agency within 180 days of President Joe Biden’s executive order on the technology.

The post OPM issues generative AI guidance, competency model for AI roles required by Biden order appeared first on FedScoop.

Guidance on generative AI and a competency model for AI roles are among the latest actions that the Office of Personnel Management has completed under President Joe Biden’s executive order on the technology, an agency spokesperson said.

In a statement provided to FedScoop ahead of the Monday announcement, OPM disclosed it would issue guidance on use of generative AI tools for the federal workforce; a competency model and skills-based hiring guidance for AI positions to help agencies find people with the skills needed for those roles; and an AI competency model specifically for civil engineering.

All of those actions were among those the agency was required to complete at the 180-day mark of the October executive order, which would have been over the weekend. The spokesperson also noted that the agency established an interagency working group for AI, as required by the order. 

OPM was given multiple actions under the sweeping order, most of which were aimed at helping agencies attract and retain a federal workforce prepared to address AI. That role is important as the government is working to rapidly hire for 100 AI positions by this summer. The latest actions from OPM give federal agencies a better roadmap for hiring workers in those positions.

They also add to OPM’s existing work under the order, which has included authorizing direct hire authority for AI-related positions and outlining incentives for attracting and retaining AI workers in the federal government. 

Notably, OPM’s action on the responsible use of generative AI comes as agencies across the government have been developing their own unique approaches to those tools for their workforces. Those policies have ranged from banning the use of certain third-party tools to allowing use across the workforce with guidelines. 

The OPM guidance, which was posted publicly Monday, outlines risks and benefits of the technology along with best practices for implementing it in work. 

Though it ultimately directs employees to consult their agency’s policy, the guidance provides examples of uses and specific considerations for those uses, such as summarizing notes and transcripts, drafting content, and using generative tools for software and code development. 

“GenAI has the potential to improve the way the federal workforce delivers results for the public,” the guidance says. “Federal employees can leverage GenAI to enhance creativity, efficiency, and productivity. Federal agencies and employees are encouraged to consider how best to use these tools to fulfill their missions.”

Under the order, OPM was required to create that guidance in consultation with the Office of Management and Budget. 

In addition to the competency models and guidance, the OPM spokesperson also disclosed that the agency issued an AI classification policy and talent acquisition guidance. While those actions support the rest of OPM’s work, they weren’t required by Biden’s executive order but rather the 2020 AI in Government Act. The spokesperson described those actions as addressing “position classification, job evaluation, qualifications, and assessments for AI positions.”

OPM is seeking feedback on that policy and guidance in a 30-day comment period ending May 29. 

This story was updated April 29, 2024, with additional information and links from OPM released Monday.

Kiran Ahuja to step down as OPM director https://fedscoop.com/kiran-ahuja-to-step-down-as-opm-director/ Tue, 16 Apr 2024 20:27:13 +0000 https://fedscoop.com/?p=77296 Ahuja has served as Office of Personnel Management director since June 2021 and is the first Asian American woman to lead the agency.

The post Kiran Ahuja to step down as OPM director appeared first on FedScoop.

Office of Personnel Management Director Kiran Ahuja is stepping down after three years leading the federal civilian workforce agency. 

Ahuja, who is the longest-serving OPM director in more than 10 years, will depart her role in the coming weeks, according to an agency release Tuesday. Ahuja was confirmed by the Senate in June 2021 and became both the first South Asian American and the first Asian American woman to lead OPM.

“From my time as a civil rights lawyer in the Department of Justice, to my years as OPM’s Chief of Staff, I’ve seen the power that public service has to change lives, rebuild communities, and make our nation stronger,” Ahuja said in a statement. “We have accomplished so much these last three years at OPM, but I am most proud of the friendships and bonds we built together in public service.” 

During her time leading OPM, Ahuja oversaw the administration’s efforts to implement a $15 minimum wage for federal workers, prohibit use of non-federal salary history in pay-setting for federal jobs, implement a new data strategy plan, and bolster the federal government’s tech workforce, among other things.

As part of the Biden administration’s AI efforts, Ahuja is a member of the AI and Tech Talent Task Force, which was created to support hiring efforts related to the president’s executive order on the technology. Related to that same order, OPM has also authorized direct hire authority for AI-related positions and outlined incentives for attracting and retaining AI workers in the federal government.

Prior to serving as OPM’s director, Ahuja was the agency’s chief of staff from 2015 to 2017. She also served in other federal government roles, including as executive director of the White House Initiative on Asian Americans and Pacific Islanders during the Obama administration and as an attorney at the Justice Department.

“Under Kiran’s leadership, OPM has bounced back stronger than ever and partnered with agencies across government to better serve the American people,” Rob Shriver, deputy director of OPM, said in a statement. “Kiran represents the very best of the Biden-Harris Administration, and I am honored to call her a dear colleague and friend.”

AI talent role, releasing code, deadline extension among additions in OMB memo https://fedscoop.com/ai-talent-role-releasing-code-deadline-extension-among-additions-in-omb-memo/ Fri, 29 Mar 2024 16:40:52 +0000 https://fedscoop.com/?p=76904 Requiring the release of custom AI code, designating an “AI Talent Lead,” and extending deadlines were among the changes made to the final version of a White House memo on AI governance.

The post AI talent role, releasing code, deadline extension among additions in OMB memo appeared first on FedScoop.

Additions and edits to the Office of Management and Budget’s final memo on AI governance create additional public disclosure requirements, provide more compliance time to federal agencies, and establish a new role for talent.

The policy, released Thursday, corresponds with President Joe Biden’s October executive order on AI and establishes a framework for federal agency use and management of the technology. Among the requirements, agencies must now vet their AI uses for risks, expand what they share in their annual AI use case inventories, and select a chief AI officer.

While the final version largely tracks with the draft version that OMB published for public comment in November, there were some notable changes. Here are six of the most interesting alterations and additions to the policy: 

1. Added compliance time: The new policy changes the deadline for agencies to be in compliance with risk management practices from Aug. 1 to Dec. 1, giving agencies four more months than the draft version. The requirement states that agencies must implement risk management practices or stop using safety- or rights-impacting AI tools until the agency is in compliance. 

In a document published Thursday responding to comments on the draft policy, OMB said it received feedback that the August deadline was “too aggressive” and that timeline didn’t account for action OMB is expected to take later this year on AI acquisition. 

2. Sharing code, data: The final memo adds an entirely new section requiring agencies to share custom-developed AI code and model information on an ongoing basis. Agencies must “release and maintain that code as open source software on a public repository” under the memo, unless sharing it would pose certain risks or it’s restricted by law, regulation, or contract.

Additionally, the memo states that agencies must share and release data used to test AI if it’s considered a “data asset” under the Open, Public, Electronic and Necessary (OPEN) Government Data Act, a federal law that requires such information to be published in a machine-readable format.

Agencies are required to share whatever information they can, even if a portion of it can’t be released publicly. The policy further states that agencies should, where they’re able, share resources that can’t be released without restrictions through federally operated means that allow controlled access, like the National AI Research Resource (NAIRR).

3. AI Talent Lead: The policy also states agencies should designate an “AI Talent Lead,” which didn’t appear in the draft. That official, “for at least the duration of the AI Talent Task Force, will be accountable for reporting to agency leadership, tracking AI hiring across the agency, and providing data to [the Office of Personnel Management] and OMB on hiring needs and progress,” the memo says. 

The task force, which was established under Biden’s AI executive order, will provide that official with “engagement opportunities to enhance their AI hiring practices and to drive impact through collaboration across agencies.” The memo also stipulates that agencies must follow hiring practices in OPM’s forthcoming AI and Tech Hiring Playbook.

Biden’s order placed an emphasis on AI hiring in the federal government, and so far OPM has authorized direct-hire authority for AI roles and outlined incentives for attracting and retaining AI talent. 

4. Aggregate metrics: Agencies and the Department of Defense will both have to “report and release aggregate metrics” for AI uses that aren’t included in their public inventory of use cases under the new memo. The draft version included only the DOD in that requirement, but the version released Thursday added federal agencies.

Those disclosures, which will be annual, will provide information about how many of the uses are rights- and safety-impacting and their compliance with the standards for those kinds of uses outlined in the memo. 

The use case inventories, which were established by a Trump-era executive order and later enshrined in federal statute, have so far lacked consistency across agencies. The memo and corresponding draft guidance for the 2024 inventories seek to enhance and expand those reporting requirements.

5. Safety, rights determinations: The memo also added a new requirement that agencies have to validate the determinations and waivers that CAIOs make on safety- and rights-impacting use cases, and publish a summary of those decisions on an annual basis. 

Under the policy, CAIOs can determine that an AI application presumed to be safety- or rights-impacting — which includes a wide array of uses such as election security and conducting biometric identification — doesn’t match the memo’s definitions for what should be considered safety- or rights-impacting. CAIOs may also waive certain requirements for those uses.

While the draft stipulated that agencies should report lists of rights- and safety-impacting uses to OMB, the final memo instead requires the annual validation of those determinations and waivers and public summaries.

In its response to comments, OMB said it made the update to address concerns from some commenters that CAIOs “would hold too much discretion to waive the applicability of risk management requirements to particular AI use cases.”

6. Procurement considerations: Three procurement recommendations related to test data, biometric identification, and sustainability were also added to the final memo. 

On testing data, OMB recommends agencies ensure developers and vendors aren’t using test data that an agency might employ to evaluate an AI system to train that system. For biometrics, the memo also encourages agencies to assess risks and request documentation on accuracy when procuring AI systems that use identifiers such as faces and fingerprints. 

And finally on sustainability, the memo includes a recommendation that agencies consider the environmental impact of “computationally intensive” AI systems. “This should include considering the carbon emissions and resource consumption from supporting data centers,” the memo said. That addition was a response to commenters who wanted the memo to expand risk assessment requirements to include environmental considerations, according to OMB.

The post AI talent role, releasing code, deadline extension among additions in OMB memo appeared first on FedScoop.

]]>
76904
How cloud modernization transformed OPM cybersecurity operations https://fedscoop.com/how-cloud-modernization-transformed-opm-cybersecurity-operations/ Tue, 27 Feb 2024 20:27:00 +0000 https://fedscoop.com/?p=76126 By shifting to cloud-native solutions, the U.S. Office of Personnel Management has significantly enhanced its underlying security infrastructure to better protect the agency from evolving cyber threats.

The post How cloud modernization transformed OPM cybersecurity operations appeared first on FedScoop.

]]>
Few organizations in the world provide human resource services at the scale of the U.S. Office of Personnel Management (OPM). OPM oversees personnel management services for 2.2 million federal workers — and the retirement benefits for another 2.7 million annuitants, survivors, and family members. Because the agency also manages the federal workforce’s recruiting, hiring, and benefits management, OPM is responsible for handling vast amounts of sensitive data, making it a prime target for cyberattacks. 

Following a massive data breach in 2015, OPM instituted a comprehensive overhaul of its IT and security practices. However, in the years since, it became increasingly clear that without modernizing its underlying IT infrastructure, many of the remedies OPM put in place were becoming outmoded in the face of ever more sophisticated cyberattacks.

That was especially apparent to Guy Cavallo, who arrived at OPM in the fall of 2020 as principal deputy CIO after leading sweeping IT modernization initiatives at the Small Business Administration (SBA) and before that at the Transportation Security Administration (TSA). He was named OPM’s CIO in July 2021.

Recognizing new cyber challenges

“We looked at the on-premises cyber tools that OPM was running since the breach and saw while they were effective, with today’s advancements in AI and cyber capabilities, they weren’t keeping up with the attack vectors we’re facing today,” said Cavallo in a recent interview. Threat actors had shifted to identity-based attacks using more sophisticated tactics, requiring advanced detection and response solutions.

Guy Cavallo, CIO, OPM

“We knew with AI coming and the Executive Order on Cybersecurity requiring logging to get visibility into your environment, investing in on-premises hardware would be a never-ending battle of running out of storage space,” he concluded.

The cloud was “the ideal elastic storage case for that,” he continued. But it also offered other critical solutions. The cloud was the ideal way to host applications to ensure “that we’re always up to date on patching and versions, leaving that to the cloud vendors to take care of — something that the federal government struggles with,” he said.

Checklist for a better solution

Cavallo wanted to avoid the mistake he had seen other organizations make, trying to weave all kinds of tools into an enterprise security blanket. “It’s incredibly difficult to integrate them and not have them attack each other — or also not have gaps between them,” he said. “I’m a believer that simpler is much better than tying together best-of-breed from multiple vendors.”

James Saunders, CISO, OPM

That drove Cavallo and OPM Chief Information Security Officer James Saunders to pursue a fundamental shift to a cloud-native cybersecurity platform and “making that the heart of our security apparatus,” said Saunders.  

After reviewing the options, they elected to move to Microsoft’s Azure cloud-based cybersecurity stack “so that we can take advantage of the edge of cloud, and cloud in general, to collect data logs.” Additionally, it would mean “We didn’t have to worry about software patching and ‘Do I have enough disk space?’ It also allows us to springboard into more advanced capabilities such as artificial intelligence,” Saunders said.

Because OPM exchanges data with many federal agencies that rely on different data systems, Cavallo and Saunders also implemented a cloud access security broker (CASB) — a security policy enforcement engine that monitors and manages security activity across multiple domains from a single location. It also “enables our security analysts to be more efficient and identify threats in a more holistic manner,” Saunders explained.
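The value of a CASB is that one rule set is enforced at a single point, no matter which cloud an event comes from. A toy illustration of that single-enforcement-point idea — the event fields and rules here are hypothetical, not OPM's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    cloud: str        # e.g. "azure", "aws" -- the CASB sees events from every domain
    user: str
    action: str       # e.g. "download", "share_external"
    sensitive: bool   # whether the data involved is classified as sensitive

def evaluate(event: Event) -> str:
    # One central policy applied uniformly, regardless of which cloud emitted the event.
    if event.sensitive and event.action == "share_external":
        return "block"
    if event.sensitive and event.action == "download":
        return "alert"
    return "allow"

events = [
    Event("azure", "alice", "share_external", sensitive=True),
    Event("aws", "bob", "download", sensitive=True),
    Event("azure", "carol", "download", sensitive=False),
]
decisions = [evaluate(e) for e in events]
# decisions -> ["block", "alert", "allow"]
```

Because every domain funnels through the same `evaluate` step, analysts review one stream of decisions instead of reconciling separate per-cloud tools — the efficiency Saunders describes.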

Added benefits

“There is a general misconception that you can only use cloud tools from the host vendor to monitor and protect that environment.  We found that leveraging cyber defenses that span multiple clouds is a better solution for us instead of having multiple different tools performing the same function,” Cavallo added.

Microsoft’s extensive threat intelligence ecosystem and the ability to reduce the number of contracts OPM has to maintain were also critical factors in their decision to move to Azure, Saunders added.

The pay-off

The migration from on-premises infrastructure to the cloud was a complex process involving the retirement of more than 50 servers and the decommissioning of multiple storage areas and SQL databases, according to Saunders. The most challenging aspect, though, was not the technology but managing the transition with the workforce. Extensive training and organizational change management were as critical as the technical migration to the success of the transition.

According to Saunders, the benefits didn’t take long to materialize:

  • Enhanced visibility: OPM now has a more comprehensive view of its security posture, thanks to the centralized platform and increased log collection.
  • Improved threat detection and response: AI-powered tools and Microsoft’s threat intelligence help OPM identify and respond to threats faster and more effectively.
  • Reduced costs and complexity: Cloud-native solutions eliminate the need for buying expensive on-premises hardware and software, while also simplifying management and maintenance.
  • Increased scalability and agility: The cloud platform allows OPM to easily scale its security infrastructure as needed to meet evolving threats and business requirements.

Collectively, those and related cloud benefits are also helping OPM make faster headway in meeting the administration’s zero-trust security goals.

Lessons learned

Perhaps one of the most important benefits is being able to demonstrate the magnitude and nature of today’s threat landscape to the agency’s leadership and how OPM is much better prepared to defend against it, according to Cavallo.

“When James and I showed them the visibility that we have from all those logs, it was a drop-the-mic moment for them. We can say we blocked 4,000 attacks in the last hour, but until you actually show them a world map and our adversaries trying to get into OPM, then be able to click and show the real details of it — those threats get lost in the noise,” he said.

“My recommendation at the CIO level is, this is a better mousetrap. But you can’t just expect people to flock to it. You have to go show them why it’s a better mousetrap.”

Among the other lessons Cavallo recommends to fellow IT leaders:

  • Focus on simplicity: Choose a single, integrated security platform to avoid the complexity of managing multiple tools.
  • Invest in training: Ensure your staff is trained and familiar with new cloud-native security tools and processes.
  • Start small and scale gradually: Begin with a pilot project and gradually migrate your security infrastructure to the cloud.
  • Communicate effectively: Clearly explain the benefits of cloud-native security to your stakeholders and address any concerns.

This report was produced by Scoop News Group for FedScoop as part of a series on technology innovation in government, underwritten by Microsoft Federal.

The post How cloud modernization transformed OPM cybersecurity operations appeared first on FedScoop.

]]>
76126
OPM outlines incentives to attract, retain federal AI workforce https://fedscoop.com/opm-outlines-federal-ai-workforce-incentives/ Tue, 27 Feb 2024 15:00:00 +0000 https://fedscoop.com/?p=76215 New memo follows the Office of Personnel Management’s authorization of direct-hire authority for AI positions in December.

The post OPM outlines incentives to attract, retain federal AI workforce appeared first on FedScoop.

]]>
The Office of Personnel Management sent guidance to federal agencies Tuesday outlining pay and benefits flexibilities for AI positions as the administration works to attract and retain a workforce equipped to address the budding technology.

The memo and guidance from OPM Director Kiran Ahuja, which was shared with FedScoop, summarizes the “considerable discretionary authority” that agencies have for pay, incentive pay, leave and workforce flexibility programs for AI and other key technical jobs, and includes tips for agencies seeking to use the incentives. 

Among the benefits noted in the guidance: Recruitment and retention incentives, student loan repayment, a higher annual leave accrual rate for certain positions, multiple mechanisms for allowing higher pay, alternative work schedules and remote work.

The guidance was required by President Joe Biden’s October executive order, which placed an emphasis on federal AI hiring and included plans for “a national surge in AI talent in the Federal Government.” As part of those efforts, OPM announced in December that it authorized direct-hire authority for AI positions in government to create more flexibility for recruitment.

“For the few flexibilities that require OPM approval — special rates, critical pay, and waivers of the recruitment, relocation, and retention incentive payment limits — we stand ready to assist agencies and respond to their requests for enhanced compensation tools,” the memo said.

The flexibilities OPM outlined include a recruitment incentive for new employees and a relocation incentive for existing employees in difficult-to-fill positions of up to 25% of basic pay times the number of years in a service agreement, with a maximum of four years. The guidance noted that for both of those incentives, OPM’s approval of direct-hire authority can serve as an agency’s justification that a position is difficult to fill without any further evidence.

Agencies can also offer a retention incentive for certain workers who are likely to leave the federal government of up to 25% of basic pay for a single employee or 10% for a group. To qualify for that incentive, employees don’t have to have a job offer from outside the federal government, OPM said.

Already, agencies are working to attract AI talent quickly. Earlier this month, the Department of Homeland Security announced a “hiring sprint” to build a team of 50 AI experts for its “AI Corps,” modeled after the U.S. Digital Service. That sprint, the agency said, will use OPM’s direct-hire authorization for AI positions to expedite and streamline the process.

Caroline Nihill contributed to this article. 

The post OPM outlines incentives to attract, retain federal AI workforce appeared first on FedScoop.

]]>
76215
AI integration, data-driven decisions should be among top workforce priorities for agencies, OPM says https://fedscoop.com/opm-ai-integration-data-driven-decisions-top-workforce-priorities/ Fri, 23 Feb 2024 19:01:22 +0000 https://fedscoop.com/?p=76179 New OPM playbook is intended to cultivate an “inclusive, agile and engaged” federal workforce.

The post AI integration, data-driven decisions should be among top workforce priorities for agencies, OPM says appeared first on FedScoop.

]]>
The federal workforce of the future will be tasked with leveraging artificial intelligence — including in hiring — and using accurate, timely data to inform policy decisions, the Office of Personnel Management said in an announcement Friday. 

In releasing its workforce playbook for federal agencies, OPM said agencies should implement strategies that aim to enable a workforce that is “inclusive, agile and engaged, with the right skills to enable mission delivery,” according to a press release. OPM listed 12 priorities, including AI integration and data-driven decisions. 

“OPM is 100% invested in strengthening the federal workforce,” OPM Director Kiran Ahuja said in the release. “This playbook is just another example of OPM’s ongoing efforts to equip federal agencies with the tools and resources to hire the right talent and strategically plan for their future workforce. The federal government works best when we leverage the full talent of our nation and workforce — this playbook is full of useful strategies to do just that.”

In the playbook, OPM calls on federal agencies to use “appropriate” AI capabilities in the HR process, understand how the technology will impact the workforce and safeguard employees accordingly, upskill teams with “appropriate competencies” and train existing talent on recent AI use cases and their applicability to current projects. The office pointed to a report from the Government Accountability Office that shared approximately 1,200 existing and planned use cases along with specific “challenges or opportunities that AI may help solve.”

The Biden administration is also asking agencies to explore the use of generative AI to see how the technology could “improve efficiencies” as needed. 

Additionally, the office is asking agencies to take steps toward implementing data into decision-making by using available data products and platforms, ensuring that data standards are implemented, identifying data literacy gaps and developing strategies to guarantee employees have the necessary data skills. OPM shared that the Department of Defense uses an advanced analytics platform that “supplies leaders with decision support analytics, visualizations, data tools and associated support services.” 

OPM also said in the release that it will provide training and technical assistance to agencies as they implement these strategies through webinars, and it will be posting “periodic updates” to its Workforce of the Future webpage.

The post AI integration, data-driven decisions should be among top workforce priorities for agencies, OPM says appeared first on FedScoop.

]]>
76179
How risky is ChatGPT? Depends which federal agency you ask https://fedscoop.com/how-risky-is-chatgpt-depends-which-federal-agency-you-ask/ Mon, 05 Feb 2024 17:20:57 +0000 https://fedscoop.com/?p=75907 A majority of civilian CFO Act agencies have come up with generative AI strategies, according to a FedScoop analysis.

The post How risky is ChatGPT? Depends which federal agency you ask appeared first on FedScoop.

]]>
From exploratory pilots to temporary bans on the technology, most major federal agencies have now taken some kind of action on the use of tools like ChatGPT. 

While many of these actions are still preliminary, growing focus on the technology signals that federal officials expect to not only govern but eventually use generative AI. 

A majority of the civilian federal agencies that fall under the Chief Financial Officers Act have either created guidance, implemented a policy, or temporarily blocked the technology, according to a FedScoop analysis based on public records requests and inquiries to officials. The approaches vary, highlighting that different sectors of the federal government face unique risks — and unique opportunities — when it comes to generative AI. 

As of now, several agencies, including the Social Security Administration, the Department of Energy, and Veterans Affairs, have taken steps to block the technology on their systems. Some, including NASA, have established or are working on secure testing environments to evaluate generative AI systems. The Agriculture Department has even set up a board to review potential generative AI use cases within the agency. 

Some agencies, including the U.S. Agency for International Development, have discouraged employees from inputting private information into generative AI systems. Meanwhile, several agencies, including Energy and the Department of Homeland Security, are working on generative AI projects. 

The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury did not respond to requests for comment, so their approach to the technology remains unclear. Other agencies, including the Small Business Administration, referenced their work on AI but did not specifically address FedScoop’s questions about guidance, while the Office of Personnel Management said it was still working on guidance. The Department of Labor didn’t respond to FedScoop’s questions about generative AI. FedScoop obtained details about the policies of Agriculture, USAID, and Interior through public records requests. 

The Biden administration’s recent executive order on artificial intelligence discourages agencies from outright banning the technology. Instead, agencies are encouraged to limit access to the tools as necessary and create guidelines for various use cases. Federal agencies are also supposed to focus on developing “appropriate terms of service with vendors,” protecting data, and “deploying other measures to prevent misuse of Federal Government information in generative AI.”

Agency policies on generative AI differ
Agency | Policy or guidance | Risk assessment | Sandbox | Relationship with generative AI provider | Notes

  • USAID: Neither banned nor approved, but employees discouraged from using private data in memo sent in April. | Didn’t respond to a request for comment. Document was obtained via FOIA.
  • Agriculture: Interim guidance distributed in October 2023 prohibits employee or contractor use in official capacity and on government equipment. Established review board for approving generative AI use cases. | A March risk determination by the agency rated ChatGPT’s risk as “high.” | OpenAI disputed the relevance of a vulnerability cited in USDA’s risk assessment, as FedScoop first reported.
  • Education: Distributed initial guidance to employees and contractors in October 2023. Developing comprehensive guidance and policy. Conditionally approved use of public generative AI tools. | Is working with vendors to establish an enterprise platform for generative AI. | Not at the time of inquiry. | Agency isn’t aware of generative AI uses in the department and is establishing a review mechanism for future proposed uses.
  • Energy: Issued a temporary block of ChatGPT but said it’s making exceptions based on needs. | Sandbox enabled. | Microsoft Azure and Google Cloud.
  • Health and Human Services: No specific vendor or technology is excluded, though subagencies, like the National Institutes of Health, prevent use of generative AI in certain circumstances. | “The Department is continually working on developing and testing a variety of secure technologies and methods, such as advanced algorithmic approaches, to carry out federal missions,” Chief AI Officer Greg Singleton told FedScoop.
  • Homeland Security: For public, commercial tools, employees might seek approval and attend training. Four systems, ChatGPT, Bing Chat, Claude 2 and DALL-E 2, are conditionally approved. | Only for use with public information. | In conversations. | DHS is taking a separate approach to generative AI systems integrated directly into its IT assets, CIO and CAIO Eric Hysen told FedScoop.
  • Interior: Employees “may not disclose non-public data” in a generative AI system “unless or until” the system is authorized by the agency. Generative AI systems “are subject to the Department’s prohibition on installing unauthorized software on agency devices.” | Didn’t respond to a request for comment. Document was obtained via FOIA.
  • Justice: The DOJ’s existing IT policies cover artificial intelligence, but there is no separate guidance for AI. No use cases have been ruled out. | No plans to develop an environment for testing currently. | No formal agreements beyond existing contracts with companies that now offer generative AI. | DOJ spokesperson Wyn Hornbuckle said the department’s recently established Emerging Technologies Board will ensure that DOJ “remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies.”
  • State: Initial guidance doesn’t automatically exclude use cases. No software type is outright forbidden, and generative AI tools can be used with unclassified information. | Currently developing a tailored sandbox. | Currently modifying terms of service with AI service providers to support State’s mission and security standards. | A chapter in the Foreign Affairs Manual, as well as State’s Enterprise AI strategy, apply to generative AI, according to the department.
  • Veterans Affairs: Developed internal guidance in July 2023 based on the agency’s existing ban on using sensitive data on unapproved systems. ChatGPT and similar software are not available on the VA network. | Didn’t directly address, but said the agency is pursuing low-risk pilots. | VA has contracts with cloud companies offering generative AI services.
  • Environmental Protection Agency: Released a memo in May 2023 saying personnel were prohibited from using generative AI tools while the agency reviewed “legal, information security and privacy concerns.” Employees with “compelling” uses are directed to work with the information security officer on an exception. | Conducting a risk assessment. | No testbed currently. | EPA is “considering several vendors and options in accordance with government acquisition policy” and is “also considering open-source options,” a spokesperson said. | The department intends to create a more formal policy in line with Biden’s AI order.
  • General Services Administration: Publicly released policy in June 2023 saying it blocked third-party generative AI tools on government devices. According to a spokesperson, employees and contractors can only use public large language models for “research or experimental purposes and non-sensitive uses involving data inputs already in the public domain or generalized queries. LLM responses may not be used in production workflows.” | Agency has “developed a secured virtualized data analysis solution that can be used for generative AI systems,” a spokesperson said.
  • NASA: May 2023 policy says public generative AI tools are not cleared for widespread use on sensitive data. Large language models can’t be used in production workflows. | Cited security challenges and limited accuracy as risks. | Currently testing the technology in a secure environment.
  • National Science Foundation: Guidance for generative AI use in proposal reviews expected soon; also released guidance for the technology’s use in merit review. Set of acceptable use cases is being developed. | “NSF is exploring options for safely implementing GAI technologies within NSF’s data ecosystem,” a spokesperson said. | No formal relationships.
  • Nuclear Regulatory Commission: In July 2023, the agency issued an internal policy statement to all employees on generative AI use. | Conducted “some limited risk assessments of publicly available gen-AI tools” to develop the policy statement, a spokesperson said. NRC plans to continue working with government partners on risk management, and will work on security and risk mitigation for internal implementation. | NRC is “talking about starting with testing use cases without enabling for the entire agency, and we would leverage our development and test environments as we develop solutions,” a spokesperson said. | Has a Microsoft Azure AI license. NRC is also exploring the implementation of Microsoft Copilot when it’s added to the Government Community Cloud. | “The NRC is in the early stages with generative AI. We see potential for these tools to be powerful time savers to help make our regulatory reviews more efficient,” said Basia Sall, deputy director of the NRC’s IT Services Development & Operations Division.
  • Office of Personnel Management: The agency is currently working on generative AI guidance. | “OPM will also conduct a review process with our team for testing, piloting, and adopting generative AI in our operations,” a spokesperson said.
  • Small Business Administration: SBA didn’t address whether it had a specific generative AI policy. | A spokesperson said the agency “follows strict internal and external communication practices to safeguard the privacy and personal data of small businesses.”
  • Social Security Administration: Issued temporary block on the technology on agency devices, according to a 2023 agency report. | Didn’t respond to a request for comment.
Sources: U.S. agency responses to FedScoop inquiries and public records.
Note: Chart displays information obtained through records requests and responses from agencies. The Departments of Commerce, Housing and Urban Development, Transportation, and Treasury didn’t respond to requests for comment. The Department of Labor didn’t respond to FedScoop’s questions about generative AI.

The post How risky is ChatGPT? Depends which federal agency you ask appeared first on FedScoop.

]]>
75907
FITARA scorecard adds cloud metric, prompts expected grade declines https://fedscoop.com/fitara-scorecard-adds-cloud-metric-prompts-expected-grade-declines/ Thu, 01 Feb 2024 23:30:28 +0000 https://fedscoop.com/?p=75884 Lower grades were anticipated with the addition of a cloud metric in the 17th FITARA scorecard, Rep. Connolly said. “The object here is to move up.”

The post FITARA scorecard adds cloud metric, prompts expected grade declines appeared first on FedScoop.

]]>
A new version of an agency scorecard tracking IT modernization progress unveiled Thursday featured tweaked and new metrics, including one for cloud computing that caused an anticipated falter in agency grades. 

The latest round of grading awarded one A, 10 Bs, 10 Cs, and three Ds to federal agencies, Rep. Gerry Connolly, D-Va., announced at a roundtable discussion on Capitol Hill. While the grades were generally a decline from the last iteration of the scorecard, Connolly said that starting at a “lower base” was expected with the addition of a new category. “The object here is to move up.”

Carol Harris, director of the Government Accountability Office’s IT and Cybersecurity team, who was also at the roundtable, similarly attributed the decline to the cloud category.

“A large part of this decrease in the grades was driven by the cloud computing category, because it is brand new, and it’s something that we’ve not had a focus on relative to the scorecard,” Harris said.

The FITARA scorecard measures agency progress in meeting the requirements of the 2014 Federal Information Technology Acquisition Reform Act, and over time has added other technology priorities for agencies. In addition to cloud, the new scorecard also changed existing metrics related to a 2017 law, added a new category grading IT risk assessment progress, and installed a progress tracker.

“I think it’s important the scorecard be a dynamic scorecard,” Connolly said in an interview with FedScoop after the roundtable. He added: “The goal isn’t, let’s have brand new, shiny IT. It’s to make sure that our functions and operations are better serving the American people and that they’re protected.”

Harris also underscored the accomplishments of the scorecard, citing $4.7 billion in savings as a result of closing roughly 4,000 data centers and $27.2 billion in savings as the result of eliminating duplicative systems across government.

“So, tremendous accomplishments all coming out of FITARA and the implementation of FITARA,” she said.

The Thursday roundtable featured agency representatives from the Office of Personnel Management, the Nuclear Regulatory Commission, the Department of Housing and Urban Development, and the U.S. Agency for International Development. USAID was the only agency to get an A.

Updated scorecard

Among the changes, the new scorecard updated the existing category for Modernizing Government Technology to reflect whether agencies have an account dedicated to IT that “satisfies the spirit of” the Modernizing Government Technology Act, which became law in 2017.

Under that metric, each agency must have a dedicated funding stream for government IT that’s controlled by the CIO and provides at least three years of flexible spending, Connolly said at the roundtable.

The transparency and risk management category has also evolved into a new CIO investment evaluation category, Connolly said in written remarks ahead of the roundtable. That category will grade how recently each agency’s IT Dashboard “CIO Evaluation History” data feed reflects new risk assessments for major IT investments, he said.

The 17th scorecard also added a progress tracker, which Connolly said Democrats on the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation worked on with the GAO to create. Connolly is the ranking member of that subcommittee.

“This section will provide transparency into metrics that aren’t being regularly updated or do not lend themselves to grading across agencies,” Connolly said, adding the data “still merits congressional attention, and we want to capture it with this tool.”

The progress tracker also allows stakeholders to keep tabs on categories the subcommittee has retired for the scorecard.

The release of a new scorecard has in the past been a hearing, but Connolly indicated the Republican majority declined to take the issue up. 

At the start of the meeting, Connolly said he was “disappointed” that “some of the Republican majority had turned their backs on FITARA.” He later noted that by “the difference of two votes, this would be called a hearing instead of a meeting.”

FITARA scorecard grades in September were also announced with a roundtable and not a hearing.

“FITARA is a law concerning federal IT management and acquisition,” a House Committee on Oversight and Accountability spokesperson said in a statement to FedScoop. South Carolina Republican Rep. Nancy Mace’s “subcommittee has held a dozen hearings in the past year concerning not only federal information technology management and acquisition, but also pressing issues surrounding artificial intelligence, and cybersecurity. These hearings have been a critical vehicle for substantive oversight and the development of significant legislation.”

This story was updated Feb. 2, 2024, with comments from a House Committee on Oversight and Accountability spokesperson.

The post FITARA scorecard adds cloud metric, prompts expected grade declines appeared first on FedScoop.

]]>
75884