Billy Mitchell Archives | FedScoop
https://fedscoop.com/author/billy-mitchell/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

Google earns FedRAMP High authorization for more than 100 additional commercial services
https://fedscoop.com/google-earns-fedramp-high-authorization-for-more-than-100-additional-commercial-services/
Wed, 22 May 2024 12:01:00 +0000
The additional services include many that are most in demand for government customers, like AI, zero-trust security, and data and analytics tools.

More than 100 Google commercial cloud services recently received FedRAMP High authorizations, including its Vertex AI platform and other artificial intelligence capabilities, the company announced Wednesday.

Google has several services — as well as its underlying commercial cloud infrastructure — that have previously received FedRAMP High authorizations. But with this latest spate of authorizations, the company adds many services that are in demand for government customers, like AI, zero-trust security, and data and analytics tools.

In an interview with FedScoop ahead of the announcement, Leigh Palmer, vice president of technology, strategy and delivery for Google Public Sector, said this not only gives federal civilian agencies that work with highly sensitive data sets — like those in health care, law enforcement, finance and emergency response, among others — a long list of new tools to work with, but they’re also hosted in a commercial environment, which she said comes with added benefits.

“These are certified on our commercial cloud, not a separate [government-specific] cloud instance,” Palmer said, referencing the model some cloud vendors have used to create separate cloud enclaves limited only to government work for security reasons. “Which means that you have the full capability of commercial cloud, right? More regions, more elasticity, more data, compute, storage, etc.”

That’s particularly important, she said, as the Office of Management and Budget, in draft guidance issued last fall, pushes to modernize FedRAMP — short for the Federal Risk and Authorization Management Program — with more focus placed on agencies using commercial cloud services instead of government-specific offerings.

“Instead of having physical separation, we have logical separation [through] encryption. So we can run the same workloads on our commercial cloud without having to have that physical separation,” Palmer said. “Whenever you have to do that, it’s going to be difficult to keep parity across the environments.”

On top of that, the compute-intensive tools — such as AI — that more and more agencies are beginning to use will stand to benefit from the scale of commercial cloud, she added.

“As you look towards AI and things that are going to require, you know, heavy, massive amounts of compute, it’s going to be much more cost-effective and easy for our customers to do that in a commercial cloud than in [a government-specific] environment,” Palmer said.

On the topic of the federal government’s recent work to modernize FedRAMP, she added that Google is “really optimistic and encouraged by the modernization changes that are happening at FedRAMP.”

“At the end of the day, I think what we all want is more capabilities in the government’s hands faster” and done so safely, Palmer said.

The new authorizations come after Google Public Sector last month announced that defense and intelligence agencies were approved to use Google’s air-gapped cloud platform, Google Distributed Cloud Hosted, to process top-secret workloads. Palmer called the achievements “complementary” to one another, and added that Google is continuing work to add more services that meet the Department of Defense’s IL-5 compliance for some of its most sensitive but unclassified workloads.

Oracle approved to handle government secret-level data
https://fedscoop.com/oracle-approved-to-handle-government-secret-level-data/
Tue, 23 Apr 2024 14:42:51 +0000
The accreditation puts Oracle on a level playing field with its top competitors in the federal cloud space — Amazon, Google and Microsoft.

Oracle has added its name to the short list of cloud vendors approved to handle classified, secret-level data for the federal government.

The company on Monday announced that three of its classified, air-gapped cloud regions received accreditation from the Department of Defense to handle workloads at the secret level — what the department refers to as Impact Level 6 (IL-6).

The achievement comes after Oracle last August also earned a Top Secret/Sensitive Compartmented Information accreditation from the intelligence community. With both that and the latest secret-level cloud authorization, Oracle is approved to handle government information at any classification level in the cloud.

“America’s warfighters must have the world’s preeminent technology and our taxpayers insist that technology is delivered at competitive costs. Oracle is bringing both to the Department of Defense’s Secret networks,” Rand Waldron, vice president of Oracle, said in a statement. “Technology no longer sits outside the mission; technology is a part of the mission. In austere locations with limited communication, and in massive secure data centers, Oracle is bringing our best capabilities to serve the men and women that defend the U.S. and our Allies.”

While the news chiefly benefits the DOD, which is expanding its use of cloud in the classified space and at the edge through its Joint Warfighting Cloud Capability, it ultimately puts Oracle on a level playing field with its top competitors in the federal cloud space — Amazon, Google and Microsoft, which all earned secret and top-secret accreditations ahead of Oracle. Google announced its accreditation at the secret and top-secret levels just two weeks earlier.

Notably, it is those companies that Oracle is vying against for DOD task orders under its $9 billion JWCC cloud contract. Those companies also hold spots, with IBM, on the intelligence community’s multibillion-dollar Commercial Cloud Enterprise (C2E) contract, which requires work at the secret and top-secret levels as well.

Top public sector takeaways from Google Cloud Next 2024
https://fedscoop.com/top-public-sector-takeaways-from-google-cloud-next-2024/
Wed, 17 Apr 2024 20:15:24 +0000
Top leaders from Google see huge opportunities for the many new products and announcements out of Next to impact their partners across the public sector.

LAS VEGAS — More than 218 new product announcements over the course of three days delivered to more than 30,000 attendees — that, in a nutshell, was Google Cloud’s annual Next tech conference, which took place last week in Las Vegas.

A frenetic energy consumed the Mandalay Bay Hotel and Casino, with much of that excitement focused on the untapped potential of generative artificial intelligence in every industry. During his keynote to open the conference, Google Cloud CEO Thomas Kurian acknowledged the many massive global corporations like Goldman Sachs, Mercedes-Benz and Uber that are using the company’s AI technology to innovate.

And in such an excitable, flashy environment, it’s easy for Google’s budding work with public sector entities to get overshadowed. But despite that, top leaders from Google see huge opportunities for the many new products and announcements out of Next to impact their partners across federal, state and local governments and academia, forecasting continued growth for the nearly two-year-old public sector arm of the Silicon Valley cloud giant.

Karen Dahut, CEO, Google Public Sector

With a sizable portion of Next announcements dedicated to AI, Karen Dahut pointed to “all of the work around building a fully integrated enterprise-grade AI stack” as “so important to our public-sector customers.”

That focus on “enterprise-grade” is the key, she told FedScoop. “It’s not a consumer product being used in enterprise; it is an enterprise product built for large, complex bureaucratic organizations.”

Security also matters a great deal with AI, as it does for any part of the public sector enterprise IT stack, and Dahut pointed to that as a differentiator for Google’s cloud and AI offerings, particularly with a new security operations tool built on Google’s Gemini generative AI platform.

“Security operations are top of mind for all of our federal, state and local governments,” she said. “And knowing that Google Cloud is the most secure cloud — and we continue to build on that — is super important to them.”

Dahut suggested that, nearly two years after the creation of Google Public Sector in June 2022, the organization is still early in its journey to scale as a major service provider in the public sector technology ecosystem.

The thing that “gives me such confidence in what we are doing is the passion that our customers bring to us,” Dahut explained. “They want to work with us,” she said, referring to partnerships with major customers like the Department of Defense “to purpose-build technology.”

“This isn’t a case where we’re building technology in the back room, and then we come and try to get it accredited,” Dahut said. “We are working very closely with them to build what they need and accredit it along the way.”

She continued: “And that partnership, and that passion for what we’re doing, transcends just Google — our customers feel it, too. So it’s very gratifying.”

Leigh Palmer, VP of Delivery and Operations, Google Public Sector

As Dahut said, Google has worked to position itself as the leader in cloud security to differentiate itself from other cloud services competitors like Amazon and Microsoft.

In that same vein, for Leigh Palmer, Google Public Sector’s new accreditation for Google Distributed Cloud Hosted to handle secret and top secret data for the Department of Defense and intelligence community was the biggest announcement to come out of the conference.

“I like to tell people that sometimes there’s a benefit to being third to market, right? So we could see what went before us and make adjustments for very specific mission sets,” Palmer said in an interview with FedScoop, referring to Google’s biggest competition.

With the authorizations, agencies across the Department of Defense and the intelligence community can use Google Distributed Cloud Hosted — an air-gapped private cloud service tailored to workloads that demand maximized security requirements — to support some of their most sensitive data and applications. It also comes with “core AI services out of the box,” like language translation, search, Document AI and others, Palmer explained.

“What we did when we built and designed GDC-H is we built that cloud from the ground up. So instead of taking a cloud and trying to shrink it into a small form factor, we started with the small form factor and made it expandable, right?” Palmer said. “So open Kubernetes-based, very open source software with the idea of multi-cloud, hybrid cloud in mind. Very customized for data analytics and AI at the edge, because that is our sweet spot. That is what Google is really good at.”

Because of the small form factor, there’s immense flexibility, particularly for sensitive missions at the edge, she said.

“So lots of flexibility for that mission need between kind of that enterprise scale, that tactical scale, and then that middle scale of, you know, ‘I need it to be in a data center in Germany,’ for example,” Palmer added.

Sandra Joyce, VP of Mandiant Intelligence

Much of the hype concerning AI has focused on its ability both to drive new cyberattacks and to defend against them. Many have predicted that threat actors will use AI to gain an upper hand over organizations’ network defenses.

But largely, that is not yet the case, Sandra Joyce told FedScoop, calling it a “period of opportunity” for network defenders in the public sector to bolster their use of AI for cybersecurity.

Joyce said Mandiant tracks “threat actors from nation states, cybercriminals, hacktivists, all types. And thankfully, right now we’re in this real period of opportunity is what I like to call it. Of all the breaches that we investigate, we’re not seeing AI as the main factor in any of those breaches, which means to me that we have an opportunity to develop our own AI for defenses.”

Where Mandiant sees AI being used by hackers is for “information operations,” Joyce said. “We’re seeing it in the underground, with purported jailbreaks of chatbots that they can provide access to. These pretty low-level and very questionable things. It’s experimentation. So there’s not a ton going on that I would say would outpace what you could traditionally do in the threat landscape.”

But on the cyber defense side, things are moving more dramatically, she said, particularly in the ability to supplement the global shortage in cybersecurity talent.

“What we’re doing with AI internally is very exciting. We’re using it for many different uses, including productivity. So we’re using it to reverse [engineer] malware faster. We’re using it to look at adversary smart contracts. We’re doing that type of experimentation and putting it into our workflows. And we’re already getting some gains in productivity,” she said. “And that’s the important part. Because if you’re looking at the future of cybersecurity, what it really is, is a growing problem, and we cannot keep throwing people at the problem. We need to have technical solutions and AI is one way we can start doing that.”

During her time at Next, Joyce and her colleagues at Google-owned Mandiant also spoke about the frustrations that are keeping CISOs up at night.

Chris Hein, Director of Customer Engineering, Google Public Sector

As the director of customer engineering, Chris Hein inherently serves in a more technical role for Google Public Sector. Despite that, the biggest impact felt from the many announcements out of Next, from Hein’s perspective, is the changing nature of how the government relates to its constituents.

“You think about how many of these services are being built out from like a customer perspective. And so what I really enjoy doing, and I think it’s really important for government agencies to think about, is how do they take their constituent services and make them 10 times better than they are today,” Hein told FedScoop.

Though AI models are often shrouded in new levels of technical complexity in how they function, the barrier to entry for public sector entities to use them is actually quite low, he said.

“Being able to use these things does not take a whole lot of work or a whole lot of know-how. And so I think it really starts to open the door for way better experiences,” Hein said.

With that, it’s still so important, he said, to “do it safely, responsibly, making sure that we’re reducing hallucinations, and all those kinds of things that we talk about these days.”

“You have to start from first principles on a lot of these things,” Hein said, referencing building that safety and responsibility into AI. “What Google has been building, when we look at our AI strategy, is building on top of this overall zero-trust environment that starts from the ground up. The number one thing we’re going to care about is the security of that dataset and the privacy of that dataset. And so AI is not new to that — that’s just another aspect of that exact same paradigm that we’ve been so focused on for so long.”

With updated IT strategic plan, USAID tech is a driver of mission, not compliance, CIO says
https://fedscoop.com/with-updated-it-strategic-plan-usaid-tech-is-a-driver-of-mission-not-compliance-cio-says/
Tue, 09 Apr 2024 22:20:22 +0000
"It's about making sure that we're supporting the missions and enabling and empowering them," CIO Jason Gray said of the role of USAID's IT.

LAS VEGAS — With the U.S. Agency for International Development’s issuance of a new IT strategic plan in December, the agency has created a new vision for its technology management that better drives mission outcomes rather than checking the box for compliance, according to its top IT official.

“In the last year, we have updated our strategic plan to focus on making sure that everyone understands our alignment with the mission itself. It’s not just about compliance. It’s not just about following the law. Yes, it is about following the law. But it’s not just about that,” USAID CIO Jason Gray said during a panel Tuesday at Google Cloud’s annual tech conference Next. “It’s about making sure that we’re supporting the missions and enabling and empowering them.”

USAID’s IT strategic plan, which runs from 2024 through 2028, is built around five pillars: “creating a culture of data- and insights-based decision making; delivering agile, secure, and resilient IT platforms; building worldwide skills and capacity; establishing pragmatic governance; and driving high operational performance.”

Gray said USAID’s journey to adopt cloud has been “critical” in better connecting tech to mission in recent years, namely by making it easier to connect and collaborate in austere environments around the world where USAID is called to deliver aid.

“Some of the areas and countries that we operate in, even getting power, reliable power is a massive challenge,” he said, adding that there are also “bandwidth concerns or severe latency.”

When Gray joined USAID as CIO in 2022, the agency was already well on its way to adopting cloud and had an existing partnership with Google Cloud — one that he credits as key in fostering communication and collaboration across international lines while taking care of the basic security requirements.

“Being able to collaborate across the world in real-time through the document management [tool], the security … is absolutely critical, as well,” Gray said. “And knowing that you are encrypting data in use, in transit, at rest” has been critical “because we’re complying, yes, we’re securing things but also enabling our end users to communicate with the implementing partners in the areas that we operate in.”

USAID was recognized in February by Rep. Gerry Connolly, D-Va., ranking member on the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation, as the only federal agency to receive an A grade on the latest FITARA scorecard.

Google gets authorization to work with top-secret intelligence, defense data
https://fedscoop.com/google-gets-authorization-to-work-with-top-secret-intelligence-defense-data/
Tue, 09 Apr 2024 12:01:00 +0000
Defense and intelligence agencies will now be able to use Google’s air-gapped cloud platform to process top-secret workloads.

LAS VEGAS — Defense and intelligence agencies can now use Google’s air-gapped cloud platform, Google Distributed Cloud Hosted, to process top-secret workloads, the company announced Tuesday at its annual Google Cloud Next tech conference.

With the authorizations, agencies across the Department of Defense and the intelligence community can use Google Distributed Cloud Hosted — an air-gapped private cloud service tailored to workloads that demand maximized security requirements — to support some of their most sensitive data and applications. 

Google also announced that it received authorization to host data and applications at the secret level for intelligence community missions. 

“This authorization underscores Google Public Sector’s commitment to empowering government agencies with secure, cutting-edge technology,” Leigh Palmer, vice president of delivery and operations for Google Public Sector, wrote in a blog post previewed by FedScoop before the announcement. She referenced “personnel records, information around pending cyber threats, geospatial data used for maps, language translation in support of humanitarian efforts, and more” as examples of the types of data the cloud environment can now support.

Not to be confused with Google public cloud offerings, Google Distributed Cloud Hosted was developed to be isolated and doesn’t require connection to the internet or Google Cloud.

In addition to boosted security, the company touts Google Distributed Cloud Hosted’s integrated cloud services, notably Vertex AI, a platform that supports the development of generative AI applications with more than 130 pre-trained AI models and offers access to Gemini, Google’s own multimodal large-language model. 

Lastly, Palmer in the blog points to the openness at the foundation of the platform as a differentiator. “GDC Hosted is designed around Google Cloud’s open cloud strategy and uses leading open source components in its platform and managed services. This openness includes support for managed open source services operated by our partners that are tightly integrated into the platform, providing a seamless user experience across management, billing, and support,” she wrote. 

The announcement comes after Google in December 2022 achieved an IL-5 accreditation to work with DOD’s highly sensitive, mission-critical and national security data workloads. 

Some of Google’s top competitors — Amazon, Microsoft and Oracle — also reached top-secret accreditation in recent years. Notably, the four companies also hold spots on the premier cloud contracts in the DOD and IC, the Joint Warfighting Cloud Capability and Commercial Cloud Enterprise vehicles, respectively. 

Meet FedScoop’s newest technology reporter
https://fedscoop.com/radio/meet-fedscoops-newest-technology-reporter/
Thu, 28 Mar 2024 18:53:30 +0000
Caroline Nihill, Reporter for FedScoop

The FedScoop news team is growing! We recently promoted our editorial fellow Caroline Nihill to a permanent member of the reporting team. You’ve likely seen Caroline’s work since she joined the team last summer, covering developments in AI and shifts in the Technology Modernization Fund, and opening FedScoop’s aperture wider on Capitol Hill.

But now that she’s a full-time member of the team, we wanted to invite her on so Daily Scoop listeners and readers of FedScoop.com could get to know her better. Caroline joins the podcast to discuss how she came to be a reporter with FedScoop, what she’s most passionate about covering at the intersection of the federal government and technology, and what you can come to expect from her coverage going forward.

Watchdog report ties veteran death to scheduling error in VA’s new electronic health record system
https://fedscoop.com/va-ehr-patient-death-ig-report-2024/
Mon, 25 Mar 2024 17:29:58 +0000
The VA's Inspector General has linked the death of a veteran to a scheduling error with the department's new modernized EHR system, built by Oracle Cerner.

A scheduling error within the Department of Veterans Affairs modernized electronic health record system played a role in the 2022 death of a veteran in Ohio, the VA’s Office of the Inspector General charges in a new report.

A patient of the VA Central Ohio Healthcare System in Columbus with a history of behavioral health and substance abuse issues didn’t receive adequate outreach from the hospital system to reschedule a missed appointment due to “a system error in the functioning of the new EHR,” the report details. Just over 40 days later, the patient died an “accidental death” caused by “acute cardiac arrythmia ‘due to (or as a consequence of) acute toxic effect of inhalant.'”

The rollout of the VA’s EHR Modernization program has experienced a litany of issues since its launch in 2020, which led the department in April 2023 to pause the implementation of the new system at additional VA hospitals until it is deemed “highly functioning” and issues at first-adopter locations — like the hospital in Columbus — are resolved. That review is still ongoing.

The new inspector general’s report, released March 21, points to an error in the scheduling function that “resulted in staff’s failure to complete required minimum scheduling efforts following the patient’s missed mental health appointment,” the report says.

On the day of the missed appointment, staff at the facility followed standard operating procedures to call the patient and send a letter to reschedule. However, “staff did not complete the required three telephone calls on separate days,” as is VA policy for patients with mental health concerns, because in the new EHR system, the notification of the missed appointment “routed to a request queue and, as a result, schedulers were not prompted to conduct required rescheduling efforts.”

“The OIG concluded that the lack of contact efforts may have contributed to the patient’s disengagement from mental health treatment and ultimately the patient’s substance use relapse and death,” the watchdog wrote in the report.

However, the inspector general also attributes the outcome to other instances of mismanagement by VA staff, such as the failure to effectively evaluate and address the patient’s treatment needs and a lack of “caring communications” as required by the department’s Caring Communication Program. It also concluded that the hospital’s leadership failed to properly share lessons learned about what went wrong in the case.

This isn’t the only incident in which the VA’s new EHR Modernization program — developed by Oracle Cerner — has been tied to veteran deaths. In March 2023, Sen. Richard Blumenthal, D-Conn., disclosed during a Senate Committee on Veterans’ Affairs hearing six incidents of “catastrophic harm” to veterans, four of which resulted in deaths — one in Spokane, Washington, and the others in Columbus, Ohio. It’s unclear if the patient at the focus of the new OIG report is one of the three from Ohio that Blumenthal referenced.

Meanwhile, the VA inspector general released a pair of similar reports last week involving the Oracle Cerner EHR: one that calls out how “scheduling system limitations have caused additional work and redundancies” that could lead to an increased risk of scheduling errors, and another that highlighted “pharmacy-related patient safety issues nationally” due to a software coding error, as well as data transmission issues “that have affected approximately 250,000 new EHR site patients who received care at a legacy EHR site.”

In the latter case, the OIG concluded: “Affected patients have not been notified of their risk of harm and the OIG remains concerned for their safety.”

AI transparency creates ‘big cultural challenge’ for parts of DHS, AI chief says
https://fedscoop.com/ai-transparency-creates-big-cultural-challenge-for-parts-of-dhs-ai-chief-says/
Wed, 20 Mar 2024 16:25:46 +0000
Transparency around AI may result in issues for DHS elements that are more discreet in their operations and the information they share publicly, CIO Eric Hysen said.

As the Department of Homeland Security ventures deeper into the adoption of artificial intelligence — while doing so in a transparent, responsible way in line with policies laid out by the Biden administration — it’s likely to result in friction for some of the department’s elements that don’t typically operate in such an open manner, according to DHS’s top AI official.

Eric Hysen, CIO and chief AI officer for DHS, said Tuesday at the CrowdStrike Gov Threat Summit that “transparency and responsible use [of AI] is critical to get right,” especially for applications in law enforcement and national security settings where the “permission structure in the public eye, in the public mind” faces a much higher bar.

But that also creates a conundrum for those DHS elements that are more discreet in their operations and the information they share publicly, Hysen acknowledged.

“What’s required to build and maintain trust with the public in our use of AI, in many cases, runs counter to how law enforcement and security agencies generally tend to operate,” he said. “And so I think we have a big cultural challenge in reorienting how we think about privacy, civil rights, transparency as not something that we do but that we tack on” to technology as an afterthought, but instead “something that has to be upfront and throughout every stage of our workplace.”

While President Joe Biden’s AI executive order gave DHS many roles in leading the development of safety and security in the nation’s use of AI applications, internally, Hysen said, the department is focused on “everything from using AI for cybersecurity to keeping fentanyl and other drugs out of the country or assisting our law enforcement officers and investigators in investigating crimes and making sure that we’re doing all of that responsibly, safely and securely.”

Hysen’s comments came a day after DHS on Monday published its first AI roadmap, spelling out the agency’s current use of the technology and its plans for the future. Responsible use of AI is a key part of the roadmap, pointing to policies DHS issued in 2023 promoting transparency and responsibility in the department’s AI adoption and adding that “[a]s new laws and government-wide policies are developed and there are new advances in the field, we will continue to update our internal policies and procedures.”

“There are real risks to using AI in mission spaces that we are involved in. And it’s incumbent on us to take those concerns incredibly seriously and not put out or use new technologies unless we are confident that we are doing everything we can, even more than what would be required by law or regulation, to ensure that it is responsible,” Hysen said, adding that his office worked with DHS’s Privacy Office, the Office for Civil Rights and Civil Liberties and the Office of the General Counsel to develop those 2023 policies.

To support the responsible development and adoption of AI, Hysen said DHS is in the midst of hiring 50 AI technologists to stand up a new DHS AI Corps, which the department announced last month.

“We are still hiring if anyone is interested,” Hysen said, “and we are moving aggressively to expand our skill sets there.”

The post AI transparency creates ‘big cultural challenge’ for parts of DHS, AI chief says appeared first on FedScoop.

NSF CIO on generative AI: ‘It takes a lifetime to build a reputation; it takes a moment to lose it’ https://fedscoop.com/nsf-cio-on-generative-ai-it-takes-a-lifetime-to-build-a-reputation-it-takes-a-moment-to-lose-it/ Wed, 13 Mar 2024 20:02:02 +0000 https://fedscoop.com/?p=76599 NSF has created a policy forbidding the use of "public-facing" generative AI by its staff to review research proposals, CIO Terry Carpenter said.

The post NSF CIO on generative AI: ‘It takes a lifetime to build a reputation; it takes a moment to lose it’ appeared first on FedScoop.

Like many agencies across the federal government, the National Science Foundation is taking a measured and risk-based approach to the adoption of generative AI, its newly appointed CIO said Wednesday.

“It takes a lifetime to build a reputation; it takes a moment to lose it,” Terry Carpenter, who was announced as NSF’s CIO in January, said of generative AI and its risks at the Elastic Public Sector Summit, produced by FedScoop.

The concern, Carpenter elaborated, is that despite the power some of these AI tools hold, if not used responsibly and thoughtfully, they can lead to unintended and harmful outcomes.

“There’s some reality to that fear,” he said. So far the agency’s approach in considering generative AI models has been: “Where can we use this effectively to maintain our high standards and can we assure that the outcome that we get from it is what we intend?”

Carpenter continued: “So when you think about … our reputation of giving money to the right research for the betterment of society, it’s really important to us that we uphold the standards of the merit review process.”

While NSF hasn’t outright forbidden the use of generative AI tools built on large language models across its enterprise, Carpenter said the agency has banned the use of such tools that are “public-facing” — like commercial versions of OpenAI’s ChatGPT or Google’s Gemini — to review research proposals submitted to the agency for funding consideration.

“So when we looked at the realities of what ChatGPT and other large language models could provide, you know, when they’re riding out there on that external data, you have to really think about: Well what data am I giving it? Where is that data? And how sensitive is that data?” Carpenter explained. “So we looked at those kinds of things and we said, ‘You know what, we’re not ready yet.’ So we’re going to not allow the public-facing tool sets to be applied to review proposals.”

That led to the creation of a policy at NSF “that says you cannot use [public-facing generative AI] to review a proposal,” he said.

However, that’s not a blanket ban. Carpenter highlighted that the agency is conducting an internal generative AI pilot: “a chatbot to try to help our partners and customers out there that are trying to seek grant monies to know whether or not they should try to write a proposal, and if they do write a proposal, how can we help them in that process,” he said.

The pilot is in the early stages and hasn’t yet been opened to the public.

The decision to move forward with the chatbot came about after NSF made an open call for ideas, through which it received 79 submissions, according to the CIO. After matching those ideas to key areas in which NSF leadership thinks generative AI is applicable — insight generation for the grant-awarding process, internal process enhancement, and improvement of customer and employee experience — and running them through what Carpenter called “a risk-based decision model,” the agency landed on just the one.

On top of that, while NSF employees are forbidden from using generative AI for reviewing proposals, the proposers themselves are allowed to use the technology in developing their submissions, Carpenter acknowledged. The agency just requires those who do so to disclose how they used the technology in their proposal “to gain learning and transparency in that,” he said.

“I think there’s a lot of learning to go,” Carpenter said. “We have to figure this out. It’s not going away. And I think a lot of the commercial industry and our partners are helping us to think through how do we apply our own internal data, and where can we assure the protection of the proprietary data that we’re given in the proposal process. We have a duty to protect that for the people that propose, and that’s what we’re thinking about.”


White House wants $75M for Technology Modernization Fund in fiscal 2025 — down from $200M in 2024 request https://fedscoop.com/white-house-wants-75m-for-technology-modernization-fund-in-fiscal-2025-down-from-200m-in-2024-request/ Mon, 11 Mar 2024 21:36:26 +0000 https://fedscoop.com/?p=76535 The $75 million request is a significant decrease from prior budget requests: $200 million for fiscal 2024 — a figure that House appropriators have tried to zero out — and $300 million in fiscal 2023.

The post White House wants $75M for Technology Modernization Fund in fiscal 2025 — down from $200M in 2024 request appeared first on FedScoop.

After years of seeing Congress walk back budget requests for the Technology Modernization Fund during the annual appropriations process, the Biden administration on Monday made its latest ask to appropriators of $75 million to support federal tech modernization projects through the TMF.

The $75 million request is a significant decrease from prior budget requests: $200 million for fiscal 2024 — a figure that House appropriators have tried to zero out — and $300 million in fiscal 2023.

In total, the Biden administration requested $75.1 billion for IT spending across civilian agencies in fiscal 2025, a small uptick from the $74.4 billion it asked for in 2024.

“To support IT modernization efforts, the Budget also includes an additional $75 million for the Technology Modernization Fund (TMF), an innovative investment program that gives agencies additional ways to deliver services to the American public quickly,” the budget request reads.

It also highlights the “more than $750 million” the fund has invested in federal IT projects since its inception across 48 investments and 27 agencies (though those numbers are a bit higher as listed on the TMF’s website).

Amid the COVID-19 pandemic, the Biden administration, with Congress’ support, in 2021 issued a $1 billion injection through the American Rescue Plan to drive rapid modernization and close some of the most pressing digital service and cybersecurity gaps across the federal government.

But since then, the General Services Administration and Office of Management and Budget — the two agencies that lead the administration of the fund with its board — have fought an uphill battle to convince lawmakers to meet the Biden administration’s requested levels of funding for the TMF.

Last summer, for instance, the House Appropriations Committee said in a summary of the Financial Services and General Government appropriations bill, which funds the TMF, that it wants to eliminate funding for the program in fiscal 2024 as part of its efforts to “cut wasteful spending” across the federal government. Congress has yet to pass a Financial Services and General Government appropriations bill for fiscal 2024. On the other side of Congress, the Senate Committee on Appropriations approved language in an earlier version of its fiscal 2024 appropriations bill that would rescind $290 million allocated to the TMF through the American Rescue Plan.

The TMF’s hang-ups with Congress have largely centered on its repayment mechanism. When the fund was created in 2017 under the Modernizing Government Technology Act, recipients of TMF money were required by law to repay those funds within five years. But with the issuance of the $1 billion to the TMF in 2021, the Biden administration also created a “flexible” repayment policy for certain high-priority projects.

House lawmakers late last year wrote a bill to reform the TMF that would require agencies to adhere to the original intent of the Modernizing Government Technology Act and would extend the fund through 2030, beyond its original 2025 sunset. Though that bill hasn’t been passed, GSA and OMB went ahead and updated the TMF’s repayment policy to reflect a new “consistent repayment floor with a minimum of 50% repayment,” with “rare exceptions” decided by the GSA administrator and OMB director.

