Microsoft Azure Archives | FedScoop
https://fedscoop.com/tag/microsoft-azure/ — last updated Fri, 03 May 2024

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

CDC’s generative AI pilots include school closure tracking, website updates
https://fedscoop.com/cdc-generative-ai-pilots-school-closure-tracking-website-updates/ — Fri, 05 Apr 2024
The Centers for Disease Control and Prevention is testing out use cases for generative AI and sharing its approach with other federal partners as it plans to develop an agencywide AI strategy.

An artificial intelligence service deployed within the Centers for Disease Control and Prevention is being put to the test for things like modernizing its websites and capturing information on school closures, the agency’s top data official said. 

The tool — Microsoft’s Azure OpenAI service, configured for CDC use within the agency’s cloud infrastructure — offers both a chatbot for employees and the ability for more technical staff to build applications that connect to the service through an application programming interface (API), Alan Sim, CDC’s chief data officer, said in an interview with FedScoop.

“The idea here is that we can allow for our CDC staff to practice innovation and gen AI safely, within CDC boundaries, rather than going out to third-party sites,” Sim said. 

In total, CDC has 15 pilots using the agency’s generative AI capabilities, primarily through the Azure OpenAI service, a spokesperson said.

Exploring generative AI uses comes as the CDC, like agencies throughout the federal government, looks to create its own approach to artificial intelligence. Roughly a year ago, CDC leadership got together to develop an AI roadmap, Sim said, and since then, it’s prioritized goals like working on the chatbot and developing guidance that it’s shared with others in the federal government.

Now, the agency is planning to develop an AI strategy that Sim said he’s hopeful will be released in late spring to early summer. That strategy will aim to set “high-level principles” for how the CDC wants to use AI to support the public, Sim said. 

“We’re still learning, but we’re trying our best to be innovative, responsive, and obviously sharing as we learn with our partners,” he said.

Piloted uses

The CDC’s pilots are varied in terms of application and topic, including HIV, polio containment, communications, analyzing public comments, and survey design. So far, there’s been positive feedback from the pilots that generative AI has “significantly enhanced data analysis, efficiency, and productivity,” Sim said.

In one of the more operational pilots, for example, communications staff is using AI to assist with updates to the CDC’s websites across the agency.

That process tends to be “tedious” and “manual,” Sim said. To help make it easier, the Office of Communications is using an application, built by a data scientist at the agency, that connects to the Azure OpenAI API.

“This has allowed staff to begin summarizing, leveraging … the benefits of generative AI to help speed up the work,” Sim said. 

CDC is also looking to AI for tracking school closures, which it did during the COVID-19 pandemic to watch for potential outbreaks. 

That tracking — which included monitoring thousands of school district websites and various types of closures, from weather to disease outbreaks — was done manually. And although the funding for those efforts stopped in December 2022, Sim said, there’s “a recognition that it’s still important from a public health perspective to keep track of school closure information.” 

As a result, CDC developed an AI prototype to collect information via social media about closures at roughly 45,000 school districts and schools. That prototype is still being evaluated for effectiveness and for whether it’s something that can be scaled, but it’s something CDC is looking into, Sim said.

While the CDC isn’t using agency data with the generative AI service, training against relevant datasets could happen in the future, Sim said. “We haven’t gotten there yet, but that’s part of our roadmap is to sort of mature and learn from these initial pilots, and then just build upon that work,” he said. 

Generative AI guidance

In addition to working toward potential uses, CDC has also developed guidance for generative AI. That document “gets into some of the details” of leveraging generative AI tools responsibly, safely and equitably, Sim said. 

It’s also something the agency is sharing. Sim said CDC presented that guidance at the Chief Artificial Intelligence Officers Council and he’s shared the guidance with “many federal agencies.”

“We are just trying to do our part,” he said. “We are not necessarily experts, but we are sharing the progress that we’ve made.” 

Throughout the federal government, agencies have been creating their own generative AI policies for their employees that detail things like whether third-party tools are prohibited, what information shouldn’t be used in queries, and processes for approving potential uses of the technology. A recent Office of Management and Budget memo further directs agencies to “assess potential beneficial uses” of generative AI and establish safeguards.

CDC declined to share a copy of its guidance.

Even though deploying an AI tool within CDC’s cloud infrastructure provides more security, Sim said there are always concerns. One of the reasons the agency is focused on machine-learning operations is so it can explore and provide guidance on best practices on things like ensuring developers are being transparent, being able to detect “model drift,” and certifying that a model isn’t amplifying bias.

Ultimately, CDC wants to take a proactive approach to AI and machine learning so the agency is prepared for the next outbreak response and to empower state, local, tribal and territorial partners to leverage their data to gain efficiencies where it’s possible, Sim said.

“Any efficiencies that we can gain through these types of innovations, we’re always trying to support and encourage,” Sim said. 

Microsoft makes Azure OpenAI service available in government cloud platform
https://fedscoop.com/openai-service-available-government-cloud/ — Tue, 06 Feb 2024
The service is live on Azure Government Tuesday while the company pursues FedRAMP authorization for high-impact data.

Federal agencies that use Microsoft’s Azure Government service now have access to its Azure OpenAI Service through the cloud platform, permitting use of the tech giant’s AI tools in a more regulated environment.

Candice Ling, senior vice president of Microsoft’s federal government business, announced the launch in a Tuesday blog post, highlighting the data safety measures of the service and its potential uses for productivity and innovation. 

“Azure OpenAI in Azure Government enables agencies with stringent security and compliance requirements to utilize this industry-leading generative AI service at the unclassified level,” Ling’s post said.

The announcement comes as the federal government is increasingly experimenting with and adopting AI technologies. Agencies have reported hundreds of use cases for the technology while also crafting their own internal policies and guidance for use of generative AI tools.

Ling also announced that the company is submitting Azure OpenAI for federal cloud services authorizations that, if approved, would allow higher-impact data to be used with the system. 

Microsoft is submitting the service for authorization for FedRAMP’s “high” baseline, which is reserved for cloud systems using high-impact, sensitive, unclassified data like health care, financial or law enforcement information. It will also submit the system for authorization for the Department of Defense’s Impact Levels 4 and 5, Ling said. Those data classification levels for DOD include controlled unclassified information, non-controlled unclassified information and non-public, unclassified national security system data.

In an interview with FedScoop, a Microsoft executive said the availability of the technology in Azure Government is going to bring government customers capabilities expected from GPT-4 — the fourth version of OpenAI’s large language models — in “a more highly regulated environment.”

The executive said the company received feedback from government customers who were experimenting with smaller models and open source models but wanted to be able to use the technology on more sensitive workloads.

Over 100 agencies have already deployed the technology in the commercial environment, the executive said, “and the majority of those customers are asking for the same capability in Azure Government.” 

Ling underscored data security measures for Azure OpenAI in the blog, calling it “a fundamental aspect” of the service. 

“This includes ensuring that prompts and proprietary data aren’t used to further train the model,” Ling wrote. “While Azure OpenAI Service can use in-house data as allowed by the agency, inputs and outcomes are not made available to Microsoft or others using the service.”

That means embeddings and training data aren’t available to other customers, nor are they used to train other models or used to improve the company’s or third-party services. 

According to Ling’s blog, the technology is already being used for a tool being developed by the National Institutes of Health’s National Library of Medicine. In collaboration with the National Cancer Institute, the agency is working on a large language model-based tool, called TrialGPT, that will match patients with clinical trials.

How cloud modernization helps FERC streamline its regulatory processes
https://fedscoop.com/how-cloud-modernization-helps-ferc-streamline-its-regulatory-processes/ — Mon, 29 Jan 2024
A novel tech-challenge approach helped IT leaders at the Federal Energy Regulatory Commission start the overhaul of its legacy applications and improve customer service.

Upgrading and expanding the nation’s electrical grid isn’t just about meeting America’s growing electrical demands; it’s about powering our economy, strengthening national security, and ensuring a cleaner future for generations to come. But because so much of that infrastructure lies in private hands, orchestrating that effort requires extraordinary coordination.

That is one of the roles of the Federal Energy Regulatory Commission (FERC), an independent agency within the Department of Energy that regulates the interstate transmission of electricity, natural gas, and oil. FERC also reviews and licenses liquified natural gas terminals, hydropower projects, and pipelines.

Ensuring that the companies building and operating power plants, pipelines and transmission lines adhere to safety standards, comply with environmental laws, and abide by market-based pricing guidelines requires an extensive review and approval process. And because FERC relies on approximately 1,570 employees to perform that work, technology plays a critical role in keeping on top of all those entities’ requests.

The challenge: Legacy technology

Michelle Pfeifer, Director of Solutions Delivery and Engineering, FERC.

FERC’s technology systems, however, like those at many federal agencies, have been hard-pressed to keep up with ongoing and emerging demands. Most of those systems used to manage the core applications the agency depends on are more than ten years old and stove-piped, according to Michelle Pfeifer, Director of Solutions Delivery and Engineering.

Among other challenges, the workload management systems used to process and manage filings from regulated entities operate on outdated, customized platforms, leading to inefficiencies in tracking and managing the significant number of filings the agency must handle, said Pfeifer, who joined FERC four years ago. “We have done some updates, but there are a significant number of requests for refresh or modernization that have not been addressed. Additionally, data had to be entered into multiple systems, compounding workload challenges,” she said.

The search for a better solution

FERC’s IT team recognized the solution required more than a technology refresh. So they decided to launch an “application layer modernization program to address pent-up demand from our customers, address the stovepipe nature of multiple applications, and do it more quickly and flexibly through an agile delivery process. And we definitely wanted a cloud-based solution,” she said. “We also were looking at — instead of custom development, which is what we had — going to more of a low-code, no-code solution that gives us more pre-built capability.”

After evaluating a series of vendor demonstrations and completing the acquisition process, FERC’s IT team selected Microsoft’s Power Platform, a set of low-code tools that help create and automate solutions, to modernize the applications.  After conducting an application rationalization review, FERC defined a phased approach to modernize its applications. The first phase, which is complete, developed a Virtual Agenda system that supports the Commission voting process on energy matters.  FERC is now in the second phase, migrating its workload management and hydro project systems.  All the modernized systems operate on Microsoft Azure Government Community Cloud (GCC) environments, according to Pfeifer.

Wholesale improvements 

The first phase of modernization efforts, which went live in August, has already led to improvements for FERC employees, according to Pfeifer.

“The biggest improvement areas were greater integration of the workflows within the new system,” she said. Right away, there was less rekeying of data and fewer manual errors. Another significant improvement was “the automated generation of fields or documents that had previously been done manually,” she explained.

The new layer of automated workflow tracking provides more comprehensive visibility into the status of FERC dockets and reviews, which eventually flow up for final decisions by FERC’s five-member board of commissioners. The new system has replaced and consolidated a separate set of Microsoft SharePoint sites used by the chairman and the commissioners’ staff to track projects in circulation before coming up for Commission decisions.

Externally, as part of future phases, regulated entities will find it easier to submit filings and requests, said Pfeifer. She acknowledged there’s more work to be done to improve FERC’s customers’ overall user experience. However, the cloud-based applications are already improving the agency’s ability to maintain the application and analyze data associated with the Commission proceedings — and puts FERC in a stronger position to leverage AI, said Pfeifer.

Lessons learned

One of the key lessons that helped accelerate FERC’s modernization efforts, according to Pfeifer, was using the acquisition process differently.

“We used some more advanced acquisition techniques — we requested a demo, for instance, as well as did a ‘Tech Challenge’ — which allowed us to see not just a paper document in response to a proposal, but a demo of a solution. That allowed us to work with (different vendors’ teams) to see how they would work together.” The tech challenge also included a tech talent component on top of the demo, “where vendors had to change something (so we could) see how they would go about doing that, what experience they had and what the team was capable of configuring and delivering,” she said.

Another lesson she stressed was the importance of business process mapping and reengineering “so that we could help our customers (define) what they want the processes to do. How do they want the processes to improve? We wanted to model that technically, not model the old processes that they weren’t happy with.”

That would also help the IT team implement the modernization efforts in phases, which was essential to ensuring the transition process went smoothly and minimized disruption to FERC’s mission.

Added benefits

While measuring the impact of modernizing and migrating to cloud services isn’t always straightforward, Pfeifer sees a number of operational benefits.

“Just to keep track of the status of things requires a lot of side spreadsheets and reports that aren’t part of the actual workflow (and will be incorporated into the FERC workload processing). Having a more streamlined workflow process also allows the user base to understand the due dates and ensure they’re meeting them, which once required substantial effort from the program offices to do that within the existing applications,” she explained. 

“The other area that I see a lot of benefit in is consistency in how things are defined, managed and handled across the different offices within FERC,” which, in turn, leads to greater accuracy for decision-making.

Finally, Pfeifer sees these back-end improvements laying the foundation for modernizing the agency’s front-end experience for the regulated entities that rely on FERC, in line with the administration’s executive order on transforming the federal customer experience and service delivery.

“Modernization is hard to achieve because you have to replicate the capabilities of the existing systems — and improve on those capabilities at the same time,” concluded Pfeifer. “That said, sometimes the technical solution is the easier part of the solution.”

This report was produced by Scoop News Group for FedScoop as part of a series on technology innovation in government, underwritten by Microsoft Federal.

Microsoft rolls out generative AI roadmap for government services
https://fedscoop.com/microsoft-rolls-out-generative-ai-roadmap-for-government-services/ — Tue, 31 Oct 2023
Some of the new AI services that Microsoft will roll out in the coming months include: Azure OpenAI generative services for government, classified cloud workloads, intelligent recap of meetings and open-source LLMs in Azure Government.

Microsoft on Tuesday will announce a slew of new cutting-edge artificial intelligence tools and capabilities through its Azure OpenAI Government and Microsoft 365 Government services, including classified cloud workloads and intelligent recap of meetings, as well as generative AI tools like content generation and summarization, code generation, and semantic search using its FedRAMP-approved systems.

“Government customers have signaled a strong, strong demand for the latest AI tools, especially for what we call our [Microsoft 365] co-pilot,” Candice Ling, vice president of Microsoft Federal, told FedScoop before the announcement. 

“By announcing the roadmap, we’re giving the agencies a heads up on how they can be prepared to adopt the capabilities that they want so much,” she added. “At the same time for those who haven’t done so, migrating to the cloud is a key first step to building and also looking at data governance, so that we can fully take advantage of the AI capabilities.”

Some of the key AI services that Microsoft will roll out in the coming months include: Azure OpenAI generative AI services for government, including GPT-3.5 Turbo and GPT-4 models; Azure OpenAI service for classified workloads; Teams Premium with intelligent recap in Microsoft 365 Government; Microsoft 365 Copilot update for government; and Open Source LLMs in Azure Government.

In a blog post shared exclusively with FedScoop that will publish Tuesday, Microsoft noted the higher levels of security and compliance required by government agencies when handling sensitive data. “To enable these agencies to fully realize the potential of AI, over the coming months Microsoft will begin rolling out new AI capabilities and infrastructure solutions across both our Azure commercial and Azure Government environments,” the blog post stated.

The new Azure OpenAI Service in Azure Government will enable the latest generative AI capabilities, including GPT-3.5 Turbo and GPT-4 models, for customers requiring higher levels of compliance and isolation. The product will be available in the first quarter of 2024.

Microsoft this summer will preview Azure OpenAI Services in its “air-gapped classified clouds to select national security customers.” The generative AI platform will be brought to its isolated classified cloud environment, enabling national security leaders and operators to use critical AI capabilities to analyze highly sensitive data anytime and anywhere.

The tech giant’s Teams Premium service with intelligent recap of meetings is expected to roll out to government users during the spring of 2024. Intelligent recap uses AI to help users summarize meeting content and focus on key elements through AI-generated meeting notes and tasks.

“So every agency, their needs are going to be different. But the theme that we’re hearing across the board is how we can transform the way they can deliver services to citizens that could really drive critical outcomes,” Ling told FedScoop. 

Ling added that consumers don’t have to be advanced programmers or data scientists to use the systems. “It’s anyone being able to ask the question about your data and being able to process information quite quickly. So anyone can do that now. And that can transform how the agencies work, right?”

Microsoft 365 Copilot for government is also expected to roll out during the summer of 2024, giving access to a “transformational AI assistant in GCC, bringing generative AI to our comprehensive productivity suite for a host of government users,” according to the blog post.

The Redmond, Washington-based company will announce on Tuesday that it has enabled access to the open-source AI model Llama 2 via the Azure Machine Learning catalog in Azure Government. The company recognizes that “some mission requirements benefit from smaller generative AI models” in addition to its own OpenAI models.

Microsoft’s AI rollout builds upon the June launch of its Azure OpenAI Service for the government to allow federal agencies to use powerful language models to run within the company’s cloud service for U.S. government agencies, Azure Government.

Microsoft in July also received FedRAMP High authorization, giving federal agencies that manage some of the government’s most sensitive data access to powerful language models including ChatGPT.

NASA cautiously tests OpenAI software for summarization and code writing
https://fedscoop.com/nasa-cautiously-tests-openai-software/ — Fri, 04 Aug 2023
Employees looking to evaluate the technology are only invited to join generative AI trials if proposed use cases involve “public, non-sensitive data.”

NASA is cautiously testing OpenAI software with a range of applications in mind, including code-writing assistance and research summarization. Dozens of employees are participating in the effort, which also involves using Microsoft’s Azure cloud system to study the technology in a secure environment, FedScoop has learned. 

The space agency says it’s taking precautions as it looks to examine possible uses for generative artificial intelligence. Employees looking to evaluate the technology are only invited to join NASA’s generative AI trial if their tests involve “public, non-sensitive data,” Edward McLarney, digital transformation lead for Artificial Intelligence and Machine Learning at the agency, told FedScoop.

In June, Microsoft announced a new Azure OpenAI tool designed for the government, which according to the company is more secure than the commercial version of the software. Last week, FedScoop reported that Microsoft’s Azure OpenAI service was approved for use on sensitive government systems. A representative for Microsoft Azure referred questions to NASA in response to a request for comment. OpenAI did not respond to a request for comment by the time of publication.

Experimentation with the technology has just begun, McLarney noted, and “many iterations” of testing, verification, validation, bias mitigation, and safety reviews, among other types of evaluations, are still ahead. 

“NASA workers are assessing usability of the tools, accuracy of the results, completeness of AI-generated outputs, security behavior of the overall cloud services, speed of the models, costs, supportability and more,” McLarney said. “NASA is excited about the potential of generative AI and is also being clear-eyed about its risks and shortcomings.” 

He added: “NASA also uses cloud services from other companies and is interested in testing generative AI capabilities from them. NASA may conduct additional generative AI testing with Google Cloud Platform, Amazon Web Services, or other companies in the future.”

Right now, the space agency plans to study OpenAI’s chat, code-assistance and image-generating capabilities. AI-generated art could help provide “inspiration” for NASA artists, McLarney explained, while the system’s text-generating software could help with writing documents. He pointed to other use cases, too.

FedScoop learned about NASA’s generative AI tests after receiving a list of employees titled “Initial NASA OpenAI on Azure Testers” in response to a public records request. Last month, FedScoop obtained an email — which was sent by NASA’s chief information officer to employees in May — focused on preliminary guidance for using AI tools like ChatGPT. That email noted that some “early adopters” within the agency were preliminarily working with the technology. 

Notably, the space agency’s generative AI testing had not begun when NASA began collecting its fiscal year 2023 AI use case inventory, which was required by a 2020 Trump administration executive order. Still, McLarney noted that as “NASA generative AI testing and nascent use begins, it will be included as appropriate in future AI inventory reporting cycles.”

NASA’s experimentation with OpenAI software is just one part of the agency’s growing focus on AI. Officials also released a Responsible AI plan last September — and artificial intelligence and machine learning remain an element of the agency’s digital transformation efforts.

The agency is one of the first government departments to disclose details of its approach to experimentation with OpenAI. The use of generative AI tools can raise privacy, trust and oversight, and national security concerns, as a Government Accountability Office brief from June highlighted. Relatedly, the Department of Transportation recently deleted a reference to using ChatGPT from its AI use inventory, in response to FedScoop’s reporting.

Microsoft Azure OpenAI service approved for use on sensitive government systems
https://fedscoop.com/azure-openai-approved-for-use-on-sensitive-gov-systems/ — Fri, 28 Jul 2023
The service has received FedRAMP High approval, meaning it can be used in cloud environments that hold sensitive, unclassified data.

Microsoft’s recently launched Azure OpenAI service on Thursday received Federal Risk and Authorization Management Program (FedRAMP) High authorization, giving federal agencies that manage some of the government’s most sensitive data access to powerful language models including ChatGPT, FedScoop has learned.

The authorization will allow government departments’ cloud apps to integrate with and adapt models including GPT-4, GPT-3.5, and DALL-E for specific tasks, including content generation, summarization, semantic search, and natural language-to-code translation.

FedRAMP is a security framework that allows cloud providers to obtain governmentwide authorization for their products. The high authorization permits the use of a product in cloud computing environments that hold some of the government’s most sensitive, unclassified data, such as data held by law enforcement agencies or financial regulators.

Microsoft in early June launched its Azure OpenAI Service for the government to allow federal agencies to use powerful language models to run within the company’s cloud service for U.S. government agencies, Azure Government.

“The FedRAMP High authorization demonstrates our ongoing commitment to ensuring that government agencies have access to the latest AI technologies while maintaining strict security and compliance requirements,” Bill Chappell, CTO for Microsoft’s Strategic Missions and Technologies, told FedScoop in a statement.

“We look forward to empowering federal agencies to transform their mission-critical operations with Azure OpenAI and unlocking new insights with the power of Generative AI,” he added. 

The new FedRAMP authorization comes as Microsoft faces intense scrutiny after hackers based in China breached the email accounts of senior U.S. officials, an operation that utilized a flaw in a Microsoft product and was discovered thanks to a logging feature that costs customers extra. 

Biden administration officials, security researchers and members of Congress have questioned the company’s commitment to security in the aftermath of the hack and why Microsoft is upselling customers for core security features.

Microsoft’s Azure OpenAI service this week also received DoD IL2 Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA).

Notably, Microsoft says all traffic used within the Azure OpenAI service will stay entirely within its global network backbone and will never enter the public internet. The technology giant’s network is one of the largest in the world and made up of more than 250,000 km of lit fiber optic and undersea cable systems.

The tech company added that the Azure OpenAI Service does not connect with Microsoft’s corporate network, and that government agency data is never used to train the OpenAI model.

The Azure OpenAI Service can be accessed using REST APIs, Python SDK, or Microsoft’s web-based interface in the Azure AI Studio, and all Azure Government customers and partners will be able to access all models.
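The article doesn’t show what those REST calls look like, but they follow Azure OpenAI’s deployment-scoped pattern. As a minimal illustrative sketch — the resource name, deployment name and API version below are placeholder assumptions, and the `.openai.azure.us` host reflects Azure Government’s domain convention rather than anything confirmed here — this assembles such a request without sending it:

```python
import json

# Placeholder values -- an agency would substitute its own Azure Government
# resource name, model deployment name, and supported API version.
RESOURCE = "my-agency-resource"   # hypothetical resource name
DEPLOYMENT = "gpt-4"              # name given to the model deployment
API_VERSION = "2024-02-01"        # example API version string


def chat_request(messages):
    """Assemble the URL and JSON body for an Azure OpenAI chat completion.

    Actually sending the request also requires an `api-key` header or an
    Azure AD bearer token; this sketch only builds it, so nothing is sent.
    """
    url = (
        f"https://{RESOURCE}.openai.azure.us/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps({"messages": messages, "temperature": 0.2})
    return url, body


url, body = chat_request(
    [{"role": "user", "content": "Summarize this docket filing in two sentences."}]
)
print(url)
```

The same request shape is what the Python SDK and Azure AI Studio produce under the hood; agencies choosing the raw REST route just supply the deployment name and credentials themselves.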

Microsoft is doubling down on and highlighting the data, privacy, and security protections offered to government customers by encrypting all Azure traffic within a region or between regions using MACsec, which relies on the AES-128 block cipher for encryption.

The post Microsoft Azure OpenAI service approved for use on sensitive government systems appeared first on FedScoop.

Microsoft appoints Candice Ling as head of federal business unit https://fedscoop.com/microsoft-appoints-candice-ling-as-head-of-federal-business-unit/ https://fedscoop.com/microsoft-appoints-candice-ling-as-head-of-federal-business-unit/#respond Tue, 18 Jul 2023 14:12:58 +0000 https://fedscoop.com/?p=70642 The technology executive takes over leadership of Microsoft's federal IT business following the departure of Rick Wagner.

The post Microsoft appoints Candice Ling as head of federal business unit appeared first on FedScoop.

Microsoft on Tuesday will announce the appointment of Candice Ling as senior vice president and head of the technology giant’s federal government business unit, FedScoop has learned.

The executive has over two decades of leadership experience in the tech sector and was previously vice president at Microsoft’s public sector division. In her new role, Ling’s priorities are expected to include using the company’s partnership with tech giant OpenAI to help agencies adopt artificial intelligence tools.

Ling’s appointment follows the departure of Rick Wagner, who last week stepped down as Microsoft Federal president.

In addition, the Redmond, Washington-based software giant named Roger Heinz to lead Microsoft’s communication sales and delivery team amid a slight reshuffle of its Strategic Missions and Technologies team, according to Microsoft job announcements shared with FedScoop.

The leadership reshuffle comes as Microsoft faces intense scrutiny after hackers based in China breached the email accounts of senior U.S. officials, an operation that utilized a flaw in a Microsoft product and was discovered thanks to a logging feature that costs customers extra. Biden administration officials, security researchers and members of Congress have questioned the company’s commitment to security in the aftermath of the hack and why Microsoft is upselling customers for core security features.

“In this new era of government, we are dedicated to and laser-focused on accelerating AI adoption in support of your mission,” Ling is expected to say of her appointment, according to remarks shared with FedScoop. “We are always honored to stand by you, and it is a wonderful privilege for us to lead the charge in the AI revolution together.” 

Ling has been with Microsoft for five years including two years as Microsoft Asia Government lead in Singapore and three years on its federal team in Virginia. Ling previously spent 19 years with Canadian IT consulting company CGI in various leadership roles.

The reshuffle also comes as Microsoft works to expand the services it provides to U.S. government agencies, including through the provision of artificial intelligence-assisted cloud technology.

Last month the technology giant launched its new Azure OpenAI Service for government, which the company says will allow federal agencies to use powerful language models including ChatGPT while adhering to stringent security and compliance standards.

That service is intended to allow government departments to adapt models including GPT-3 and GPT-4 for specific tasks, including content generation, summarization, semantic search, and natural language-to-code translation.

In September 2021, Microsoft combined its U.S. federal business unit with its Azure cloud team to create a new subsidiary as part of a reorganization of the technology giant’s U.S. public sector operation.

Microsoft has a long track record working with government agencies, and for nearly two years was embroiled in a legal dispute with Amazon after winning the Pentagon’s JEDI cloud contract.

Microsoft launches generative AI service for government agencies https://fedscoop.com/microsoft-launches-azure-openai-service-for-government/ https://fedscoop.com/microsoft-launches-azure-openai-service-for-government/#respond Wed, 07 Jun 2023 15:00:00 +0000 https://fedscoop.com/?p=69104 Microsoft's Azure OpenAI Service will allow departments to adapt generative AI models for tasks including content generation and semantic search.

The post Microsoft launches generative AI service for government agencies appeared first on FedScoop.

Microsoft on Wednesday launched its new Azure OpenAI Service for government, which the company says will allow federal agencies to use powerful language models including ChatGPT while adhering to stringent security and compliance standards.

The new service will allow government departments to adapt models including GPT-3 and GPT-4 for specific tasks, including content generation, summarization, semantic search, and natural language-to-code translation.

The language models will run within Microsoft’s cloud service for U.S. government agencies, Azure Government.

“If you’re an Azure Government customer (United States federal, state, and local government or their partners), you now have the opportunity to use the Microsoft Azure OpenAI Service through purpose-built, AI-optimized infrastructure providing access to OpenAI’s advanced generative models,” Bill Chappell, chief technology officer for Strategic Missions and Technologies at Microsoft, said in a blog post shared with FedScoop.

“Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government allowing those users to maintain the stringent security requirements necessary for government cloud operations,” Chappell added.

Notably, Microsoft says all traffic used within the service will stay entirely within its global network backbone and will never enter the public internet. The technology giant’s network is one of the largest in the world and made up of more than 250,000 km of lit fiber optic and undersea cable systems.

The tech company added that the Azure OpenAI Service does not connect with Microsoft’s corporate network, and that government agency data is never used to train the OpenAI model.

The Azure OpenAI Service can be accessed using REST APIs, Python SDK, or Microsoft’s web-based interface in the Azure AI Studio, and all Azure Government customers and partners will be able to access all models.
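The REST access path can be sketched in Python. The resource name, deployment name, API version, and the `.openai.azure.us` host suffix below are illustrative assumptions, not values from the article; agencies would substitute their own Azure Government configuration:

```python
# Sketch: assembling (but not sending) an Azure OpenAI chat-completion
# request over REST. All names below are hypothetical placeholders.
import json
import urllib.request

RESOURCE = "my-agency-openai"    # hypothetical Azure OpenAI resource name
DEPLOYMENT = "gpt-4-deployment"  # hypothetical model deployment name
API_VERSION = "2023-05-15"       # an assumed api-version value


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST request for the chat completions endpoint."""
    url = (
        f"https://{RESOURCE}.openai.azure.us/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    return urllib.request.Request(url, data=body, headers=headers, method="POST")


req = build_request("Summarize this policy memo.", api_key="<redacted>")
print(req.full_url)
```

In practice a team would send the request with `urllib.request.urlopen` (or Microsoft's Python SDK) using a real key; the sketch stops short of the network call.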

Microsoft is doubling down on and highlighting the data, privacy, and security protections offered to government customers by encrypting all Azure traffic within a region or between regions using MACsec, which relies on the AES-128 block cipher for encryption.

ICE awards $341.6M in cloud hosting contracts to Four Points Technology https://fedscoop.com/ice-cloud-hosting-contracts-four-points/ Tue, 16 Aug 2022 15:53:34 +0000 https://fedscoop.com/?p=58146 The awards come after ICE amended the solicitation in response to protests over cloud service provider limitations from Oracle and Mythics.

The post ICE awards $341.6M in cloud hosting contracts to Four Points Technology appeared first on FedScoop.

Immigration and Customs Enforcement awarded Four Points Technology three contracts potentially worth $341.6 million for cloud hosting earlier this month, according to Federal Procurement Data System filings.

ICE has already placed three task orders worth $11.1 million for Amazon Web Services and Microsoft Azure cloud offerings across the three five-year blanket purchase agreements.

Four Points won the awards Aug. 5, after ICE amended the solicitation in response to protests from Oracle and Mythics with the Government Accountability Office over the cloud service provider limitations, as reported by Washington Technology.

The Chantilly, Virginia-based tech company is a service-disabled, veteran-owned small business that touts its partnerships with AWS and Microsoft.

CDC looks to improve internal data sharing with centralized, cloud-based ecosystem https://fedscoop.com/cdc-edav-data-sharing-platform/ Fri, 27 May 2022 18:32:10 +0000 https://fedscoop.com/?p=52950 The Enterprise Data Analytics and Visualization (EDAV) platform lets CDC scientists catalog, analyze and publish findings faster.

The post CDC looks to improve internal data sharing with centralized, cloud-based ecosystem appeared first on FedScoop.

The Centers for Disease Control and Prevention launched a centralized, cloud-based data ecosystem to streamline intra-agency information sharing, according to the deputy director for public health science and surveillance.

Speaking during the National Center for Health Statistics Board of Scientific Counselors meeting Thursday, Dan Jernigan said the Enterprise Data Analytics and Visualization (EDAV) platform lets CDC scientists catalog, analyze and publish findings faster.

The COVID-19 pandemic revealed the CDC has a data-sharing problem due to its many one-off, proprietary systems tracking individual diseases, but EDAV allows for reuse of forecasting solutions.

“This is an important way that we are trying to have systems that are not unique to the pathogen but are pathogen and program agnostic,” Jernigan said.

Next, the agency’s Data Modernization Initiative (DMI) will connect EDAV core services with its Microsoft Azure cloud environment to rearchitect siloed systems like ArboNET, used to share arbovirus case information.

While Azure is the primary EDAV environment, the CDC sees itself as a multi-cloud environment, and future building blocks and applications will be cloud agnostic “as much as possible,” Jernigan said. The agency brought in architects familiar with different environments and is working with other cloud providers to develop tools that work in multiple environments for the public health benefits.

Amazon Web Services is used for the National Syndromic Surveillance Program, and Amazon provides the platform for the CDC’s biggest data-sharing intermediary, the AIMS Platform used by the Association of Public Health Laboratories.

The DMI is also reimagining data flow into the CDC through a consortium, including the Office of the National Coordinator for Health Information Technology (ONC), developing a North Star Architecture. The future-state public health ecosystem will ensure federal, state and local health department information systems are connected and interoperable.

ONC is developing new data standards, in accordance with 21st Century Cures Act requirements, that the North Star Architecture will use to decrease the reporting burden on health care providers and hospitals, eliminate the need for phone calls, and improve national disease forecasting and mitigations.

The CDC further established a Consortium for Data Modernization, with public health partners and industry associations, that meets biweekly to identify issues and decide who will address them. The agency will also reestablish the Data and Surveillance Workgroup under the Advisory Committee for the Director this summer.

Lastly, the CDC is holding listening sessions with potential private sector partners on the development of prototypes that will further the DMI.

“We don’t have all the funding that we need to do it,” Jernigan said. “But we are going to be targeting that funding to get critical efforts underway.”

The CDC is budgeting for the DMI based on five priorities: building the right foundation, accelerating data to action, workforce, partnerships, and change management.

Building the right foundation involves getting data from the appropriate sources. For instance, the National Center for Health Statistics (NCHS) is part of a $200 million grant that will fund states’ standardization of vital statistics, immunization, and laboratory case reporting data from electronic health records and other sources.

From there the CDC will ensure there’s a secure, accessible cloud environment for the data to land and that there are tools available to state and local health departments to analyze the information available.

“We want to be able to have rapid outbreak responses and develop common operating pictures,” Jernigan said.

The DMI is integrating data from nontraditional sources, and the CDC received $3 billion through the American Rescue Plan Act for a five-year program grant that will help hire data scientists and other personnel.

Each of the five DMI priorities has an implementation team associated with it that is standing up communities of practice for developing definitions, identifying barriers and risks, and setting objectives and desired results.

“Our ultimate goal is to move from siloed and brittle public health data systems to connected, resilient, adaptable and sustainable. And that sustainable piece is going to be important as we move forward, thinking about how we’re going to keep these efforts going — response-ready systems that can help us solve problems before they happen and reduce the harm caused by the problems that do happen,” Jernigan said. “So essentially better, faster, actionable intelligence for decision-making at all levels of public health.”
