machine learning Archives | FedScoop https://fedscoop.com/tag/machine-learning/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

How the State Department used AI and machine learning to revolutionize records management https://fedscoop.com/how-the-state-department-used-ai-and-machine-learning-to-revolutionize-records-management/ Thu, 16 May 2024 19:34:00 +0000 https://fedscoop.com/?p=77770 A pilot approach helped the State Department streamline the document declassification process and improve the customer experience for FOIA requesters.

In the digital age, government agencies are grappling with unprecedented volumes of data, presenting challenges in effectively managing, accessing and declassifying information.

The State Department is no exception. According to Eric Stein, deputy assistant secretary for the Office of Global Information Services, the department’s eRecords archive system currently contains more than 4 billion artifacts, including emails and cable traffic. “The latter is how we communicate to and from our embassies overseas,” Stein said.

Over time, however, department officials must determine what can be released to the public and what stays classified — a time-consuming and labor-intensive process.

Eric Stein, Deputy Assistant Secretary, Office of Global Information Services, U.S. Department of State

The State Department has turned to cutting-edge technologies like artificial intelligence (AI) and machine learning (ML) to find a more efficient solution. Through three pilot projects, the department has successfully streamlined the document review process for declassification and improved the customer experience when it comes to FOIA (Freedom of Information Act) requests.

An ML-driven declassification effort

At the root of the challenge is Executive Order 13526, which requires that classified records of permanent historical value be automatically declassified after 25 years unless a review determines an exemption. For the State Department, cables are among the most historically significant records produced by the agency. However, current processes and resource levels will not work for reviewing electronic records, including classified emails, created in the early 2000s and beyond, jeopardizing declassification reviews starting in 2025.

Recognizing the need for a more efficient process, the department embarked on a declassification review pilot using ML in October 2022. Stein came up with the pilot idea after participating in an AI Federal Leadership Program supported by major cloud providers, including Microsoft.

For the pilot, the department used cables from 1997 and created a review model based on human decisions from 2020 and 2021 concerning cables marked as confidential and secret in 1995 and 1996. The model uses discriminative AI to score and sort cables into three categories: those it was confident should be declassified, those it was confident shouldn’t be declassified, and those that needed manual review.
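
To make the three-bucket approach concrete, here is a minimal sketch of confidence-based triage with a generic text classifier. The model choice, features, thresholds and training data below are illustrative assumptions, not details of the State Department’s actual system.

```python
# A minimal sketch of confidence-based triage, assuming a generic text
# classifier; thresholds, features and training data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for prior human review decisions
# (1 = reviewers released the cable, 0 = reviewers kept it classified).
train_texts = [
    "routine administrative cable on embassy staffing schedules",
    "cable discussing sensitive negotiations with a foreign ministry",
]
train_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

RELEASE_THRESHOLD = 0.95   # confident the cable can be declassified
WITHHOLD_THRESHOLD = 0.05  # confident it should stay classified

def triage(cable_text: str) -> str:
    """Sort a cable into one of the three review buckets."""
    p_release = model.predict_proba([cable_text])[0][1]
    if p_release >= RELEASE_THRESHOLD:
        return "recommend declassification"
    if p_release <= WITHHOLD_THRESHOLD:
        return "recommend continued classification"
    return "route to human reviewer"

print(triage("cable on embassy staffing and administrative schedules"))
```

Cables the model cannot place confidently in either bucket fall through to the manual-review queue, which is where the article says human reviewers now focus their time.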

According to Stein, for the 1997 pilot group of more than 78,000 cables, the model performed the same as human reviewers 97% to 99% of the time and reduced staff hours by at least 60%.

“We project [this technology] will lead to millions of dollars in cost avoidance over the next several years because instead of asking for more money for human resources or different tools to help with this, we can use this technology,” Stein explained. “And then we can focus our human resources on the higher-level and analytical thinking and some of the tougher decisions, as opposed to what was a very manual process.”

Turning attention to FOIA

Building on the success of the declassification initiative, the State Department embarked on two other pilots to enhance the Freedom of Information Act (FOIA) processes from June 2023 to February 2024.

Like cable declassification efforts, handling a FOIA request is a highly manual process. According to Stein, sometimes those requests are a single sentence; others are multiple pages. But no matter the length, a staff member must acknowledge the request, advise whether the department will proceed with it, and then manually search for terms in those requests in different databases to locate the relevant information.

Using the lessons learned from the declassification pilot, Stein said State Department staff realized there was an opportunity to streamline certain parts of the FOIA process by simultaneously searching what was already in the department’s public reading room and in the record holdings.

“If that information is already publicly available, we can let the requester know right away,” Stein said. “And if not, if there are similar searches and reviews that have already been conducted by the agency, we can leverage those existing searches, which would result in a significant savings of staff hours and response time.”
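
As a rough illustration of that routing idea, the self-contained sketch below checks a request against already-released documents before falling back to internal holdings. The corpora, keyword-overlap scoring and threshold are toy assumptions, not the department’s actual search stack.

```python
# A self-contained toy sketch: check a FOIA request against already-released
# documents before falling back to internal holdings. Corpora, scoring and
# the threshold are invented for illustration.
def keyword_overlap(query: str, document: str) -> float:
    """Crude relevance score: fraction of query terms found in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def route_request(request_text, public_reading_room, internal_holdings, threshold=0.6):
    public_hits = [doc for doc in public_reading_room
                   if keyword_overlap(request_text, doc) >= threshold]
    if public_hits:
        return ("already_public", public_hits)       # point the requester to released records
    internal_hits = [doc for doc in internal_holdings
                     if keyword_overlap(request_text, doc) >= threshold]
    if internal_hits:
        return ("found_in_holdings", internal_hits)  # reuse an existing search of the archive
    return ("new_search_required", [])

public_docs = ["2019 cable on consular staffing released under FOIA"]
internal_docs = ["2021 internal memo on consular staffing levels"]
print(route_request("consular staffing cable", public_docs, internal_docs))
```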

Beyond internal operations, the State Department also sought to improve the customer experience for FOIA requesters by modernizing its public-facing website and search functionalities. Using AI-driven search algorithms and automated request processing, the department aims to “find and direct a customer to existing released documents” and “automate customer engagement early in the request process.”

Lessons learned

Since launching the first pilot in 2022, team members have learned several things. The first is to start small and provide the space and time to become familiar with the technology. “There are always demands and more work to be done, but to have the time to focus and learn is important,” Stein said.

Another lesson is the importance of collaboration. “It’s been helpful to talk across different communities to not only understand how this technology is beneficial but also what concerns are popping up—and discussing those sooner than later,” he said. “The sooner that anyone can start spending some time thinking about AI and machine learning critically, the better.”

Another lesson is to recognize the need to “continuously train a model because you can’t just do this once and then let it go. You have to constantly be reviewing how we’re training the model (in light of) world events and different things,” he said.

These pilots have also shown how this technology will allow State Department staff to better respond to other needs, including FOIA requests. For example, a requester may describe a record one way even though it is referred to differently within the department.

“This technology allows us to say, ‘Well, they asked for this, but they may have also meant that,’” Stein said. “So, it allows us to make those connections, which may have been missing in the past.”

The State Department’s strategic adoption of AI and ML technologies in records management and transparency initiatives underscores the transformative potential of these tools. By starting small, fostering collaboration and prioritizing user-centric design, the department has paved the way for broader applications of AI and ML to support more efficient and transparent government operations.

The report was produced by Scoop News Group for FedScoop, as part of a series on innovation in government, underwritten by Microsoft Federal.  To learn more about AI for government from Microsoft, sign up here to receive news and updates on how advanced AI can empower your organization.

Leading AI-focused congressman on legislative prospects for the tech and the risks it presents https://fedscoop.com/don-beyer-ai-legislation-congress-2024/ Mon, 22 Jan 2024 21:03:09 +0000 https://fedscoop.com/?p=75673 Rep. Don Beyer, vice chair of the Congressional AI Caucus and the New Democrat Coalition’s working group on AI, is optimistic that legislation on the technology could succeed in 2024.

Rep. Don Beyer is one of the leading voices on artificial intelligence in the House. The Virginia Democrat is vice chair of both the bipartisan Congressional AI Caucus and the working group on the technology established by the New Democrat Coalition, the party’s largest caucus in the lower chamber. He has proposed legislation meant to rein in the technology, including, most recently, a plan to ensure federal agencies and vendors follow the AI risk management framework created by the National Institute of Standards and Technology.

Oh, and in the congressman’s spare time, he’s getting a master’s degree in machine learning, too. 

In a recent interview with FedScoop, Beyer said federal AI legislation could finally be signed by President Joe Biden this year. Of course, there are real reasons to be skeptical. Major legislation focused on the technology hasn’t been finalized yet — and House Speaker Mike Johnson, R-La., hasn’t personally told Beyer that he’s interested in making that goal happen. Still, Beyer says legislative ideas on the table do have traction. 

“It will be bipartisan. It will be supported by leadership. And I think it’s important. It’ll be an extraordinary contrast with the laissez-faire approach we’ve taken with social media for the last 24 years,” Beyer said. “We did virtually nothing and are suffering the consequences. Here’s a time where we’re trying to be responsible and get ahead of the curve.” 

In a wide-ranging conversation, Beyer outlined the House’s AI plans for this year, funding for NIST, potential existential risks created by the technology, where Congress might have a role, and why he’s optimistic. 

Editor’s note: The transcript has been edited for clarity and length.

FedScoop:  I know you’re really focused on AI — and I want to ask about the AI legislation you’ve been working on — but maybe to start: how is your AI master’s program going?

Rep. Don Beyer: It’s going well. I’ve got Monday [and] Wednesday classes [and] Thursday morning lab. My whole team is upset that labs are Thursday morning at 9:30 because it interferes with hearings. But the coursework is very fun. This semester is object-oriented programming. Don’t ask me what that means. You can ask me in a couple of months.

FS: I’ll come back with a follow-up on that one. Let’s start by talking about the Federal Artificial Intelligence Risk Management Act, which you proposed earlier this month, alongside Reps. Ted Lieu, D-Calif., Zach Nunn, R-Iowa, and Marcus Molinaro, R-N.Y. Why did you propose it? 

DB: This actually is an idea that we stumbled across maybe nine months ago. The simple notion [is] that to try to impose new standards on the entire private sector would be very difficult and would take a long time. But we had an easy trigger in how all the federal contracting work is done. We talked about it for months, and then it showed up in the president’s executive order: for federal contracts that involve AI, the AI has to follow the NIST risk management framework.

Then we decided: the executive order can be reversed at any time by the next president, so we should put this in legislation. Cheerfully, it’s very bipartisan. … It just requires all government agencies to follow the NIST risk management framework. Hopefully, what that will do is not only make sure that the government’s using AI well, but it will be a signal to the private sector that this is a responsible way to go.

FS: I know the Office of Management and Budget has to finalize its own guidelines for federal agencies working with AI. How do you see that interacting with some of the other rules for AI and federal agencies?

DB: I think most people still agree that NIST is the gold standard. … What we hope is that there will be a convergence around the set of standards that really works. Because NIST for more than a century has been the official caretaker of how long an inch is and how long a second takes and how much a gram weighs, all that stuff. They are probably the best people, we think, to determine what the standard should actually be. 

Now one of the challenges is they don’t have a big budget for it. There’s only two-and-a-half people assigned to it. So among our responsibilities will be to make sure that they have the intellectual and labor resources to keep it up to date and improve and evolve and learn from everyone else.

FS: Given that there’s a pretty complicated supply chain for the creation of an AI system, does that present potential challenges for implementing this with companies that might be building these tools?

DB: Yes, it does, but because that’s also the way the real world works, it’s good that we address it sooner rather than later. … I can tell you this now as a computer science student — one of the interesting ideas in computer science is something called inheritance, that you don’t have to recreate a whole set of code if it already exists. You can inherit the class structure, the code structure, files, all that from previous stuff. You’re going to have inheritance everywhere in the industry, but best to realize that and get on top of it early on. 
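
For readers unfamiliar with the term, a minimal Python example of the inheritance idea Beyer is describing follows; the class names are invented purely for illustration.

```python
# A minimal, invented illustration of inheritance: the subclass reuses the
# parent's code instead of recreating it.
class RiskAssessment:
    def __init__(self, system_name: str):
        self.system_name = system_name

    def summary(self) -> str:
        return f"Risk assessment for {self.system_name}"

class AIRiskAssessment(RiskAssessment):
    # Inherits __init__ and summary() unchanged; adds only what differs.
    def framework(self) -> str:
        return "NIST AI Risk Management Framework"

report = AIRiskAssessment("benefits-eligibility model")
print(report.summary())    # behavior reused from the parent class
print(report.framework())  # behavior defined only on the subclass
```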

FS: Where does the idea of setting up an entirely new agency to regulate AI stand? 

DB: I’m going to give you an ambivalent answer. It makes more sense on an international level, in that maybe through the United Nations or something like the World Trade Organization or the World Health Organization, ultimately we need to coordinate this among more than 200 different countries, including major players like China, India, us, the U.K., Russia. … You’re not gonna be able to deal with that one at a time. 

On the other hand, I’m skeptical about doing another federal agency to do it. I think NASA’s need for AI and AI oversight is going to be very different from what the Department of Defense needs, which will be very different from what Fish and Wildlife needs within the Department of Interior. I’m also reluctant to bless the creation of yet another federal bureaucracy. … All the agencies already have been studying this and trying to get ready for it. So I’m perfectly content to let the Department of Defense within the NIST AI framework try to manage its own vendor relations.

FS: Do you think that risk management framework is sufficient for thinking about civil rights and AI, bias, trust and safety issues and things and leaving it to agencies to apply that? I can imagine a critic saying this is not rigorous enough. 

DB: I do want to start that way. Obviously, some agencies will do it better than others, based on the individuals they put in the leadership position or what the secretary or directors committed to. I’d rather have 20 or 30 different efforts out there. Some of them thrive, some of them will fail. Then we will apply the lessons learned. 

FS: Going into 2024, what are the priorities for the Congressional AI Caucus right now? 

DB: I’m just a humble vice chair. But the conversations that I’ve had with [California Democrat] Anna Eshoo and [California’s] Jay Obernolte, who is the other vice chair on the Republican side, are all around picking out a handful of the 100 bills that have already been introduced. … The number one priority would be, if we could get three to five AI bills signed by President Biden this year, that really creates an excellent platform for us to build on in the years to come as we get more real-life experience with AI. 

FS: You’re working on the AI working group within the New Democrat Coalition. How would you parse the difference in perspective that the New Democratic Coalition has on this, versus maybe other members of the Democratic party who are not members of the New Democrat Coalition, including some of the more progressive or farther left members?

DB: I can’t give you the Progressive Caucus insight because it hasn’t come up there hardly at all. I am a member of both the New Dems and the Progressive Caucus. In terms of your thoughtful question about the differences between the New Dems’ approach and the bipartisan Congressional AI Caucus, I don’t see hardly any difference. Maybe the difference would be in ambition. 

FS: Do you have your eye on the question of how generative AI tools could sort of be deployed in maybe nefarious ways during the upcoming elections? How worried are you?

DB: I think we’re all worried about that and all expecting it to happen. We’re certainly seeing it happen in other countries already. And we will see. It’s possible that people will use it and it backfires on them. … Because it’s easy enough to create the horror scenarios where I’m standing there with — who’s the worst bad guy —  the leader of Hamas, having dinner talking about our children or something terrible. What deepfakes could accomplish.

On the other hand, one of our objectives is to educate people enough to be skeptical about anything like that. And, already, some private companies have policies to disclose ads. That is not gonna be a big step up. 

FS: With AI right now, what seems most exciting to you and what scares you the most about this technology? 

DB: The most exciting part by far are the science applications, specifically the medical applications. … I had dinner last week … with the AI scientists who developed AlphaFold. … They know how every protein ever discovered is folded, to 80% accuracy. That’s enough to be able to really, really stimulate drug development, make things go 1,000 times faster.

… The short-term biggest downside by far is gonna be job elimination. As we’ve seen in industry after industry over the years. You go back 100 years, 150 years, most of us were in agriculture, now it’s 1%. We’re gonna have the same kinds of things. Coping with the dislocations will be a great public policy challenge and cultural challenge. The long-term [downside] is still trying to dig deep enough to figure out how real are the existential risks. 

FS: I was gonna ask that.

DB: I don’t know. I’m trying to learn as much as I can about them — and only because I think it’s really irresponsible not to learn as much as we can about the existential risk. Many in the industry say, Blah. That’s not real. We’re very far from artificial general intelligence. … Or we can always unplug it.

But I don’t want to be calmed down by people who don’t take the risk seriously. A lot of people still don’t think climate change is a serious risk. I’m always annoyed by the people who don’t take 1,600 nuclear weapons that we have aimed at other people or vice versa [seriously]. … A lot of people don’t think about that at all, but that could be the end of humanity. 

AI algorithms could be used to better forecast natural disasters, GAO report says https://fedscoop.com/machine-learning-forecast-natural-disasters-gao/ Sat, 16 Dec 2023 00:04:04 +0000 https://fedscoop.com/?p=75280 The GAO found that AI machine learning models could significantly improve warning time and preparedness for severe storms and natural disasters.

Artificial intelligence-driven algorithms can be used to improve forecast models for natural disasters, saving lives and protecting property by rapidly analyzing massive data sets and identifying relevant patterns, a top government watchdog said in a report released Thursday.

Natural disasters result in hundreds of U.S. deaths and billions of dollars in damage annually, and machine learning AI tools could automate processes and glean new insights into weather patterns to improve warning time and preparedness during those events, the Government Accountability Office found.

“GAO found that machine learning, a type of artificial intelligence (AI) that uses algorithms to identify patterns in information, is being applied to forecasting models for natural hazards such as severe storms, hurricanes, floods, and wildfires, which can lead to natural disasters,” the GAO stated. 

“A few machine learning models are used operationally — in routine forecasting — such as one that may improve the warning time for severe storms. Some uses of machine learning are considered close to operational, while others require years of development and testing.”

GAO conducted the study by reviewing the use of machine learning to model severe storms, hurricanes, floods and wildfires, and by interviewing stakeholders from government, industry, academia and professional organizations. The watchdog also reviewed key reports and scientific literature on the subject.

The GAO study found that applying machine learning to natural disaster detection could reduce the time required to make costly forecasts and increase model accuracy by more fully exploiting available data, using data that traditional models cannot, creating synthetic data to fill gaps, and reducing uncertainty in the forecasting models.

The GAO study also found challenges with the use of machine learning and AI, including: data limitations that hinder ML model training and result in lower accuracy in some regions, especially rural areas; concerns about bias and general distrust and misunderstanding of algorithms; the cost of developing and running ML models; and a lack of understanding of the data being modeled.

GAO highlighted five policy options that could mitigate those challenges: work toward better data collection, sharing, and use; create more education and training options; target hiring and retention hurdles and specific resource shortfalls; take steps to account for bias and build trust in data and ML models; and maintain current efforts.

DHS seeks information for CISA analytics and machine learning project https://fedscoop.com/dhs-cisa-machine-learning-analytics-rfi/ Tue, 05 Dec 2023 22:04:25 +0000 https://fedscoop.com/?p=75129 The agency’s Office of Mission and Capability Support aims to better understand the “capabilities of businesses that could supply access to” the three commercial cloud providers that support the CAP-M project.

The Department of Homeland Security is seeking cloud-related information to support an analytics and machine learning research and development project that’s in the works for the Cybersecurity and Infrastructure Security Agency.

The CISA Advanced Analytics Platform for Machine Learning (CAP-M) project, which is being developed by DHS’s Science and Technology Directorate for CISA, is “envisioned to be a multicloud, multi-tenant environment for testing new software and tools, and developing complex machine learning capabilities,” per DHS’s request for information, posted Tuesday.

CAP-M, which will enable end users to “seamlessly access a variety of capabilities across multiple clouds to meet their analytics and computation needs,” leverages a multi-cloud approach, with pre-existing commercial cloud services provided by Amazon Web Services, Google Cloud Platform and Microsoft Azure.

DHS’s Office of Mission and Capability Support seeks to understand the “capabilities of businesses that could supply access to” those three cloud providers, the RFI states. Additionally, DHS aims to conduct market research for a possible CAP-M cloud services contract vehicle. 

In a fact sheet published in April, CISA said CAP-M’s artificial intelligence and machine learning capabilities would ideally provide the agency with improved situational awareness and decision-making tools for cyber and infrastructure security missions, in addition to better preparedness in the face of evolving threats. 

The deadline for responses to the RFI is Jan. 5, 2024.

FOIA.gov unveils new search tool https://fedscoop.com/foia-gov-search-tool/ Wed, 25 Oct 2023 19:47:03 +0000 https://fedscoop.com/?p=73799 The public can now search website for publicly available documents via a tool that is powered by a combination of machine learning and logic.

A search tool was added Wednesday to the federal government’s website for Freedom of Information Act requests, a move intended to ease public efforts in finding commonly requested information.

The update to FOIA.gov — one of the most notable upgrades to the site since the 2018 release of the National FOIA Portal — checks an important box for the Department of Justice in its ongoing efforts to meet pledges in the Fifth U.S. Open Government National Action Plan.

Per a news release from the DOJ’s Office of Information Policy, the search functionality on FOIA.gov allows users to quickly and easily find publicly available information and connects them to the proper federal agency depending on the request.

OIP noted that the website’s search tool is organized by the six most common topics of FOIA requests: immigration or travel records; tax records; social security records; medical records; personnel records; and military records. Users can enter their own search terms or begin their searches by navigating to one of those six topics and selecting answers from a series of questions.

The search tool is powered by a combination of machine learning and logic, pointing users toward relevant documents that are public or guidance on where to request specific information. 
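
One plausible reading of “machine learning and logic” is a small classifier that guesses the topic of a free-text query combined with fixed rules that map each topic to guidance; the toy sketch below shows that shape. The training examples, topic labels and agency mapping are illustrative assumptions, not FOIA.gov’s actual data or rules.

```python
# A toy "machine learning plus logic" sketch: a small classifier guesses the
# topic of a free-text query, then fixed rules map the topic to guidance.
# Training examples, topics and the agency mapping are illustrative
# assumptions, not FOIA.gov's actual data or rules.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_queries = [
    "copy of my visa application", "green card case status",
    "tax return transcript", "prior year tax records",
    "military service records", "discharge paperwork DD-214",
]
training_topics = ["immigration", "immigration", "tax", "tax", "military", "military"]

topic_model = make_pipeline(TfidfVectorizer(), MultinomialNB())
topic_model.fit(training_queries, training_topics)

# The "logic" half: deterministic routing once a topic is predicted.
TOPIC_TO_GUIDANCE = {
    "immigration": "U.S. Citizenship and Immigration Services",
    "tax": "Internal Revenue Service",
    "military": "National Personnel Records Center",
}

def route_query(query: str) -> str:
    topic = topic_model.predict([query])[0]
    return f"Predicted topic: {topic}; start with: {TOPIC_TO_GUIDANCE[topic]}"

print(route_query("I need my discharge and deployment records"))
```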

Previous FedScoop reporting found that several federal agencies were behind the eight ball on achieving interoperability with FOIA.gov, with some battling ongoing technical and logistical challenges.

OIP said in its release that launching the new search tool is part of “phase one of a multi-phase project” in DOJ’s push to make the FOIA request process “more efficient and user-friendly.”

DHS names Eric Hysen chief AI officer, announces new policies for AI acquisition and facial recognition  https://fedscoop.com/dhs-names-eric-hysen-chief-ai-officer-announces-new-policies-for-ai-acquisition-and-facial-recognition/ Fri, 15 Sep 2023 18:35:20 +0000 https://fedscoop.com/?p=72952 The new policies focus on responsible acquisition and use of AI and machine learning, and governance of facial recognition applications.

The Department of Homeland Security on Thursday released new policies regarding the acquisition and use of artificial intelligence and named its first chief AI officer to help champion the department’s responsible adoption of AI.

In a release, DHS Secretary Alejandro Mayorkas announced the directives — one to guide the acquisition and use of AI and machine learning, and another to govern facial recognition applications — and named department CIO Eric Hysen as chief AI officer.

The new policies were developed by DHS’s Artificial Intelligence Task Force (AITF), which was created in April 2023.

The news comes after the Government Accountability Office released a report earlier this month outlining DHS’s lack of policies and training for law enforcement personnel on facial recognition technology.

“Artificial intelligence is a powerful tool we must harness effectively and responsibly,” DHS Secretary Alejandro Mayorkas said in a statement. “Our Department must continue to keep pace with this rapidly evolving technology, and do so in a way that is transparent and respectful of the privacy, civil rights, and civil liberties of everyone we serve.”

The release explains that DHS already uses AI in several ways, “including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. These new policies establish key principles for the responsible use of AI and specify how DHS will ensure that its use of face recognition and face capture technologies is subject to extensive testing and oversight.”

As DHS’s newly appointed chief AI officer, Hysen will work to promote innovation and safety in the department’s uses of AI and advise Mayorkas and other DHS leadership.

 “Artificial intelligence provides the department with new ways to carry out our mission to secure the homeland,” Hysen said in a statement. “The policies we are announcing today will ensure that the Department’s use of AI is free from discrimination and in full compliance with the law, ensuring that we retain the public’s trust.”

During the past two years of the Biden administration, multiple prominent civil rights groups have harshly criticized DHS’s approach to facial recognition, particularly its contracts with the controversial tech company Clearview AI, which continues to work with the agency.

“DHS claims this technology is for our public safety, but we know the use of AI technology by DHS, including ICE, increases the tools at their disposal to surveil and criminalize immigrants at a new level,” Paromita Shah, executive director of Just Futures Law, a legal nonprofit focused on immigrants and criminal justice issues, said in a statement on the new policies. 

“We remain skeptical that DHS will be able to follow basic civil rights standards and transparency measures, given their troubling record with existing technologies. The infiltration of AI into the law enforcement sector will ultimately impact immigrant communities,” Shah added. 

US Patent Office eyes using AI to improve ‘prior art’ searches https://fedscoop.com/patent-office-eyes-ai-prior-art-searches/ Tue, 29 Aug 2023 19:27:58 +0000 https://fedscoop.com/?p=72370 USPTO believes adding advanced AI technologies “offers unique opportunities to leapfrog forward to further enhance patent search capabilities and further strengthen the patent system.”

The U.S. Patent and Trademark Office is exploring the idea of using artificial intelligence to improve searches for “prior art” during the patent process, according to a public solicitation.

Prior art searches, which collect public information that’s used to assess the novelty of an invention, are an important part of the patent examination process and the USPTO’s mission “to issue reliable patent rights,” the agency said in a document that is part of a recent request for information posted on SAM.gov.

“However, the exponential growth of prior art and tremendous pace of technological innovation make it increasingly more difficult to quickly discover the most relevant prior art,” the agency said.

The solicitation specifically seeks information on solutions that would leverage technologies like AI and machine learning to “expand, rank and sort the results of existing patent search systems so that prior art that might have otherwise not been present in or near the top of a list of search results is made readily available to examiners.” 
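
As a hedged sketch of what re-ranking existing results can look like, the example below reorders a candidate list by textual similarity to the application, using TF-IDF cosine similarity as a stand-in for whatever learned ranking model the agency might actually adopt.

```python
# A hedged sketch of re-ranking an existing result list by textual similarity
# to the application, so relevant prior art buried deep in the results rises
# toward the top. TF-IDF cosine similarity stands in for a learned ranker.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_prior_art(application_text, candidate_documents):
    """Return candidates sorted by similarity to the application, best first."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([application_text] + candidate_documents)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(candidate_documents, scores), key=lambda pair: pair[1], reverse=True)

# Toy usage: a document ranked last by the original search moves up.
application = "battery electrode coating using graphene oxide slurry"
results_from_existing_search = [
    "method of coating a metal roof",
    "lithium battery separator film",
    "graphene oxide slurry for electrode coating",
]
for doc, score in rerank_prior_art(application, results_from_existing_search):
    print(f"{score:.2f}  {doc}")
```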

The request for information comes as interest in the nascent technology has boomed with the popularity of tools like ChatGPT. The solicitation is the latest example of that interest extending to the federal government and comes as Congress and the White House pursue guardrails for AI.

According to the solicitation, USPTO has already been developing and implementing AI, including capabilities for AI-based patent searches and “a roadmap for future development.” But the agency said it recognizes adding more advanced technology “offers unique opportunities to leapfrog forward to further enhance patent search capabilities and further strengthen the patent system.”

The agency is seeking responses by Sept. 11. USPTO and the Department of Commerce didn’t immediately respond to requests for comment.

Energy announces $16M for AI nuclear physics research https://fedscoop.com/energy-announces-16m-for-ai-nuclear-physics-research/ Thu, 17 Aug 2023 19:39:25 +0000 https://fedscoop.com/?p=72076 The Department of Energy funds will go toward 15 projects to implement artificial intelligence and machine learning in nuclear physics research.

The Department of Energy announced millions in funding for projects aimed at using AI and machine learning to accelerate discoveries in nuclear physics.

The agency on Thursday announced $16 million that will go to 15 projects, including research into nuclear decay, several projects related to optimizing accelerator beams, and detector design for the Brookhaven National Laboratory’s Electron-Ion Collider project. Those projects will be conducted at eight national labs and 22 universities.

“Artificial intelligence has the potential to shorten the timeline for experimental discovery in nuclear physics,” Timothy Hallman, DOE associate director of science for nuclear physics, said in a release from the agency. 

He added: “Particle accelerator facilities and nuclear physics instrumentation face a variety of technical challenges in simulations, control, data acquisition, and analysis that artificial intelligence holds promise to address.”

The announcement marks the agency’s latest infusion to support key research. Earlier this month, DOE announced $112 million for fusion research, and last month it announced $33 million for clean energy technology research and $11.7 million for quantum computing research.  

Machine-learning models predicted ignition in fusion breakthrough experiment https://fedscoop.com/machine-learning-fusion-ignition/ Wed, 14 Dec 2022 02:40:32 +0000 https://fedscoop.com/machine-learning-fusion-ignition/ Recent ML advances helped ensure Lawrence Livermore National Laboratory's historic achievement on the path to zero-carbon energy.

Lawrence Livermore National Laboratory’s machine-learning models predicted the historic achievement of fusion ignition the week before its successful experiment on Dec. 5.

The National Ignition Facility’s design team fed the experimental design to the Cognitive Simulation (CogSim) machine-learning team for analysis, and it found the resulting fusion reactions would likely create more energy than was used to start the process — leading to ignition.

LLNL’s laser-based inertial confinement fusion research device is the size of three football fields and fired 192 laser beams — delivering 2.05 megajoules of ultraviolet energy to an almost perfectly round fuel capsule made of diamond — producing 3.15 megajoules of fusion energy output and achieving ignition in a lab for the first time. The achievement strengthens U.S. energy independence and national security at a time when nuclear testing is prohibited, and CogSim machine-learning models helped ensure the experiment avoided the previous year’s pitfalls.

“Last week our pre-shot predictions, improved by machine learning and the wealth of data we’ve collected, indicated that we had a better than 50% chance of exceeding the target gain of 1,” said LLNL Director Kim Budil, during a press conference at the Department of Energy on Tuesday.

NIF’s design team benchmarks complex plasma physics simulations and analytical models against experimental data collected over 60 years to create designs that will reach the extreme conditions required for fusion ignition. The most recent experiment reached pressures two times greater than the Sun’s and a temperature of 150 million degrees.

CogSim can run thousands of machine-learning simulations of an experimental design in the lead-up to a shot.
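
That workflow amounts to an ensemble-style pre-shot estimate: run a surrogate model over many plausible variations of the design and report the fraction of runs whose predicted gain exceeds 1 (fusion energy out divided by laser energy in; the Dec. 5 shot’s realized gain was roughly 3.15 MJ / 2.05 MJ ≈ 1.5). The toy sketch below shows that shape of calculation; the surrogate formula, parameters and uncertainties are invented placeholders, not LLNL’s CogSim models.

```python
# A toy, invented sketch of a pre-shot probability estimate: run a surrogate
# model over many perturbed versions of the design and report the fraction of
# runs whose predicted gain exceeds 1 (gain = fusion energy out / laser energy
# in). The surrogate formula, parameters and uncertainties are placeholders,
# not LLNL's CogSim models.
import numpy as np

rng = np.random.default_rng(seed=0)

def surrogate_gain(laser_energy_mj, capsule_thickness_um, symmetry):
    """Placeholder stand-in for a learned simulation surrogate."""
    return 0.93 * (laser_energy_mj / 2.0) * (capsule_thickness_um / 80.0) * symmetry**2

n_runs = 10_000
laser_energy = rng.normal(2.05, 0.02, n_runs)       # MJ delivered on target
capsule_thickness = rng.normal(88.0, 2.0, n_runs)   # microns (illustrative)
symmetry = rng.normal(0.98, 0.02, n_runs)           # implosion symmetry metric

gains = surrogate_gain(laser_energy, capsule_thickness, symmetry)
p_ignition = float(np.mean(gains > 1.0))
print(f"Estimated probability of exceeding gain 1: {p_ignition:.2f}")
```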

“We have made quite a bit of advancements in our machine-learning models to kind of tie together our complex radiation hydrodynamics simulations of the experimental data and learning,” said Annie Kritcher, principal designer.

But NIF’s August 8, 2021 experiment reached the threshold for ignition, and September’s experiment paved the way for a new laser capability. So the design team relied on traditional methods for the latest experiment and only used machine learning for predictions.

For this experiment, the design team thickened the fuel capsule to widen the margin for success and burn more fuel, and used improved models to increase the symmetry of the implosion by transferring more energy between laser beams in the second half of the pulse and readjusting the first half.

Kritcher credited those changes for the experiment’s success, though she called capsule defects, which are tougher to model and predict, the “main driver” in performance. While the diamond capsule is 100 times smoother than a mirror, X-ray tomography must be used to see, measure and count defects — generating a lot of data that software now helps analyze.

The robust capsule employed in the most recent experiment was not the most effective option, meaning future experiments should see improved performance, said Michael Stadermann, Target Fabrication program manager.

Firing the laser required an additional 300 megajoules of energy pulled from the power grid, which highlights an important point about the NIF: It’s a scientific demonstration facility, not an optimized one.

“The laser wasn’t designed to be efficient,” said Mark Herrmann, LLNL Weapons, Physics and Design program director. “The laser was designed to give us as much juice as possible to make these incredible conditions possible.”

The NIF is more than 20 years old, and some of its technology dates back to the 1980s.

New laser architectures, target fabrication methods, materials, computation and simulations, and machine learning have federal officials optimistic the U.S. can achieve President Biden’s goal of a commercial fusion reactor within the decade.

DOE invested $50 million in a public-private partnership in September around fusion pilot power plant designs, but Budil said such a plant is likely still four decades away without “concerted effort and investment” on the technology side.

The private sector invested $3 billion in fusion research last year, and DOE is partnering with the White House Office of Science and Technology Policy to map out a vision for commercial fusion with zero-carbon energy powering homes, cars and heavy industry.

To its credit, the Biden administration proposed the biggest research and development budget in U.S. history, and recent investments enabled LLNL’s latest achievement.

“I think this is an amazing example of the power of America’s research and development enterprise,” said OSTP Director Arati Prabhakar.

Government leaders tout big wins for their missions with AI, ML and cloud tools https://fedscoop.com/government-leaders-tout-big-wins-for-mission-ai-ml-cloud/ Wed, 14 Dec 2022 01:30:00 +0000 https://fedscoop.com/government-leaders-tout-big-wins-for-mission-ai-ml-cloud/ Executives from the U.S. Army, U.S. Postal Service and the State of New York highlight IT modernization initiatives at Google Government Summit.

Public sector organizations are making big strides supporting their missions by applying artificial intelligence, machine learning, analytics, security and collaboration tools to their initiatives.

That’s according to government executives from the U.S. Army, U.S. Postal Service and the State of New York who joined Google leaders on stage for the opening keynote at the Google Government Summit in Washington, D.C. on November 15.

From both a warfighter perspective and a user experience perspective, the U.S. Army “needs data for decision-making at the point of need” with “the right tools to get the job done” across a diverse set of working conditions, explained Dr. Raj Iyer, chief information officer of the U.S. Army.

During the event, Dr. Iyer shared that Google Workspace will be provisioned for 250,000 soldiers working in the U.S. Army. The first 160,000 users have migrated to Google Workspace in just two weeks – with plans for the remaining personnel to be up and running by mid-2023. Google Workspace was designed to be deployed quickly to soldiers across a variety of locations, jobs and skill levels.

Thomas Kurian, CEO for Google Cloud, also took the stage and expressed Google’s “deep commitment” to providing products and solutions that are mature, compliant and meet government’s mission goals.

“In the last four years, we’ve really heightened our work for the government…in the breadth of our products that are focused as solutions, and significantly ramped up our compliance certifications to serve agencies more fully. And we culminated that by launching Google Public Sector, the only division that Google has in the whole company dedicated to a single industry,” Kurian explained.

Though cloud was once viewed mainly as a source of economical, elastic compute, what makes Google Cloud competitive against other providers is its ability to offer solutions for different needs as the nature of cloud computing evolves, said Kurian.

“Organizations want to get smarter to make decisions, combining both structured and unstructured data. And they want to be able to do analysis no matter where the data sits — whether it’s in our cloud or other clouds. We are the only cloud that lets you link data and analyze it across multiple clouds, structured and unstructured, without moving a single piece of data.”

Cybersecurity was also a key concern raised during the keynote, namely the need to simplify security analysis tools so cyber experts can detect threats faster.

“Protecting governments isn’t just something for extraordinary times. The business of government requires constant vigilance,” said Karen Dahut, CEO for Google Public Sector, the company’s independent division focused solely on the needs of federal, state and local government and the education sector.

She cited the success of the New York City Cyber Command, which works across city government to detect and prevent cyber threats. They are accomplishing this “by building a highly secure and scalable data pipeline on Google Cloud so their cyber security experts can detect threats faster.”

Google has also recently strengthened its ability to help its customers access data on known threats with the recent acquisition of Mandiant. Kevin Mandia, CEO and director for Mandiant, now a part of Google Cloud, took the stage to explain how the company has been uniquely positioned to “own that moment of incident response” and threat attribution. This has given the company an immense collection of data on cyber incidents and intrusion techniques.

“When Mandiant and Google combined,” he explained, “we took the security DNA of Mandiant…and joining — what I believe is the best AI on the planet, best machine learning on the planet, best big data on the planet — and we’re bringing what we know [about cybersecurity] to scale.”

The keynote featured several seasoned technology leaders who each shared how cloud, artificial intelligence and machine learning tools are helping their agencies achieve mission outcomes and keep pace with cybersecurity needs, including:

  • Pritha Mehra, CIO and Executive VP, United States Postal Service
  • Rajiv Rao, CTO and Deputy CIO, New York State
  • Teddra Burgess, Managing Director, Federal Civilian, Google Public Sector
  • Leigh Palmer, VP, Delivery and Operations, Google Public Sector

Watch the keynote in its entirety on the Government Summit On-Demand page. This article was produced by Scoop News Group for FedScoop and underwritten by Google Cloud.
