MITRE Archives | FedScoop
https://fedscoop.com/tag/mitre/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

MITRE announces AI sandbox for federal agency use
https://fedscoop.com/mitre-announces-ai-sandbox-for-federal-agency-use/
Tue, 07 May 2024 20:06:15 +0000
The nonprofit operator of federally funded research and development centers for agencies said the tool will be operational by the end of 2024.

By the end of the year, an artificial intelligence sandbox built by MITRE is expected to be operational for federal agency use, according to a Tuesday announcement from the nonprofit.

MITRE, which operates federally funded R&D centers on behalf of agencies, announced during the AI Expo for National Competitiveness that it anticipates applying the sandbox to “national security, healthcare, transportation and climate,” according to the press release. The sandbox will be powered by AI data center infrastructure from NVIDIA.

MITRE said it is offering federal entities access to the tool’s benefits through existing contracts with R&D centers that the organization operates with agencies, including the Department of Homeland Security, the National Institute of Standards and Technology, the Department of Defense, the Federal Aviation Administration and others. 

Charles Clancy, senior vice president and chief technology officer at MITRE, referenced President Joe Biden’s October AI executive order that outlines a series of responsibilities and deadlines for agencies to use the technology and support its implementation. Agencies “often lack the computing environment necessary for implementation and prototyping,” Clancy said.

“Our new Federal AI Sandbox will help level the playing field, making the high-quality compute power needed to train and test custom AI solutions available to any agency,” he added. 

The sandbox provides computing power capable of training AI applications, such as large language models and other generative AI tools, for government use, according to the press release.

MITRE said the supercomputer is also able to “train multimodal perception systems that can understand and process information from multiple types of data at once,” including images, environmental and medical sensors, text, radar and more. 

Tuesday’s announcement comes nearly two months after the nonprofit announced the opening of a facility to test government use of AI for possible risks through red-teaming and human-in-the-loop experimentation.

MITRE launches lab to test federal government AI risks
https://fedscoop.com/mitre-federal-ai-lab-launch/
Tue, 26 Mar 2024 17:57:24 +0000
The new AI Assurance and Discovery Lab in McLean, Virginia, is aimed at helping federal agencies test and evaluate systems that use AI.

Public interest nonprofit corporation MITRE opened a new facility dedicated to testing government uses of artificial intelligence for potential risks Monday.

MITRE’s new AI Assurance and Discovery Lab is designed to assess the risk of AI-enabled systems through simulated environments, red-teaming, and “human-in-the-loop experimentation,” among other methods. The lab will also test systems for bias, and users will be able to control how their information is used, according to the announcement.

In remarks presented at the Monday launch, Keoki Jackson, senior vice president of MITRE National Security Sector, pointed to a poll by the corporation that found fewer than half of American respondents thought AI would have the trust needed for its applications.

“We have some work to do as a nation, and that’s where this new AI lab comes in,” Jackson said.

Mitigating the risks of AI in government has been a topic of interest for lawmakers and was a key component of President Joe Biden’s October executive order on the technology. The order, for example, directed the National Institute of Standards and Technology to develop a companion to its AI Risk Management Framework for generative AI and create standards for AI red-teaming. MITRE’s new lab bills itself as a testbed for that type of risk assessment.

“The vision for this lab really is to be a place where we can pilot … and develop these concepts of AI assurance — where we have the tools and capabilities that can be adopted and applied to the specialized needs of different sectors,” Charles Clancy, MITRE senior vice president and chief technology officer, said at the event.

Clancy also noted that both the “assurance” and “discovery” aspects of the new lab are important. Focusing too much on assurance and getting “tangled up in security” could prevent balancing those risks “against the opportunity,” he said.

Members of the Virginia congressional delegation were also present to express their support at the event, which was held at MITRE’s McLean, Virginia, headquarters where the new lab is located. The three lawmakers were Reps. Gerry Connolly and Don Beyer, and Sen. Mark Warner. All are Democrats. 

Warner, in remarks at the event, said he worries that the race for the best large language model by companies like Anthropic, OpenAI, Microsoft and Google might be so intense that those entities aren’t building in assurance.

“Getting it right is as critical as any mission I can imagine, and I think, unfortunately, that we’re going to have to make sure that we come up with the standards,” Warner said. He added that policymakers are still trying to figure out whether the federal government should house AI expertise in one location, such as NIST or the Office of Science and Technology Policy, or spread it out across the government.

For MITRE, working on AI projects isn’t new. The corporation has been doing work in that space for roughly 10 years, Miles Thompson, MITRE’s AI assurance solutions lead, told FedScoop in an interview at the event. “Today really codifies that we’re going to provide this as a service now,” Thompson said of the new lab.

As part of its approach to evaluation, MITRE created its own process for AI risk assessment it calls the AI Assurance Process, which is consistent with existing standards for things like machinery and medical devices. Thompson described the process as “a stake in the ground for what we think is the best practice today,” noting that it could change with the evolving landscape. 

Thompson also said the level of assurance for that process changes depending on the system and how it’s being used. The consequences for something like Netflix’s recommendations system are low whereas those for AI for self-driving cars or air traffic control are dire, he said.

An example of how MITRE has applied that process to work with an agency is its recent work with the Federal Aviation Administration, Thompson said. 

The FAA and its industry partners came to MITRE to talk through potential tweaks to a standard inside the agency pertaining to software in airborne systems (DO-178C) that doesn’t currently address AI or machine learning, he said. Those conversations addressed the question of how that standard might change to be able to say “this use of AI is still safe,” he said. 

MITRE researched air traffic language AI tool for FAA, documents show
https://fedscoop.com/mitre-air-traffic-conversation-ai-tool-faa-dot/
Wed, 14 Feb 2024 22:34:02 +0000
The Department of Transportation has been relatively mum about its work on AI.

MITRE, a public interest research nonprofit that receives federal funding, proposed a system for transcribing and studying conversations between pilots and air traffic controllers, according to documents obtained by FedScoop through a public records request. 

A presentation dated August 2023 and titled “Advanced Capabilities for Capturing Controller-Pilot dialogue” shows that MITRE engaged in a serious effort to study how natural language processing could be used to help the Federal Aviation Administration, and, in particular, to help with “understanding the safety-related and routine operations of the National Airspace System.”

MITRE, which supported the project through its Center for Advanced Aviation System Development, told FedScoop that the prototype is currently being transitioned to the FAA for “potential operational implementation.” Otherwise, it’s not clear what the current status of the tool is, as the agency’s artificial intelligence use case inventory was last updated in July 2023, according to a DOT page. The FAA did not respond to a request for comment and instead directed FedScoop to the Department of Transportation. 

“Communications between pilots and air traffic controllers are a crucial source of information and context for operations across the national airspace,” Greg Tennille, MITRE’s managing director for transportation safety, said in a statement to FedScoop in response to questions about the documents. “Collecting voice data accurately, efficiently and effectively can provide important insights into the national airspace and unearth trends and potential hazards.” 

The August 2023 presentation describes several ways that natural language processing, a type of AI meant to focus on interpreting and understanding speech and text, could be fine-tuned to understand conversations between air traffic controllers and pilots. The project reported on the performance of different strategies and models in terms of accuracy and provided recommendations. At the end, it also describes a brief exploration of how ChatGPT might be able to help with comprehension of Air Traffic Control sub-dialogues, noting that “the results were surprisingly good.”  

The presentation reveals how the often-overwhelmed aviation agency might try to take advantage of artificial intelligence and comes as the Biden administration continues to push federal agencies to look for ways to deploy the technology. 

At the same time, it also outlines potential interest in ChatGPT. While the Department of Transportation said it doesn’t have a relationship with OpenAI, other documents show that officials within the agency are interested in generative AI.  

The Department of Transportation and ChatGPT

The reference to ChatGPT in the project, though it appears to be provisional and not a core part of the research, is more evidence of how the Department of Transportation might use generative AI tools in the future. FedScoop previously reported, for example, that the DOT’s Pipeline and Hazardous Materials Safety Administration had disclosed a use case — described as “planned” and “not in production” — involving “ChatGPT to support the rulemaking processes.”

PHMSA, which said it’s continuing to study the short- and long-term risks and benefits associated with generative AI, has said it does not plan on using the technology for rulemaking. The agency also said that it has an agreement with Incentive Technology Group, worth several hundred thousand dollars, to explore generative AI pilots.

PHMSA said that the project did not involve ChatGPT, but instead involved “Azure OpenAI Generative LLM version 3.5.” (OpenAI explains on its website that GPT-3.5 models can be used to understand and generate natural language or code, but PHMSA did not explain whether the reference to ChatGPT in the AI use case disclosure was a mistake or a distinct project from its work with Incentive Technology Group.)

Notably, while other agencies are beginning to develop policies for generative AI, the Department of Transportation has not responded to questions from FedScoop about what policies or guidance it might have surrounding the technology. 

Emails obtained by FedScoop through public records requests show that Chief Data Officer Dan Morgan had on hand a “generative AI” guidance document attributed to the government of New Zealand. An email last summer to the Department of Transportation’s AI Task Force from Matt Cuddy, an operations research analyst at the DOT’s Volpe National Transportation Systems Center, shows that the agency had made large language models a topic of interest.

A publicly available document from 2019 said that through the task force, the DOT had made transportation-related AI “an agency research & development priority.” 

Last year, FedScoop reported that the Department of Transportation had disclosed the use of ChatGPT for code-writing assistance in its inventory, but then removed the entry and said it was made in error. The department has not responded to questions about how that error actually occurred. Emails obtained by FedScoop show that the incident attracted attention from Conrad Stosz, the artificial intelligence director in the Office of the Federal Chief Information Officer. 

In regard to this story, the Department of Transportation told FedScoop again that the FAA ChatGPT entry was made in error and that the “FAA does not use Chat GPT in any of its systems, including air traffic systems.” It also said that the use case was unrelated to the MITRE FAA project. 

New intel program will tap AI to help personnel ‘walk through’ unfamiliar areas before they arrive
https://fedscoop.com/new-intel-program-will-tap-ai-to-help-personnel-walk-through-unfamiliar-areas-before-they-arrive/
Tue, 21 Jun 2022 20:29:28 +0000
FedScoop got a preview of a new 42-month research program that kicks off this week.

The intelligence community’s primary research arm launched a new program to develop software algorithm-based systems that will fuse imagery captured from various altitudes and angles — including from traffic cameras, drones, satellites and other platforms — to build immersive, photorealistic virtual environments of unfamiliar locations across the globe.

Through its new Walk-Through Rendering Images of Varying Altitudes (WRIVA) program, the Intelligence Advanced Research Projects Activity (IARPA) will foster technology to acquaint government officials with potentially dangerous places before they deploy there.

“Imagine if law enforcement, the military, or aid workers could virtually drop themselves into a location and look around and become familiar with it before their arrival. These groups often have to deliver rapid support and lifesaving aid to unfamiliar or dynamic areas. Allowing them to prepare ahead of time helps keep them out of harm’s way when they have to conduct these activities,” IARPA Program Manager Ashwini Deshpande told reporters Friday during a preview of the project and the accompanying broad agency announcement (BAA) for federal funding, prior to its release on Tuesday. 

Inspired by a tween

WRIVA marks the first research project Deshpande is steering for the agency. She was inspired to pitch it after an experience that’s likely familiar to many parents. 

“One day I was trying to explain to my tween where I was going to pick her up when she was hanging out with her friends, but we ran into a problem because I couldn’t describe where I was going to be in a way that she could figure out. And when we went around looking for more information, we discovered that part of the problem could be that the area had recently undergone a lot of changes and we were working with some outdated information,” she recalled.

The solution she and her daughter landed on — picking a different spot to meet — seemed easy. But the entire scenario got Deshpande thinking about similar challenges for government officials.

“For instance, what if law enforcement needs to know where they might be vulnerable when responding to a threat? Or what if we need to respond to needs in an area where the landscape has recently changed due to a bombing or natural disaster?” she said. “Finding a solution is really critical.”

With years of experience as a chief scientist and technical advisor, Deshpande brought the problem to IARPA — a federal research hub that works on projects for the intelligence community (IC) and other agencies — and successfully pitched the project.

“WRIVA is aiming to help users visually see or plan a mission activity. And this will be a game-changer for the IC, military, law enforcement, and also for humanitarian and disaster relief,” she said.

Heaps of images and data from a number of sources about a specific location can be applied to create site models, which can then enable personnel to rehearse missions with better knowledge about where they’ll be operating. But such tools typically require massive amounts of that type of information to work properly. Through WRIVA, Deshpande and her team want to produce software systems to perform site modeling in scenarios where huge volumes of ground-level imagery with reliable metadata are not easily accessible or available.

“Our goal with WRIVA is to leapfrog off of some recent advancements in machine learning and computer vision and to further the technology in the areas of reducing the number of viewpoints that are required to create site models,” she noted. 

Those involved in the project also intend to increase the speed at which site models are completed and approved for use within the federal government. The envisioned outcomes of WRIVA will essentially be algorithms and methodologies that enable officials to rapidly create site models without full 360-degree coverage of a location — and methodologies to repair corrupted imagery.

“If you think about the history of us wanting to familiarize ourselves with an area that we can’t get to, that’s existed for so long. I mean, it goes back to [Greek mythology with] Icarus, and to the time in World War I when we were strapping cameras on the pigeons,” Deshpande said. “But I think the thing that’s going to be really different here is the level of immersiveness that these models will experience.”

Other site modeling capabilities exist within the IC and Defense Department. However, the processes to build such tools are often incredibly slow, tedious and require a lot of lead time, she noted.

“I think the real technological breakthroughs will be the timeliness of the model creation to allow us to respond very quickly, as well as the fact that you won’t need to spend hours and require a ton of expertise to collect the basic imagery to create these site models,” Deshpande said. “This will really allow people to practice almost like they’re in a video game, you know, practicing their scenarios before they have to go and conduct their mission in person.”

A ‘transitionable’ capability

The WRIVA program is slated to be a 42-month research effort with hands-on work beginning in fiscal 2023. 

The locations where it will actually be applied will depend on the nation’s specific needs three-and-a-half years from now, once the envisioned technology is fully developed.

“But I think we are planning on conducting experiments that represent a variety of different scenarios,” Deshpande explained. “One of the areas of concern is improvised housing such as refugee camps — we are planning on doing data collections that emulate scenarios like that. We are also planning on doing data collections that apply towards suburban environments. We have a wide-ranging set of field experiments and challenges for proposers to look forward to.”

So far, IARPA has named MITRE Corp., Johns Hopkins University Applied Physics Laboratory and Massachusetts Institute of Technology (MIT) Lincoln Laboratory as testing and evaluation partners for this effort. 

Building on their broad pursuits to create and refine transformational technologies for disaster relief and humanitarian assistance, MIT officials will work to generate applicable high-fidelity datasets and build infrastructure to assess the performance of capabilities IARPA’s partners produce.

“There’s almost immediate application for technology like this within damage quantification — so you could think just after a disaster when things are damaged, and people are trying to understand what is broken and what needs to be fixed, a tool like this could actually help not only convey that information, but also notably here in the U.S., can expand the available workforce that could come and respond,” MIT Lincoln Laboratory’s Group Leader Adam Norige told reporters during the briefing Friday. 

Research agencies like IARPA often conduct exploratory ventures and don’t play a big part in completely operationalizing the technology they create — but, considering applications that already exist, those involved in WRIVA are “fairly confident” it will eventually be fielded.

“Our goal is to create a transitionable capability. We have several partners across the DOD and IC who are interested and who are heavily engaged as stakeholders, as members of our government advisory panel and as potential transition partners, and we’re working closely with them even starting now to make sure that their needs are addressed and that they are also putting in the infrastructure necessary to be able to take the WRIVA program and implement it against their own operational needs,” Deshpande said.

“We’ve got a lot of interest from, frankly, several organizations within the military, but we are also working closely with IC elements that more directly support the warfighter,” she noted.

GSA won’t use facial recognition with Login.gov for now
https://fedscoop.com/gsa-forgoes-facial-recognition-for-now/
Wed, 09 Feb 2022 18:18:20 +0000
The agency's secure sign-in team continues to research the technology and to conduct equity and accessibility studies.

The General Services Administration won’t use facial recognition to grant users access to government benefits and services for now, but its secure sign-in team continues to research the technology.

“Although the Login.gov team is researching facial recognition technology and conducting equity and accessibility studies, GSA has made the decision for now not to use facial recognition, liveness detection, or any other emerging technology in connection with government benefits and services until rigorous review has given us confidence that we can do so equitably and without causing harm to vulnerable populations,” said Dave Zvenyach, director of TTS, in a statement provided to FedScoop.

“There are a number of ways to authenticate identity using other proofing approaches that protect privacy and ensure accessibility and equity.”

Login.gov ensures users are properly authenticated for agencies’ services and verifies identities, and the Technology Transformation Services team that manages it is also studying facial recognition equity and accessibility.

GSA‘s methodical evaluation of the technology contrasts with that of the IRS, which announced Monday that it would transition away from using ID.me‘s service for verifying new online accounts after the company admitted it had misrepresented its reliance on 1:many facial recognition — a system proven to pose greater risks of inaccuracy and racial bias.

Login.gov currently collects a photo of a state-issued ID and other personally identifiable information, which are validated against authoritative data sources. The last step involves either sending a text message to the user’s phone number or a letter to their address containing a code that must be provided to Login.gov to complete identity verification.
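The two-step flow described above — document-plus-PII validation followed by an out-of-band code — can be sketched roughly as follows. This is an illustrative sketch only, not Login.gov's actual API; all function and field names here are hypothetical.

```python
# Hypothetical sketch of the identity-proofing flow described above.
# Names are illustrative; this is not Login.gov's real interface.
import secrets

def start_verification(id_photo: bytes, pii: dict) -> dict:
    """Step 1: collect a state-ID photo and other PII, then validate
    them against authoritative data sources (stubbed out here)."""
    if not id_photo or not pii.get("name"):
        return {"status": "rejected"}
    # A one-time code is issued for out-of-band confirmation,
    # delivered by text message or a mailed letter.
    return {"status": "pending", "code": f"{secrets.randbelow(10**6):06d}"}

def complete_verification(session: dict, submitted_code: str) -> bool:
    """Step 2: the user returns the code to finish identity proofing."""
    return session["status"] == "pending" and secrets.compare_digest(
        session["code"], submitted_code
    )

session = start_verification(b"jpeg-bytes", {"name": "Jane Doe"})
verified = complete_verification(session, session["code"])
```

The design point the sketch captures is that possession of the phone number or mailing address acts as a second, independent check on the documentary evidence collected in step one.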

More than 60 applications across 17 agencies — including USAJOBS at the Office of Personnel Management and the Paycheck Protection and Disaster Loan Application programs at the Small Business Administration — use Login.gov, encompassing more than 17 million users.

GSA’s rejection of facial recognition for Login.gov was first reported by The Washington Post, but the technology is most certainly in the agency’s, and the government’s, future.

The White House Office of Science and Technology Policy is crafting an Artificial Intelligence Bill of Rights to protect people from technology infringements and focused its initial request for information on biometrics like facial recognition.

OSTP’s definition of biometrics needs refining, MITRE argues, and not all facial recognition algorithms are prejudicially biased: technical and operational biases also exist and don’t necessarily lead to inequitable outcomes.

“There are not direct correlations between technical and operational biases and prejudicial bias,” Duane Blackburn, science and technology lead at MITRE‘s Center for Data-Driven Policy, told FedScoop in January. “Even though in a lot of policy analyses they’re treated as equivalent.”

MITRE: White House biometrics definition requires rethink
https://fedscoop.com/biometrics-definition-ai-bill-of-rights/
Wed, 09 Feb 2022 16:22:06 +0000
OSTP conflated three distinct concepts as biometrics, which will lead to confusion as it attempts to craft an AI Bill of Rights.

MITRE’s Center for Data-Driven Policy recommended the White House redefine biometrics as it develops an Artificial Intelligence Bill of Rights, in a request for information response submitted last month.

In its RFI, the Office of Science and Technology Policy married biometrics for identification with technology for inferring emotion or intent and with medicine’s understanding of the term as any biology-based data. MITRE would rather OSTP use the National Science and Technology Council‘s internationally accepted definition of biometrics, limiting the term to identity matters.

The U.S. lacks a comprehensive privacy law that would serve as the foundation for regulating AI, which has policy groups like the Open Technology Institute pressing the Biden administration for increased oversight and safeguards. OSTP wanted RFI respondents to examine biometrics through the lens of AI to inform the AI Bill of Rights the government will use to protect people from problematic technologies, but in doing so it conflated three distinct concepts, which MITRE holds will lead to confusion.

“They kind of grouped multiple, different technologies into a single grouping, and those technologies all have different backgrounds, different operational issues and different policy considerations,” Duane Blackburn, science and technology policy lead at the Center for Data-Driven Policy, told FedScoop. “Grouping them together like that is going to really complicate the policy analysis and potentially leads to making improper decisions.”

MITRE’s second recommendation for OSTP is that it make evidence- and science-based policy decisions because misconceptions about identity biometrics abound — the first being that they’re not scientific in nature. Blackburn points to the decades of biometrics research, international standards, accreditation programs for examiners and university degrees.

The second misconception is about how face recognition technologies, specifically, are biased. Most people assume the bias is prejudicial for and against certain ethnic groups, and while that may be true for some algorithms, the assumption overlooks technical and operational bias, Blackburn said.

When face recognition technologies were first being developed 20 years ago, image lighting, pose angle and pixel numbers greatly impacted results — known as technical bias.

A face recognition algorithm trained for longer with more data performing more accurately than another is an example of operational bias, which impacts how the system works.

“There are not direct correlations between technical and operational biases and prejudicial bias, even though in a lot of policy analyses they’re treated as equivalent,” Blackburn said. “You can take a biometric algorithm with no differential performance technical bias and create systems with massive prejudicial bias.”

The opposite is also true, he added.

Lastly, MITRE recommends OSTP ensure any policy decisions around biometrics are focused and nuanced, given the many biometric modalities that exist: fingerprint, face recognition, iris recognition and some aspects of DNA.

“You can’t really come up with a singular policy that’s going to be proper for all three or four of those modalities,” Blackburn said.

Using biometrics to unlock a phone is “significantly different” than law enforcement using it to identify a criminal, and decisions will need to be made about what data sharing is allowable under the AI Bill of Rights, he added.

An OSTP task force released a report on scientific integrity in early January reinforcing the need for technical accuracy when making policy decisions. Challenges aside, Blackburn said he remains optimistic OSTP is up to the task of crafting an AI Bill of Rights.

“How can we set up the policy so that it’s accurate from a technical, scientific-integrity perspective, while also meeting the objectives of the public that they represent,” Blackburn said. “It’s not easy, it takes a lot of time and effort, but OSTP and the federal agencies working on these issues have a lot of experience doing that.”

Biden nominates LaPlante as defense acquisition chief
https://fedscoop.com/biden-nominates-william-laplante-as-defense-acquisition-chief/
Tue, 30 Nov 2021 16:19:16 +0000
William LaPlante previously served as Air Force acquisitions leader under the Obama administration.

President Biden has nominated veteran military technologist William LaPlante as undersecretary for acquisition and sustainment at the Department of Defense.

The job would mark a return to government for LaPlante, who previously served as assistant secretary for acquisition, technology and logistics at the Air Force during the Obama administration. In that role, he helped to lead acquisition programs including for the B-21 long-range strike bomber.

If the Senate confirms him as undersecretary, LaPlante will fill a vacancy created by the departure of Ellen Lord, who left the acquisition job at the end of 2020 along with the outgoing Trump administration.

The undersecretary for acquisition and sustainment at the DOD acts as the main staff assistant and adviser to the secretary of Defense for all matters relating to military procurement including technology and logistics.

LaPlante has over three decades’ experience working on military procurement and policy, most recently as president and chief executive officer of the nonprofit Draper Laboratory. Prior to that, he was senior vice president for the MITRE Corporation’s National Sector, where he oversaw the operation of federally funded research and development centers for the departments of Defense and Commerce.

LaPlante is a present and past member of several scientific boards and commissions focused on bettering national security, including serving as a commissioner on the congressionally mandated Section 809 Panel, which previously undertook a review of military acquisition policies.

Connolly floats legislative fix for IT working capital funds https://fedscoop.com/connolly-floats-legislative-fix-for-it-working-capital-funds/ https://fedscoop.com/connolly-floats-legislative-fix-for-it-working-capital-funds/#respond Tue, 22 Jun 2021 14:59:34 +0000 https://fedscoop.com/?p=42270 The representative says new laws may be required to give agencies the authority to create tech modernization funds.

The post Connolly floats legislative fix for IT working capital funds appeared first on FedScoop.

]]>
Congress probably needs to revisit the Modernizing Government Technology Act because some agencies, acting on legal advice from their general counsels, still haven’t created IT working capital funds, Rep. Gerry Connolly, D-Va., said Monday.

The Subcommittee on Government Operations he chairs may open up a policy dialogue with those agencies and their counsels, but more likely a legislative fix is needed, Connolly said.

Only three of the 24 agencies graded received “As” for implementing the MGT Act on the most recent FITARA scorecard in December, in part because their lawyers continue to tell leaders they lack the transfer authority to put appropriated money into IT working capital funds.

“[I]n some cases they’ve formed the funds,” Connolly said, during a MITRE event. “In some other cases they have not because they’ve been advised legally they don’t have the authority, even though the law we passed says you do.”

A jurisdictional “turf battle” between the House Oversight and Appropriations committees could ensue over the working capital funds — designed to bank unused IT dollars until agencies are ready to invest them in long-term modernization projects — unless they work together, Connolly said.

Agencies must also be required to produce plans for the use of their IT working capital funds, he said.

“From my point of view, it’s just critical every agency has a working capital fund so that they can stay abreast of changes in technology, implement the latest encryption programs and measures to protect the assets in the databases and proprietary information, and retire those legacy systems,” Connolly said.

National cyber director role in the spotlight after SolarWinds hack https://fedscoop.com/national-cyber-director-solarwinds/ https://fedscoop.com/national-cyber-director-solarwinds/#respond Tue, 29 Dec 2020 15:50:02 +0000 https://fedscoop.com/?p=39488 President-elect Joe Biden's first-ever cyber czar could prove instrumental in crafting a national cyber R&D strategy and coordinating dispersed efforts to secure the supply chain.

The post National cyber director role in the spotlight after SolarWinds hack appeared first on FedScoop.

]]>
The compromise of at least seven federal agencies through the SolarWinds hack has technology experts stressing the importance of a national cyber director (NCD) role within the incoming Biden administration.

President-elect Joe Biden is expected to appoint the first-ever NCD, a position the National Defense Authorization Act of 2021 will create, after taking office Jan. 20.

The role could prove instrumental in preparing for future emergencies like the one at SolarWinds — one of the most serious incidents of digital espionage in U.S. history — by ensuring more even implementation of the National Cyber Strategy across departments, experts say.

“An NCD doesn’t guarantee you don’t have a cyber hack, either one that does damage or an espionage hack like this,” Mark Montgomery, senior fellow at the Foundation for Defense of Democracies, told FedScoop. “However, what we think an NCD will do is significantly raise the overall readiness of the federal agencies in cybersecurity and ensure that there’s better public-private collaboration.”

The Cybersecurity Solarium Commission recommended the creation of an NCD in a March report and successfully pushed for its inclusion in this year’s NDAA, well before the SolarWinds hack, which has been tied to Russia.

But a “drastic” gap remains between the cyberdefenses of the Department of Defense and intelligence community (IC) and the more static defenses of civilian agencies, said Montgomery, who serves as executive director of the Solarium Commission when it’s active. As a Cabinet-level official, the NCD could help close that gap by advocating that the Cybersecurity and Infrastructure Security Agency receive sufficient resources for securing .gov IT infrastructure.

First the NCD must build relationships inside the White House with the National Security Council, National Economic Council, Office of Science and Technology Policy, and Office of Management and Budget, before turning to Cabinet and agency heads. Then comes defensive cybersecurity campaign planning, Montgomery said.

Effectively integrating defensive cyber-capabilities within agencies to protect against another SolarWinds-style hack will require the NCD to improve coordination with industry — ideally by spearheading a national cyber research and development strategy, multiple experts say.

“I think the NCD position could, in fact, act to catalyze that strategy,” said Samuel Visner, a tech fellow at MITRE, in an interview. “They’d be in a good position to work cooperatively with the White House OSTP, but they would also be in a position — not only to reach out to industry and academia — but to help modulate the programs and budgets of the various agencies that have cyber research and development resources.”

A smarter supply chain

The NCD’s “whole-of-nation” strategy could create a community of practice with government, industry and academic representatives to address pressing challenges, Visner said.

The government also lacks a supply chain strategy for information and communications technologies like those exploited by the Russian hacking group APT29, or Cozy Bear, in the SolarWinds hack. To date, parts of the departments of Commerce, Defense, Energy, Health and Human Services, Homeland Security, State, and Treasury have reportedly been compromised as a result.

Within the Alliance for Digital Innovation‘s 2021 priorities for the Biden administration is a “smart supply chain” plan that the NCD could also implement. Current government supply chain efforts are “dispersed and poorly coordinated,” hindering agencies’ abilities to defend against nation-state actors, secure government data and protect intellectual property, according to the association of commercial companies.

Industry wants a better understanding of which agencies are in charge and with whom to share information, because several agencies have set up supply chain analysis centers, said Matthew Cornelius, executive director of ADI, in an interview.

Congress created the Federal Acquisition Security Council in 2018, and there’s also the National Risk Management Center within the Department of Homeland Security. And DOD, the Department of Commerce and the IC have robust efforts underway as well.

Making sense of the field starts with the NCD stepping in to coordinate information sharing.

“If they can iron out some of the inconsistencies and some of the fiefdoms that we have in supply chain right now and work to deliver a cohesive strategy, it will make it easier for the government and industry to work together,” Cornelius said.

Individual agencies’ efforts might not need to be halted, but they definitely shouldn’t be working at cross purposes, Montgomery said.

The NCD provision in the NDAA was itself a Solarium Commission suggestion. The public-private commission has floated the names of several of its members as potential national cyber director candidates, but the Biden transition team has so far stayed silent on potential appointees.

Someone with a mix of government, private sector and IC experience, who also has “sharp elbows,” would be helpful, Montgomery said.

“They have to be able to win bureaucratic battles with Type-A Cabinet members because in the end even after SolarWinds — give it three months to die down; it happened on my predecessor’s watch — there are going to be Cabinet members who, when the time comes to make the hard budget cut, cybersecurity will get cut because it’s not a primary mission of the department or agency,” Montgomery said.

Regardless of who ultimately lands the role, its filling has become all the more important in the wake of the SolarWinds hack.

“While we are confident that our federal cybersecurity leaders are doing all they can to mitigate any impact of this active exploitation, there is no question that a consistent, unified approach is necessary to rid federal networks of any of its remnants,” said Rep. Dutch Ruppersberger, D-Md., by email. “This is why I, along with my colleagues in Congress, have supported the creation of a national cyber director.”

National AI strategy resolution could pass during lame duck session https://fedscoop.com/national-ai-strategy-resolution/ https://fedscoop.com/national-ai-strategy-resolution/#respond Fri, 25 Sep 2020 12:50:46 +0000 https://fedscoop.com/?p=38325 The authors would like to see an AI oversight framework, similar to the FITARA Scorecard, developed for agencies.

The post National AI strategy resolution could pass during lame duck session appeared first on FedScoop.

]]>
Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., want to see their resolution to create a national artificial intelligence strategy passed this year, possibly during Congress’ upcoming “lame duck” session.

The House Committee on Science, Space and Technology is handling the resolution, and its authors are currently “twisting arms” for more co-sponsors, Hurd said during the launch of MITRE‘s Center for Data-Driven Policy on Thursday.

Quick passage of the resolution would provide the next couple of congresses with a framework for debating AI specifics, as the U.S. battles China to become the world leader in the emerging technology, Hurd said.

“The other idea is to take those 80-plus recommendations and turn those into individual bills,” Hurd said. “Can we narrow them as much as possible? Because that makes them easier to pass.”

Rather than establish a “Department of AI,” a national strategy should ensure every congressional committee has oversight plans and that every relevant agency has a role in organizing and securing data, procuring high-capacity compute and developing algorithms, Hurd said.

If the resolution is passed or individual laws are enacted, Hurd hopes Congress will oversee the adoption of federal AI much as it has with federal IT acquisition reform.

The outgoing congressman wasn’t in Congress when the Federal Information Technology Acquisition Reform Act (FITARA) was passed in late 2014 and said he was “suspicious” when the Government Accountability Office suggested a scorecard for measuring agencies’ compliance with IT reforms.

“But I can say that scorecard really did change behaviors, changed how the government operated, because we were keeping score,” Hurd said.

All agencies received passing FITARA grades for the first time in August, and Hurd and Kelly want to see similar oversight results with AI.

GAO is currently developing an oversight framework for algorithmic explainability, data quality and governance, and bias mitigation, said Chief Scientist Timothy Persons.

“I hope there’s some beautiful scorecard, like the FITARA Scorecard, that we can use,” Hurd said. “I don’t know what that is, but I think that’s the next evolution of this document and plan that Robin and I have been grateful to be able to work with.”
