automation Archives | FedScoop
https://fedscoop.com/tag/automation/
FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

How automation and AI are streamlining traditional government IT modernization
https://fedscoop.com/how-automation-ai-streamline-government-it-modernization/
Wed, 20 Mar 2024
A new report highlights how automation and process mining tools give agencies, including USDA, IRS and the U.S. Navy, new abilities to modernize operations.

Federal agencies are undertaking the “largest wholesale modernization in government history,” a former government IT leader says in a new report. At the same time, agency leaders are coming to terms with the reality that the traditional model for IT modernization, with its years of planning and execution, is no longer sustainable.

Fortunately, advances in process automation and AI are giving government agencies new capabilities to identify system bottlenecks and streamline business and operations processes in ways that can improve business and mission outcomes in a fraction of the time and cost of traditional IT modernization projects.

Read the report.

Today’s business process mining and automation tools allow “executives to shift their dependence on outsourced knowledge to in-house control for continuous problem-solving,” according to Todd Schroeder, formerly a U.S. Department of Agriculture IT systems chief who is now vice president for public sector at UiPath. “That translates into a radically different time-to-value modernization quotient — and a radically lower cost structure,” he says in the report produced by Scoop News Group and underwritten by UiPath.

The report “How Automation and AI are Changing the Traditional Approach to Government IT Modernization” highlights how robotic process automation has evolved from a tool to streamline redundant tasks such as financial accounting work to what has increasingly become an enterprise-wide effort to improve mission outcomes.

One example cited in the report is the work underway at the USDA’s Intelligent Automation Center of Excellence office. The office is automating routine processes across the department and fostering a rising generation of “citizen developers” who automate the work processes of their own jobs.

The report also highlights how automation work that began in the Navy’s Financial Management and Comptroller’s Office is now expanding to improve operations in other Navy support offices and across other government departments.

Schroeder says agency leaders are on the verge of realizing even greater capabilities with UiPath’s push into AI. UiPath’s AI Trust Layer platform, he says, provides customers with a new level of “auditability, traceability, observability, and replicability” when applying AI to business processes.

“This is the moment,” says Schroeder, “when agency leaders not only have the means to rethink how they modernize but reimagine how federal workers can accomplish their work in new and more effective ways. And that’s critical if the government is to catch up and meet the needs of society’s requirements.”

Download and read the full report.  

This article was produced by Scoop News Group for FedScoop and sponsored by UiPath.

FBI finance team working on first software bot
https://fedscoop.com/fbi-finance-team-to-roll-out-first-software-bot/
Tue, 09 May 2023

The Federal Bureau of Investigation’s finance modernization team said Tuesday it will soon roll out a bot for automatically paying invoices and updating budget line items that could act as a pilot for the future automation of back-office systems at the agency.

The launch of the bot comes amid a push across the federal government to use robotic process automation to streamline agency processes. It will automate the currently manual process of paying invoices every month and updating the budget line items needed to pay invoices to customers or vendors.
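
The article doesn’t describe the bot’s internals, and the FBI’s implementation almost certainly runs on a commercial RPA platform rather than hand-written code. Purely as an illustration of the workflow being automated, here is a minimal sketch in Python; the invoice fields, the budget ledger and the pay_invoice() call are hypothetical placeholders.

```python
# Hypothetical sketch of a monthly invoice-payment bot. The Invoice fields,
# the budget ledger and pay_invoice() are illustrative placeholders, not the
# FBI's actual systems or data model.
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    budget_line: str  # budget line item the payment draws from

def pay_invoice(invoice: Invoice) -> bool:
    """Placeholder for a call into the payment system of record."""
    print(f"Paying {invoice.vendor}: ${invoice.amount:,.2f}")
    return True

def run_monthly_bot(invoices: list[Invoice], budget: dict[str, float]) -> None:
    """Pay each invoice and keep the matching budget line item current."""
    for inv in invoices:
        if budget.get(inv.budget_line, 0.0) < inv.amount:
            print(f"Skipping {inv.vendor}: insufficient funds on {inv.budget_line}")
            continue
        if pay_invoice(inv):
            budget[inv.budget_line] -= inv.amount  # update the line item

budget_lines = {"1100-TRAVEL": 50_000.0, "1200-SERVICES": 120_000.0}
run_monthly_bot([Invoice("Acme Corp", 12_500.0, "1200-SERVICES")], budget_lines)
```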

“It’s the first time we’re actually automating something through robotic process automation. So that’s what makes it so innovative for us is because the bureau doesn’t have bots right now, we were just sort of like putting our toes in that world,” Peter Sursi, head of finance modernization, accounts payable and relocation services, said at the Adobe Government Forum in Washington on Tuesday. “So for us to get one on the finance side for us is pretty exciting. It’ll save us a lot of labor hours.”

The tool would affect all 56 FBI field offices and the approximately 250 task force officers who process financial payments within those offices, as well as FBI customers who are paid through invoices that were previously processed manually, a time-intensive effort.

Sursi said the new finance bot was created in the past two months and is in the final stages of testing, which has energized his team to draw up a longer list of FBI finance processes that could be automated and made much faster.

In March, State Department CIO Kelly Fletcher revealed that her agency had used robotic process automation to cut the processing time for its monthly financial statement from two months to two days.

Speaking at FedScoop’s ITModTalks, Fletcher said financial reporting was one of several areas where the agency is using AI to improve the efficiency of back-office operations, an approach with the potential to substantially improve reporting given State’s federated structure and global operations.

Biden administration announces crackdown on discrimination and bias in AI tools
https://fedscoop.com/biden-administration-announces-crackdown-on-discrimination-and-bias-in-ai-tools/
Tue, 25 Apr 2023
Leaders at four major federal agencies say they will use civil rights and consumer rights laws to take enforcement action against AI systems that perpetuate bias.

Four major federal agencies announced Tuesday that they are teaming up to crack down on the use of artificial intelligence tools that perpetuate bias and discrimination.

The Biden administration will use existing civil rights and consumer rights laws to take enforcement action against AI systems and automated systems that allow discrimination, top leaders within the Justice Department, the Federal Trade Commission, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission pledged on Tuesday.

With AI tools increasingly central to private industry, and potentially soon to government decisions about hiring, credit, housing and other services, top leaders from the four federal agencies warned about the risk of “digital redlining.”

The officials said they were worried that inaccurate data sets and faulty design choices could perpetuate racial disparities, and they pledged to use existing law to combat such risks.

“We’re going to hold companies responsible for deploying these technologies, and making sure that it is all in compliance with existing law. I think we are starting the process of figuring out where we’re identifying potentially illegal activity,” said Rohit Chopra, Director of the Consumer Financial Protection Bureau.

“And we’ve already started some work to continue to muscle up internally, when it comes to bringing on board data scientists, technologists and others, to make sure we can confront these challenges,” Chopra added.

The four federal agencies are taking the lead on holding AI companies and vendors responsible for any harmful behavior because they are the key agencies in charge of enforcing civil rights, non-discrimination, fair competition, consumer protection, and other legal protections for citizens.

Each agency has previously expressed concern about potentially harmful uses of automated systems.

“There is no AI exemption to the laws on the books,” said Federal Trade Commission Chair Lina Khan, one of several regulators who spoke during a news conference to signal a “whole of government” approach to enforcement efforts against discrimination and bias in automated systems.

Khan said the FTC recently launched a new Office of Technology, which is focused on hiring more technologists with the expertise to fully grasp how AI technologies function and can cause harm, giving the agency the in-house capacity to deal with such issues.

AI and automated system companies that are government vendors or contractors could also be targeted by the federal government’s enforcement crackdown.

“So with respect to vendors and employers, obviously, we have very clear enforcement with respect to employers, depending on the facts, and this is true of pretty much every issue that we might look at is very fact intensive. 

“I want to emphasize that there may be liability for vendors as well. And it really depends on how they’re constructed,” said Charlotte Burrows, Chair of the Equal Employment Opportunity Commission (EEOC). 

“There are various legal authorities with respect to vendors and other actors that may be involved in the employment process and developing these tools. So it really just depends on what that relationship is with and what the role that the AI developer or the vendor may have with respect to the employee and processes, both for our authority with respect to interference under, for instance, Title Seven of the Civil Rights Act, or the ADA, which is actually quite a broad interference provision,” Burrows added.

When automation out-delivers IT modernization
https://fedscoop.com/when-automation-out-delivers-it-modernization/
Wed, 15 Feb 2023
Government leaders report that automation has delivered large-scale service improvements faster and at lower cost than big-ticket IT modernization projects.

Government leaders from a growing roster of federal and state agencies are realizing significant benefits from enterprise automation to drive business transformation without having to endure the lumbering pace and high cost of IT modernization projects, according to a new report.

“Automation enables [the U.S.] Army to create new capabilities in legacy systems without investing resources into changing the underlying system,” said Raj G. Iyer, former CIO of the U.S. Department of the Army. Iyer was one of several government officials cited in the report, produced by Scoop News Group for FedScoop and sponsored by UiPath, who detailed how automation is making a significant difference in their organizations.

Iyer, who stepped down from his position at the end of last month, explained how the Assistant Secretary of the Army (Financial Management and Comptroller) office recently completed a pilot program where robotic process automation (RPA) expedited the handling of unmatched financial transactions. The ASA (FM&C) office handles more than one million such transactions per year, according to Iyer. “RPA is expected to save millions of dollars in manual labor each year,” he said.
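
The report doesn’t spell out the pilot’s logic, but the core of automating “unmatched transaction” handling is pairing records that analysts would otherwise match by hand. Below is a minimal, hypothetical sketch; the field names are illustrative only and are not drawn from the Army’s systems.

```python
# Hypothetical sketch of pairing unmatched disbursements with obligations by
# document number and amount. Field names are illustrative placeholders.
def reconcile(disbursements: list[dict], obligations: list[dict]) -> tuple[list, list]:
    """Return (matched pairs, still-unmatched disbursements)."""
    index = {(o["doc_no"], round(o["amount"], 2)): o for o in obligations}
    matched, unmatched = [], []
    for d in disbursements:
        key = (d["doc_no"], round(d["amount"], 2))
        if key in index:
            matched.append((d, index.pop(key)))  # pair found, consume it
        else:
            unmatched.append(d)                  # route to a human analyst
    return matched, unmatched

pairs, leftovers = reconcile(
    [{"doc_no": "W9124-23-0001", "amount": 4300.00}],
    [{"doc_no": "W9124-23-0001", "amount": 4300.00}],
)
print(len(pairs), len(leftovers))  # 1 0
```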

Read the full report.

“I think that automation — specifically using bots — is really starting to take off and provide value to businesses,” added Krista Kinnard, chief of emerging technology in the CIO’s office at the U.S. Department of Labor.

Kinnard and others explained that automation isn’t just speeding up workflows but boosting productivity and improving agency services faster, at lower costs and with less risk than big-ticket IT modernization projects.

“Automation is moving from the edges, all the way inside into the enterprise. That’s a big change,” observed Sunil Madhugiri, chief technology officer at U.S. Customs and Border Protection, where approximately 250 automation “bots” are in production or under development, according to the report. Madhugiri highlighted one instance where automation helped CBP work with international airlines to notify and divert some 239,000 travelers from U.S.-bound flights under Covid travel restrictions during the pandemic.

The report highlights how automation can effectively “operationalize” mission and business processes at federal agencies and deliver cost savings and service improvements that often prove elusive in IT modernization overhauls.

“Modernization has become synonymous with big, ‘rip-and-replace’ efforts, involving new systems, long-term physical transformations that are costly in technology, change management, workforce, opportunity cost, and time to value,” noted Mike Daniels, senior vice president, public sector at UiPath. “[Government agencies] have made huge investments to forklift systems to the cloud. But what’s gotten lost in that process is the need to examine whether those efforts drive a result quicker, faster or better.”

Todd Schroeder, a former chief of digital services at the U.S. Department of Agriculture who now serves at UiPath as public sector vice president, adds that automation platforms not only bring the power of scale to the work agency employees need to get done but can also address process pain points quickly. He cites in the report how the New York Department of Labor, despite a 10-fold increase in temporary staff, couldn’t keep up with demand for unemployment claims during the pandemic — and how deploying UiPath tools not only cut through the backlog but later helped save New York an estimated $12 billion in potential fraud.

Read the full report on how automation is helping government agencies improve mission services.

This article was produced by Scoop News Group for FedScoop and sponsored by UiPath.

IRS seeks tool to help automate solicitation evaluation
https://fedscoop.com/irs-solicitation-evaluation-tool/
Wed, 12 Oct 2022
The agency wants to improve the efficiency of its procurement teams and help them to meet FAR and customer agencies' requirements.

The IRS wants an automated tool for evaluating solicitations to improve the efficiency of its procurement teams, according to a request for information.

Vendors are encouraged to submit their technical capabilities and Federal Risk and Authorization Management Program-certified, commercial-off-the-shelf offerings — especially cloud solutions.

The IRS Office of the Chief Procurement Officer maintains three contract-writing systems: Procurement for Public Sector (PPS), Contract Lifecycle Management (CLM) for the Bureau of Engraving and Printing, and Procurement Request Information System Management (PRISM) for the Treasury Department. The RFI is the agency’s latest attempt to learn how automation might help its procurement teams adhere to the Federal Acquisition Regulation and further standardize vendor evaluation and selection.

Ideal solicitation evaluation tools will handle document management, auditing with analytics and performance metrics, and intelligent automation scoring, while supporting adjectival ratings, such as “excellent” or “acceptable,” that indicate the degree to which proposals meet the customer agency’s standards. The IRS uses General Services Administration vehicles, governmentwide acquisition contracts and federal supply schedules to procure IT for customer agencies, and any tool would need to evaluate solicitations based on the requirements within them.
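
The RFI doesn’t specify a scoring methodology, but the adjectival-rating idea it describes amounts to mapping an evaluation score onto a small set of labels. The sketch below is purely illustrative; the thresholds, labels and 0-100 scale are assumptions, not the IRS’s actual criteria.

```python
# Hypothetical mapping from a numeric evaluation score (0-100) to an
# adjectival rating; thresholds and labels are illustrative only.
RATING_BANDS = [
    (90, "Excellent"),
    (75, "Good"),
    (60, "Acceptable"),
    (0,  "Unacceptable"),
]

def adjectival_rating(score: float) -> str:
    """Return the first band whose floor the score meets or exceeds."""
    for floor, label in RATING_BANDS:
        if score >= floor:
            return label
    return "Unacceptable"

print(adjectival_rating(82))  # -> "Good"
```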

The RFI further asks vendors to provide their timelines for implementing such a solution and examples of three customers receiving similar services. The deadline for submissions is Oct. 13 at 5 p.m. Eastern time.

VA watchdog finds automated IT system errors led agency to incorrectly collect debts from veterans
https://fedscoop.com/va-watchdog-finds-automated-it-system-errors-led-agency-to-incorrectly-collect-debts-from-veterans/
Wed, 07 Sep 2022
The VA’s Office of Inspector General discovered several instances in which the agency collected debts without first providing veterans with legally required notice.

The Department of Veterans Affairs’ internal watchdog released a report Wednesday showing several instances in which automated business rules within the VA’s electronic systems improperly triggered debt collection from veterans.

The VA’s Office of Inspector General (OIG) discovered several instances in which the agency collected debts without first providing veterans with legally required notice, and attributed the errors to automated rules followed by a software program. 

The OIG review team identified three scenarios in which VA improperly reduced payments to veterans to collect debts created under the Veterans Benefits Administration’s (VBA) compensation program: debts collected by reducing the retroactive payments created in the same award, debts collected by reducing the retroactive payments created in a later award, and debts collected by reducing monthly benefit payments. 

“VA officials agreed that the veterans in examples 1 through 3 were entitled to notices of indebtedness and due process before their benefit payments were reduced to recoup the debts,” Larry Reinkemeyer, the VA assistant inspector general, wrote in the report.

“Additionally, VBA officials agreed that creating the debts in these cases was actually incorrect, making collection actions particularly problematic,” he added. 

The VA provides tax-free monthly compensation benefits to veterans in recognition of the effects of disabilities incurred or aggravated during active military service. Debts are created within this benefits program when a compensation decision retroactively reduces a veteran’s payment rate.

When a veteran owes a debt, the VA can withhold all or part of the veteran’s retroactive or ongoing monthly payments to recoup the money but first generally has to give the veteran due process. 

If a veteran disputes the debt or requests a waiver within 30 days of the notification, the VA typically cannot withhold benefit payments until a decision is made on the dispute or waiver request.
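
That due-process sequence is essentially a precondition check, the kind of guard the OIG says the automated business rules skipped. Below is a minimal sketch of what such a check might look like, using hypothetical field names rather than the VA’s actual data model.

```python
# Hypothetical guard expressing the due-process rule described above; the
# field names are illustrative, not the VA's actual systems.
def may_withhold(debt: dict) -> bool:
    """Return True only if collection may begin under the described rule."""
    if debt.get("notice_sent_on") is None:
        return False  # no notice of indebtedness yet, so no collection
    if debt.get("dispute_or_waiver_pending", False):
        return False  # a timely dispute or waiver request is still undecided
    return True

print(may_withhold({"notice_sent_on": "2022-06-01",
                    "dispute_or_waiver_pending": True}))  # -> False
```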

The OIG report asked the VA to inform the OIG of the actions it will take to ensure veterans receive notice and due process before debts are collected going forward.

SecOps automation to modernize cyber threat management
https://fedscoop.com/modernize-federal-government-cyber-threat-management-automated-security-operations/
Wed, 07 Sep 2022
A new report explores how cloud-driven tools help agencies both combat sophisticated cyber threat campaigns and keep up with updated cybersecurity policies.

Pentagon reaches important waypoint in long journey toward adopting ‘responsible AI’
https://fedscoop.com/pentagon-reaches-important-waypoint-in-long-journey-toward-adopting-responsible-ai/
Wed, 29 Jun 2022
Experts weigh in on the department's new Responsible AI Strategy and Implementation Pathway.

There’s a lot to unpack in the Pentagon’s new high-level plan of action to ensure all artificial intelligence use under its purview abides by U.S. ethical standards. Experts are weighing in on what the document means for the military’s pursuit of this crucial technology.

In many ways, the Responsible AI Strategy and Implementation Pathway, released last week, marks the culmination of years of work in the Defense Department to drive the adoption of such capabilities. At the same time, it’s also an early waypoint on the department’s long and ongoing journey that sets the tone for how key defense players will help safely implement and operationalize AI, while racing against competitors with less cautious approaches.

“As a nation, we’re never going to field a system quickly at the cost of ensuring that it’s safe, that it’s secure, and that it’s effective,” DOD’s Chief for AI Assurance Jane Pinelis said at a recent Center for Strategic and International Studies event.

“Implementing these responsible AI guidelines is actually an asymmetric advantage over our adversaries, and I would argue that we don’t need to be the fastest, we [just] need to be fast enough. And we need to be better,” she added.

The term “AI” generally refers to a blossoming branch of computer science involving systems capable of performing complex tasks that typically require some human intelligence. The technology has been widely adopted in society, underpinning maps and navigation apps, facial recognition, chatbots, social media monitoring, and more. And Pentagon officials have increasingly prioritized procuring and developing AI for specific mission needs in recent years.

“Over the coming decades, AI will play a role in nearly all DOD technology, just as computers do today,” Gregory Allen, AI Governance Project director and Strategic Technologies Program senior fellow at CSIS, told FedScoop. “I think this is the right next step.”

The Pentagon’s new 47-page responsible AI implementation plan will inform its work to sort through the incredibly thorny known and unknown issues that could come with fully integrating intelligent machines into military operations. FedScoop spoke with more than a half dozen experts and current and former DOD officials to discuss the nuances within this foundational policy document and their takeaways about the road ahead.

“I’ll be interested in how this pathway plays out in practice,” Megan Lamberth, associate fellow in the Center for a New American Security’s Technology and National Security Program, noted in an interview.

“Considering this implementation pathway as a next step in the Department’s RAI process — and not the end — then I think [its] lines of effort begin to provide some specificity to the department’s AI approach,” she said. “There’s more of an understanding of which offices in the Pentagon have the most skin in the game right now.”

Principles alone are not enough

Following leadership mandates and consultations with a number of leading AI professionals over many months, the Pentagon officially issued a series of five ethical principles to govern its use of the emerging technology in early 2020. At the time, the U.S. military was the first in the world to adopt such AI ethics standards, according to Pinelis. 

A little over a year later, the department reaffirmed its commitment to them and released corresponding tenets that serve as priority areas to shape how the department approaches and frames AI. Now, each of those six tenets — governance, warfighter trust, product and acquisition, requirements validation, the responsible AI ecosystem, and workforce — has been fleshed out with detailed goals, lines of effort, responsible components and estimated timelines via the new strategy and implementation plan.

Source: DOD’s AI Strategy and Implementation Pathway

“Principles alone are not enough when it comes to getting senior leaders, developers, field officers and other DOD staff on the same page,” Kim Crider, managing director at Deloitte, who leads the consulting firm’s AI innovation for national security team, told FedScoop. “Meaningful governance of AI must be clearly defined via tangible ethical guidance, testing standards, accountability checks, human systems integration and safety considerations.”

Crider, a retired major general who served 35 years in the military, was formerly the chief innovation and technology officer for the Space Force and the chief data officer for the Air Force. In her view, “the AI pathway released last week appears to offer robust focus and clarity on DOD’s proposed governance structure, oversight and accountability mechanisms,” and marks “a significant step toward putting responsible AI principles into practice.”

“It will be interesting to see the DOD continue to explore and execute these six tenets as new questions concerning responsible AI implementation naturally arise,” she added.

The pathway’s rollout comes on the heels of a significant bureaucratic shakeup that merged several of DOD’s technology-focused components — Advana, Office of the Chief Data Officer, Defense Digital Service, and Joint Artificial Intelligence Center (JAIC) — under the nascent Chief Digital and Artificial Intelligence Office (CDAO). 

David Spirk, the Pentagon’s former chief data officer who helped inform the CDAO’s establishment, said this pathway’s “emphasis on modest centralization of testing capability and leadership with decentralized execution” is an “indication of the maturity of thought in how the Office of the Secretary of Defense is positioning the CDAO to drive the initiatives successfully into the future when they will be even more important.”

It’s “a clear demonstration of the DOD’s intent to lead the way for anyone considering high consequence AI employment,” Spirk, who is now a special adviser for CalypsoAI, told FedScoop.

Prior to joining CSIS, Allen was Spirk’s colleague in DOD — serving as the JAIC’s director of strategy and policy — where he, too, was heavily involved in guiding the enterprise’s early pursuits with AI. Even where the new pathway’s inclusions seem modest, in his view, “they are actually quite ambitious.”

“The DOD includes millions of people performing hundreds of billions of dollars’ worth of activity,” he said. “Developing a governance structure where leadership can know and show that all AI-related activities are being performed ethically and responsibly, including in situations with life-and-death stakes, is no easy task.”

Beyond clarifying how the Pentagon’s leadership will “know and show” that their strategy is being implemented as envisioned, other experts noted how the pathway provides additional context and distinctions for programs, offices and industry partners to guide their planning as they build toward robust RAI frameworks.

“Perhaps most importantly, the document provides additional structure and nomenclature that industry can utilize in collaboration activities, which will ultimately be required to achieve scale,” Booz Allen Hamilton’s Executive Vice President for AI Steve Escaravage, an early advocate of RAI, told FedScoop.

“I view it as industry’s responsibility to provide the department insights on the next layer of standards and practices to assist the department’s efforts,” he said. 

A journey toward ‘trust’

The Pentagon’s “desired end-state for RAI is trust,” officials wrote in the new pathway. 

Though a clear DOD-aligned definition of the term isn’t included, “trust” is mentioned dozens of times throughout the new plan. 

“In AI assurance, we try not to use the word ‘trust,’” Pinelis told FedScoop. “If you look up ‘trust,’ it has something like 300 definitions, and most of them are very qualitative — and we’re trying to get to a place that’s very objective.”

In her field, experts use the term “justified confidence,” which is evidence-based, better defined and backed by testing and metrics.

“But of course, in some of the like softer kind of sciences and software language, you will see ‘trust,’ and we try to reserve it either for kind of warfighter trust in their equipment, which manifests in reliance — like literally will the person use it — and that’s how I kind of measure it very tangibly. And then we also use it kind of in a societal context of like our international allies trusting that we won’t field weapons or systems that are going to cause fratricide or something along those lines,” Pinelis explained.

While complicated by the limits of language, this overarching approach is meant to help diverse Pentagon AI users have justifiable and appropriate levels of trust in all systems they lean on, which would in turn help accelerate adoption.

Source: DOD’s AI Strategy and Implementation Pathway

“AI only gets employed in production if the senior decision-makers, operators, and analysts at echelon have the confidence it works and will remain effective regardless of mission, time and place,” Spirk noted. During his time at the Pentagon, he came to recognize that “trust in AI is and will increasingly be a cultural challenge until it’s simply a norm — but that takes time.” 

Spirk and the majority of other officials who FedScoop spoke to highlighted the significance of the new responsibilities laid out in the goals and lines of effort for the second tenet in the pathway, which is warfighter trust. Through them, DOD commits to a robust focus on independent testing and validation — including new training for staff, real-time monitoring, harnessing commercially available technologies and more.

“This is one of the most important steps in making sure that [the Office of the Secretary of Defense] is setting conditions to provide the decision advantage the department and its allies and partners need to outpace our competitors at the speed of the best available commercial compute, whether cloud-based or operating in a disadvantaged and/or disconnected comms environment,” Spirk said.  

Allen also noted that tasks under that second tenet “are big.” 

“One of the key challenges in accelerating the adoption of AI for DOD is that there generally aren’t mature testing procedures that allow DOD organizations to prove that new AI systems meet required standards for mission critical and safety critical functions,” he explained. By investing now in maturing the AI test and evaluation ecosystem, DOD can prevent a future process bottleneck where promising AI systems in development can’t be operationally fielded because there is not enough capacity.

“Achieving trust in AI is a continuous effort, and we see a real understanding of this approach throughout the entire plan,” Deloitte’s Crider said. She and Lamberth both commended the pathway’s push for flexibility and an iterative approach.

“I like that the department is recognizing that emerging and future applications of AI may require an updated pathway or different kinds of oversight,” Lamberth noted. 

In her view, all the categories under each line of effort “cover quite a bit of ground.”

One calls for the creation of a DOD-wide central repository of “exemplary AI use cases,” for instance. Others concentrate on procurement processes and system lifecycles, as well as what Lamberth deemed “much-needed talent” initiatives, like developing a mechanism to identify and track AI expertise and completely staffing the CDAO. 

“While all the lines of effort listed in the pathway are important to tackle, the ones that stick out to me are the ones that call for formal activation of key processes to drive change across the organization,” Crider said.  

She pointed to “the call for best practices to incorporate operator input and system feedback throughout the AI lifecycle, the focus on developing responsible AI-related acquisition resources and tools, the use of a Joint Requirements Oversight Council Memorandum (JROC-M) to drive changes in requirement-setting processes and the development of a legislative strategy to ensure appropriate engagement, messaging and advocacy to Congress,” in particular. 

“Each of these lines of effort is critical to the long-term success of the pathway because they help drive systemic change, emphasize the need for resources and reinforce the goals of the pathway at the highest levels of the organization,” she said.

A variety of assigned activities are also associated with what could soon be major military policy revamps. 

For example, the department commits to addressing AI ethics in its upcoming update of the policy on autonomy in weapons systems, DOD directive 3000.09. And it calls for CDAO and other officials to explore whether a review procedure is needed to ensure the warfare capabilities will be consistent with the DOD’s ethics principles.

Spirk and other experts noted that such an assessment would be prudent for the Pentagon.

“As AI is developed and deployed across the department, it will reshape and improve our warfighting capabilities. Therefore, it is critical to consider how AI principles and governance align with overall military doctrine and operations,” Crider said.

Allen added that that line of effort demonstrates how the department recognizes that there are a lot of relevant existing DOD processes, such as weapons development legal reviews and existing safety standards, that apply to all systems — and not just AI-enabled ones.

“The DOD is still assessing whether the right approach is to consider a new, standalone review process focused on RAI or whether to update the existing processes. I’m strongly in favor of the latter approach,” he said. “DOD should build RAI into — not on top of — the existing institutions and processes that ensure lawful, safe and ethical behavior.”

Wait and see

It is undoubtedly difficult for large, complex bureaucratic organizations like the Pentagon to prioritize the implementation of tech-driving strategic plans while balancing other mission-critical work. 

Experts who spoke to FedScoop generally agreed that by identifying specific alignment tasks with existing DOD directives and frameworks from other offices, and outlining who will carry them out, the implementation pathway ensures greater integration and some accountability for everyone to execute on.

Still, some concerns remain. 

“Looking ahead, I think that many of the ambitions of the … pathway are in tension with the department’s technology infrastructure and security requirements. Creating shared repositories and workspaces requires the cloud, and it doesn’t work if data are siloed and access to open applications is restricted,” Melanie Sisson, a fellow in the Brookings Institution’s Center for Security, Strategy, and Technology, told FedScoop.

Spirk also noted that “a vulnerability in compressive oversight and leadership exists here, as the technical talent with domain expertise to understand how to both measure and overcome obstacles to gaps and weaknesses that will be illuminated will likely be significant.” 

To address these and many other unforeseen concerns, DOD could potentially benefit from developing a feedback mechanism and working body among the individuals and teams tasked as operational responsible AI leads, some experts recommended.

“It is important to keep the lines of communication open — both horizontally and vertically. Challenges that may come up during the implementation phase at the team or project level may be common issues across DOD,” Crider said.

The impact of the implementation plan remains to be seen. And investments in people, power and dollars will be needed to effectively guide, drive, test, apply and integrate responsible AI across the enterprise. 

But the officials FedScoop spoke to are mostly hopeful about what’s to come. 

“Looking at the lines of effort and the offices responsible for each, it is clear the department has made strides in establishing offices and processes for responsible AI development and adoption. While a lot of hard work remains, the department continues to show that it is committed to AI,” Lamberth said. 

“I’ll be interested to see how this guidance is communicated to the rest of the department,” she added. “How will it be communicated that these efforts are important across the services, and how will this pathway impact how the services develop and acquire potential AI technologies?” 

What Russia’s invasion of Ukraine is revealing about tech in modern warfare
https://fedscoop.com/what-russias-invasion-of-ukraine-is-revealing-about-tech-in-modern-warfare/
Thu, 19 May 2022
Experts argue that the U.S. government needs to better understand the weaknesses of its autocratic rivals — and then find ways to exploit them.

Russia’s ongoing invasion of Ukraine is teaching national security experts new things about the current status of artificial intelligence and automation in modern warfare — and how to prepare for possible future conflicts with authoritarian regimes.

Much of the devastation so far is the result of conventional military systems. That likely won’t always be the case, former Defense Department officials and military experts warned this week.

“We haven’t seen a conflict on this scale in quite a long time, but many aspects of this conflict really highlight what has been changing in the 21st century. Unmanned systems, remotely piloted systems and autonomous systems were all the sorts of things that some have argued were not going to be a part of a high-intensity fight, they were only going to be relevant to counterinsurgency conflicts. I think that myth has been blown wide open. But what we’ve only seen is the first move, in which there will always be countermoves,” Gregory Allen, a senior fellow with the Center for Strategic and International Studies’ strategic technologies program, said Tuesday at the Nexus 2022 national security symposium.

Allen served as the director of strategy and policy review at the Pentagon’s Joint Artificial Intelligence Center (JAIC) before leaving the department earlier this year.

He recently assessed evidence alleging that Russia is using artificial intelligence-enabled autonomous weapons systems against Ukraine and, ultimately, did not find the claims to be credible. Still, many of the remotely piloted, unmanned systems operating in this conflict have been “really remarkably effective,” Allen noted.

Observations from this initial phase of the war can suggest how nations’ organizational structures and technological investments might need to adapt to ensure competitive military advantage down the line.

“What we’ve been seeing in Ukraine is munitions … where these are kamikaze drones that cost somewhere in the low tens of thousands of dollars a shot, that are annihilating million-dollar tanks at volume. There is a cost and competitiveness revolution going on in military technology, all of which is underpinned by the progress that we’ve seen in commercial digital technology — not least of which is artificial intelligence,” Allen said.

In following Russian-language media over the last few weeks, Allen observed a narrative that he said suggests that, as more electronic warfare systems and drone countermeasures are introduced in this unfolding conflict, pressure is mounting on all sides — but particularly from Russian military organizations — to deploy increasingly autonomous systems.

“I think we’ve seen, throughout history, Russia really underperforming in the early stage of just about every war, and that not necessarily being a great predictor of what the long-term outlook looks like,” Allen noted.

Margarita Konaev, a native Russian speaker and non-resident senior fellow at the Atlantic Council who studies AI-related defense applications and Russian military innovation, said right now she feels like she and other analysts have “gotten a lot of things wrong.”

“If you are looking at the performance of the Russian military right now, it is very difficult to tell that the last decade and a half has been in fact dedicated to significant reforms that focus on professionalization, on new equipment, autonomous capabilities, a lot of robotics, unmanned systems, electronic warfare, AI for command and control, information, cyber warfare. There were grand expectations, and we have not seen them. And so it’s a great point of reflection for the community that has studied Russia,” she said.

In Konaev’s view, the invasion at this point is highlighting a sharp difference between development of military technology and actual adoption. 

“What we’re seeing right now is that the technical barriers to innovation are really not the most significant barriers to the use and scaling and integration of some of these sophisticated and advanced technologies and operations. A lot of it has to do with institutional, bureaucratic, cultural, human trust issues, let alone between humans and machines,” she noted.

By all assessments, Russia has access to some of the most sophisticated electronic warfare capabilities in the world. “So the fact that the Ukrainian military is able to inflict such massive damage with quite rudimentary and relatively cheap drones is significant,” she said.

Capabilities to counter automated technologies must be considered a potential future priority in Russia’s modernization pursuits, according to Konaev.

Looking ahead, Konaev is worried about “the pendulum swinging to a point where we completely underestimate what comes next from the Russian perspective.”

Russia is a nuclear power and still has access to a significant amount of conventional fighting capabilities, she noted, adding that “the relationship between Russia and China is also going to be very interesting and complicated.”

While he was at the JAIC, Allen made a number of trips to China where he spoke with dozens of Chinese officials and experts about artificial intelligence. 

“The Chinese military is in the midst of a major AI-enabled modernization effort. They are really changing a lot of the way that they do what they do,” he said.

At one recent conference, a senior Chinese weapons executive Allen had met told a global audience that “in the future, there will be no people fighting the wars,” and that his China-based company is building autonomous systems now to prepare. 

Allen also pointed to a recent report that China’s leader Xi Jinping was shaken by what he has seen in the Ukraine-Russia conflict, where commercially derived drones are taking out expensive military-designed hardware.

“That work may or may not be true, but it is absolutely the case that the Chinese military has drawn a lot of lessons from what we’re seeing in Ukraine, and that’s why time matters a lot,” Allen said. “I think we should expect a lot to change in a relatively short period of time.”

August Cole, a non-resident senior fellow at the Atlantic Council and author of novels about the use of emerging military technologies, added that any future conflict with China is going to be “fundamentally decided by data.”

Liza Tobin, senior director of research and analysis for economy at the Special Competitive Studies Project — who previously served on National Security Council staff as China director — said Beijing has a comprehensive plan to “control the networks, the platforms, and importantly, the standards of this emerging digital economy.”

To China, data marks a new source of innovation and economic growth. So much so, Tobin noted, that the nation’s leaders have updated their Marxist theory to add data as a fourth factor of production. 

“For those of you who may be rusty on your Marxist theory, the original three are land, labor and capital. So, when you put them all together in creative ways, it produces economic growth. But unfortunately, the Chinese economy is slowing. The era of easy industrialization and demographic growth is over. So they can’t squeeze any more marginal productivity out of land, labor and capital. Enter data, this new fourth factor of production. And so they are betting that this is a way out of the middle income trap and that by exploiting the many benefits and opportunities of data, they can actually grow their economy in ways that we can’t,” she explained.

National Vetting Center piloting automation of citizenship verification
https://fedscoop.com/nvp-automating-citizenship-determinations/
Tue, 05 Apr 2022
“We want to make sure that we are protecting privacy, to the extent we’re supposed to, when it comes to U.S. persons," said a DHS official.

Customs and Border Protection plans to pilot technology that would automate the National Vetting Center’s process for verifying whether someone is a U.S. citizen.

The Department of Homeland Security’s Office of Intelligence & Analysis is assisting with the pilot, which is in the late planning stage, of an automation that will be available in a year, according to Chief Information Security Officer Eric Sanders.

President Trump established the NVC in 2018 to streamline information sharing between the intelligence community (IC), agencies and law enforcement when determining the threat posed by people crossing U.S. borders. That calculus changes when dealing with a U.S. citizen.

“We want to make sure that we are protecting privacy, to the extent we’re supposed to, when it comes to U.S. persons,” Sanders said, during an ATARC panel discussion Tuesday.

I&A is one of nine DHS components with an intelligence mission and the only one for which it is the sole mission, providing information to the IC and state, local, tribal and territorial governments. The office helped CBP create the NVC with a focus on automating vetting, which sped up the process for Afghan refugees.

While facial recognition isn’t part of the NVC’s process, to Sanders’ knowledge, automation, especially using microservices, helps agencies share intelligence better and faster.

“Whereas before they had to work manually with the FBI and [the National Counterterrorism Center] to adjudicate somebody wanting to come into the country, we’re now able to automate that across the IC to make sure that we’re getting a holistic understanding of the person or persons that are trying to enter the country,” Sanders said.

Sanders also wants to automate the assessment and authorization of new security capabilities, particularly low-risk ones, freeing up employees to focus on bigger problems.

“Whether you’re talking about the [National Security Memorandum] or the [Cybersecurity] Executive Order and zero trust, you’re not going to get there without automation,” he said.

Role-based access controls aren’t enough in zero-trust environments. Attributes need to be assigned to people and things to make access decisions in real time with large volumes of data coming in quickly, Sanders said.
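
In code terms, the shift Sanders describes is from asking whether a role grants access to computing a decision from attributes of the requester, the resource and the context at request time. The sketch below is a minimal, generic illustration of that pattern, not DHS’s implementation; the attribute names, levels and the single policy rule are assumptions.

```python
# Minimal, generic sketch of an attribute-based access decision; attribute
# names, levels and the policy itself are illustrative only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def allow(subject: dict, resource: dict, context: dict) -> bool:
    """Decide access from attributes of the person, the thing and the moment."""
    return (
        LEVELS[subject["clearance"]] >= LEVELS[resource["classification"]]
        and context.get("device_compliant", False)
        and context.get("session_risk", "high") == "low"
    )

print(allow(
    {"clearance": "SECRET"},
    {"classification": "CONFIDENTIAL"},
    {"device_compliant": True, "session_risk": "low"},
))  # -> True
```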

I&A’s priority is automating data sharing between domains so it can continue to trust people across environments over time, as threat actors’ tactics get more sophisticated. That requires monitoring even the low-level environments that threat actors access first, before moving into high-level ones, Sanders said.

The task is easier to do in some environments than others, with I&A considering the use of tokens or other cost-effective solutions in line with the IC’s future state.

“A lot of these classified systems are inside buildings where multi-factor is harder to do,” Sanders said. “I can’t use my cellphone for multi-factor authentication in a secure environment.”
