AI Archives | FedScoop
https://fedscoop.com/tag/ai/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

AI talent role, releasing code, deadline extension among additions in OMB memo
https://fedscoop.com/ai-talent-role-releasing-code-deadline-extension-among-additions-in-omb-memo/
Fri, 29 Mar 2024 16:40:52 +0000

Requiring the release of custom AI code, designating an "AI Talent Lead," and extending deadlines were among the changes made to the final version of a White House memo on AI governance.

Additions and edits to the Office of Management and Budget’s final memo on AI governance create additional public disclosure requirements, provide more compliance time to federal agencies, and establish a new role for talent.

The policy, released Thursday, corresponds with President Joe Biden’s October executive order on AI and establishes a framework for federal agency use and management of the technology. Among the requirements, agencies must now vet their AI uses for risks, expand what they share in their annual AI use case inventories, and select a chief AI officer.

While the final version largely tracks with the draft version that OMB published for public comment in November, there were some notable changes. Here are six of the most interesting alterations and additions to the policy: 

1. Added compliance time: The new policy changes the deadline for agencies to be in compliance with risk management practices from Aug. 1 to Dec. 1, giving agencies four more months than the draft version. The requirement states that agencies must implement risk management practices or stop using safety- or rights-impacting AI tools until the agency is in compliance. 

In a document published Thursday responding to comments on the draft policy, OMB said it received feedback that the August deadline was "too aggressive" and that the timeline didn't account for action OMB is expected to take later this year on AI acquisition.

2. Sharing code, data: The final memo adds an entirely new section requiring agencies to share custom-developed AI code and model information on an ongoing basis. Agencies must "release and maintain that code as open source software on a public repository" under the memo, unless sharing it would pose certain risks or it's restricted by law, regulation, or contract.

Additionally, the memo states that agencies must share and release data used to test AI if it’s considered a “data asset” under the Open, Public, Electronic and Necessary (OPEN) Government Data Act, a federal law that requires such information to be published in a machine-readable format.

Agencies are required to share whatever information possible, even if a portion of the information can’t be released publicly. The policy further states that agencies should, where they’re able, share resources that can’t be released without restrictions through federally operated means that allow controlled access, like the National AI Research Resource (NAIRR).

3. AI Talent Lead: The policy also states agencies should designate an “AI Talent Lead,” which didn’t appear in the draft. That official, “for at least the duration of the AI Talent Task Force, will be accountable for reporting to agency leadership, tracking AI hiring across the agency, and providing data to [the Office of Personnel Management] and OMB on hiring needs and progress,” the memo says. 

The task force, which was established under Biden’s AI executive order, will provide that official with “engagement opportunities to enhance their AI hiring practices and to drive impact through collaboration across agencies.” The memo also stipulates that agencies must follow hiring practices in OPM’s forthcoming AI and Tech Hiring Playbook.

Biden’s order placed an emphasis on AI hiring in the federal government, and so far OPM has authorized direct-hire authority for AI roles and outlined incentives for attracting and retaining AI talent. 

4. Aggregate metrics: Agencies and the Department of Defense will both have to “report and release aggregate metrics” for AI uses that aren’t included in their public inventory of use cases under the new memo. The draft version included only the DOD in that requirement, but the version released Thursday added federal agencies.

Those disclosures, which will be annual, will provide information about how many of the uses are rights- and safety-impacting and their compliance with the standards for those kinds of uses outlined in the memo. 

The use case inventories, which were established by a Trump-era executive order and later enshrined into federal statute, have so far lacked consistency across agencies. The memo and corresponding draft guidance for the 2024 inventories seek to enhance and expand those reporting requirements.

5. Safety, rights determinations: The memo also added a new requirement that agencies validate the determinations and waivers that chief AI officers (CAIOs) make on safety- and rights-impacting use cases, and publish a summary of those decisions on an annual basis.

Under the policy, CAIOs can determine that an AI application presumed to be safety- or rights-impacting — which includes a wide array of uses such as election security and conducting biometric identification — doesn’t match the memo’s definitions for what should be considered safety- or rights-impacting. CAIOs may also waive certain requirements for those uses.

While the draft stipulated that agencies should report lists of rights- and safety-impacting uses to OMB, the final memo instead requires the annual validation of those determinations and waivers and public summaries.

In its response to comments, OMB said it made the update to address concerns from some commenters that CAIOs "would hold too much discretion to waive the applicability of risk management requirements to particular AI use cases."

6. Procurement considerations: Three procurement recommendations related to test data, biometric identification, and sustainability were also added to the final memo. 

On testing data, OMB recommends agencies ensure that developers and vendors don't train an AI system on the same test data the agency may use to evaluate it. For biometrics, the memo also encourages agencies to assess risks and request documentation on accuracy when procuring AI systems that use identifiers such as faces and fingerprints.
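The memo doesn't prescribe how agencies should verify that separation, but as a rough sketch, an agency could compare fingerprints of a vendor's training records against its own held-out evaluation set; the record format and hashing approach below are illustrative assumptions, not anything specified by OMB.

```python
import hashlib

# Hypothetical check: flag evaluation records that also appear in a vendor's
# training data, without either party sharing raw records.
def fingerprints(records: list[str]) -> set[str]:
    """Hash normalized records so only digests need to be exchanged."""
    return {hashlib.sha256(r.strip().lower().encode()).hexdigest() for r in records}

def leaked_test_records(train: list[str], test: list[str]) -> set[str]:
    """Return digests present in both the training and evaluation sets."""
    return fingerprints(train) & fingerprints(test)

vendor_training = ["applicant 123 approved", "applicant 456 denied"]
agency_eval = ["applicant 456 denied", "applicant 789 approved"]
leaked = leaked_test_records(vendor_training, agency_eval)
print(f"{len(leaked)} evaluation record(s) found in training data")  # prints 1
```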

And finally on sustainability, the memo includes a recommendation that agencies consider the environmental impact of “computationally intensive” AI systems. “This should include considering the carbon emissions and resource consumption from supporting data centers,” the memo said. That addition was a response to commenters who wanted the memo to expand risk assessment requirements to include environmental considerations, according to OMB.

How automation and AI are streamlining traditional government IT modernization
https://fedscoop.com/how-automation-ai-streamline-government-it-modernization/
Wed, 20 Mar 2024 19:30:00 +0000

A new report highlights how automation and process mining tools give agencies, including USDA, IRS and the U.S. Navy, new abilities to modernize operations.

Federal agencies are undertaking the "largest wholesale modernization in government history," a former government IT leader says in a new report. At the same time, agency leaders are coming to terms with the reality that the traditional model for IT modernization, involving years of planning and execution, is no longer sustainable.

Fortunately, advances in process automation and AI are giving government agencies new capabilities to identify system bottlenecks and streamline business and operations processes in ways that can improve business and mission outcomes in a fraction of the time and cost of traditional IT modernization projects.

Read the report.

Today’s business process mining and automation tools allow “executives to shift their dependence on outsourced knowledge to in-house control for continuous problem-solving,” according to Todd Schroeder, formerly a U.S. Department of Agriculture IT systems chief who is now vice president for public sector at UiPath. “That translates into a radically different time-to-value modernization quotient — and a radically lower cost structure,” he says in the report produced by Scoop News Group and underwritten by UiPath.

The report “How Automation and AI are Changing the Traditional Approach to Government IT Modernization” highlights how robotic process automation has evolved from a tool to streamline redundant tasks such as financial accounting work to what has increasingly become an enterprise-wide effort to improve mission outcomes.

One example cited in the report is the work underway at the USDA's Intelligent Automation Center of Excellence office. The office is automating routine processes across the department and fostering a rising generation of "citizen developers" who automate work processes in their own jobs.

The report also highlights how automation work that began in the Navy’s Financial Management and Comptroller’s Office is now expanding to improve operations in other Naval support offices and between different departments in government.

Schroeder says agency leaders are on the verge of realizing even greater capabilities with UiPath’s push into AI. UiPath’s AI Trust Layer platform, he says, provides customers with a new level of “auditability, traceability, observability, and replicability” when applying AI to business processes.

“This is the moment,” says Schroeder, “when agency leaders not only have the means to rethink how they modernize but reimagine how federal workers can accomplish their work in new and more effective ways. And that’s critical if the government is to catch up and meet the needs of society’s requirements.”

Download and read the full report.  

This article was produced by Scoop News Group for FedScoop and sponsored by UiPath.

Commerce launches AI safety consortium with more than 200 stakeholders
https://fedscoop.com/commerce-launches-ai-safety-consortium/
Thu, 08 Feb 2024 20:58:02 +0000

The consortium operates under NIST's AI Safety Institute and will contribute to actions in Biden's AI executive order, the agency said.

The Department of Commerce announced a new consortium for AI safety that has participation from more than 200 companies and organizations, as the Biden administration continues its push to develop guardrails for the technology.

The consortium, which was launched Thursday, is part of the National Institute of Standards and Technology’s AI Safety Institute and will contribute to actions outlined in President Joe Biden’s October AI executive order, the department said in an announcement. That will include the creation of “guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content,” the agency said.

“The job of the consortia is to ensure that the AI Safety Institute’s research and testing is fully integrated with the broad community,” Secretary of Commerce Gina Raimondo said at a press conference announcing the consortium. The work that the safety institute is doing can’t be “done in a bubble separate from industry and what’s happening in the real world,” she added. 

Raimondo also highlighted the range of participants in the consortium, calling it “the largest collection of frontline AI developers, users, researchers, and interested groups in the world.”

The consortium’s participants are companies, academic institutions, unions, nonprofits, and other organizations. They include entities such as Amazon, IBM, Apple, OpenAI, Anthropic, Massachusetts Institute of Technology, and AFL-CIO Technology Institute (which is listed as a provisional member).

The announcement comes after the safety institute officially got its first leaders. On Wednesday, the Department of Commerce announced Elizabeth Kelly would lead the institute as its director and named Elham Tabassi to serve as chief technology officer. The institute was established last year at the direction of the administration.

After the Thursday press conference, Tabassi told reporters that as the department makes progress on the actions outlined in Biden’s AI order, they are looking to the consortium and institute to “continue to give a long-lasting approach” to those actions.

Participants welcomed the announcement, lauding it as a positive step toward responsible AI.

“The new AI Safety Institute will play a critical role in ensuring that artificial intelligence made in the United States will be used responsibly and in ways people can trust,” Arvind Krishna, IBM’s chairman and chief executive officer, said in a statement. 

John Brennan, Scale AI’s public sector general manager, said in a statement that the company “applauds the Administration and its Executive Order on AI for recognizing that test & evaluation and red teaming are the best ways to ensure that AI is safe, secure, and trustworthy.” 

Meanwhile, David Zapolsky, Amazon's senior vice president of global public policy and general counsel, said in a blog post that the company is working with NIST in the consortium "to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote development of trustworthy AI and its responsible use."

Microsoft's Brad Smith said AI 'homework' from White House helped speed pace of action
https://fedscoop.com/microsoft-ai-white-house-davos/
Thu, 18 Jan 2024 15:11:24 +0000

The tech giant's vice chair and president complimented White House efforts to see what companies were capable of in terms of AI safety and security during a panel discussion at the World Economic Forum's annual meeting.

The White House’s engagement with companies on their artificial intelligence capabilities — including giving those partners a “homework” assignment — helped speed up the pace of action on the technology, Microsoft Vice Chair and President Brad Smith said at the World Economic Forum on Wednesday.

When the Biden administration brought four companies, including Microsoft, to the White House in May to discuss AI, it gave those firms “homework assignments” to show what they were prepared to do to address safe, secure, and transparent use of the technology, Smith said on a panel about AI regulation around the world.

Though the assignment was due by the end of the month, Smith recalled that Microsoft was “proud” to have submitted a first draft quickly. The following day, however, the feedback came in.

“We sent it in on Sunday, and on Monday morning I had a call with [White House Office of Science and Technology Policy Director Arati Prabhakar and U.S. Secretary of Commerce Gina Raimondo], and they said, ‘Congratulations, you got it in first. You know what your grade is? Incomplete,’” Smith said. Prabhakar was also on the Wednesday panel in Davos, Switzerland.

The officials, he said, told Microsoft to build upon what they submitted. “And it broke the cycle that often happens when policymakers are saying ‘do this’ and industry is saying ‘that’s not practical.’ And especially for new technology that was evolving so quickly, it actually made it possible to speed up the pace,” Smith said.

Engagement with companies has been a key aspect of the Biden administration’s efforts to develop a U.S. policy for AI use and regulation, including obtaining voluntary commitments from firms that they’ll manage the risks posed by the budding and rapidly growing technology. 

“I don’t think that all of these governments would have gotten as far as they did by December if you hadn’t engaged some of the companies in that way,” Smith said.

Smith’s comment came after Prabhakar addressed the administration’s work with companies on the Wednesday panel, saying that Microsoft and others are on the “leading edge” of the technology. But she also noted that the administration engaged with small companies, civil society, workers, labor unions, and academia.

“I actually think this is an important part of our philosophy of regulation and governance, is not to just do it top-down and sit in our offices and make up answers,” Prabhakar said. “The way effective governance happens is with all those parties at the table.”

HHS maintains deadline for AI transparency requirements in new tech certification rule
https://fedscoop.com/hhs-maintains-ai-transparency-requirement-deadline/
Mon, 18 Dec 2023 22:06:23 +0000

Already certified health IT will need to comply with new artificial intelligence and algorithm requirements by the end of next year under an HHS final rule, underscoring the administration's focus on the technology.

A final Department of Health and Human Services rule will require developers seeking certification for health IT that employs artificial intelligence or other algorithms to meet certain transparency criteria by the end of 2024, despite calls in comments to push that deadline back.

While the final rule from HHS’s Office of the National Coordinator for Health Information Technology extended deadlines for other requirements between the proposed and final versions — such as requirements related to a new baseline standard for the health IT certification program — it maintained the end-of-next-year deadline for the AI and algorithms portion, underscoring the Biden administration’s focus on regulating the nascent and growing technology.

Under the rule — called Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, or HTI-1 — developers will need to update health IT currently certified under ONC’s old requirements by Dec. 31, 2024. Those new requirements mandate that tools used to aid decision-making that use AI and algorithms must share information about how the technology works as part of the agency’s certification process.

“I think it will be very interesting, you know, with that deadline coming a year out to see how the vendor community responds, and … can they make these new requirements work?” Jonathan French, senior director of public policy and content development at the Healthcare Information and Management Systems Society (HIMSS), said in an interview. 

In July, HIMSS recommended the agency delay the deadline to 2026. 

ONC Deputy National Coordinator Steven Posnack acknowledged the change in several deadlines in a call with reporters after the final rule was released last week, saying the agency “sought to space and pace many of the different requirements over time” to give industry time to make incremental adjustments. 

But Posnack added that “the algorithm-related transparency was a high priority for our office, the secretary, and administration. That’s one of the ones that we required straight out of the gate within a one-year time period.”

The algorithmic and AI requirements for decision support interventions (DSI) in the final rule come as the Biden administration has intensified its focus on regulating the technology. For example, the administration last week also announced voluntary commitments from health care companies to harness AI while managing its risks.

Generally, the algorithm requirements in the rule are aimed at promoting transparency and responsible AI use in health IT, such as electronic health records systems. In an interview with FedScoop in June, Micky Tripathi, the national coordinator for health IT, described the requirements as a “nutrition label” for algorithms.

In addition to the new AI and algorithm certification requirements, the rule also makes data interoperability updates to its health IT certification process and implements provisions under the 21st Century Cures Act.

ONC’s certification program is voluntary, but it’s incentivized by requirements that hospitals and physicians use certified systems when participating in certain Centers for Medicare and Medicaid Services payment programs. In a press release about the rule last week, ONC said health IT that it has certified “supports the care delivered by more than 96% of hospitals and 78% of office-based physicians around the country.”

While the deadline remains, industry experts analyzing the 916-page document said the final rule so far appears to address other concerns around the algorithm transparency portion. French, for example, pointed to more detailed information on the “source attribution” requirements and more detail on testing information. 

During the public comment period, the ONC received responses that said the rule’s AI and algorithmic requirements went too far, and others that said they didn’t go far enough. The American College of Cardiology called the proposal “overly broad,” whereas Ron Wyatt, the chief scientist and medical officer at the Society to Improve Diagnosis in Medicine, argued that the rule should go further, requiring information provided about the algorithms to be made publicly available. 

The final rule does include changes, according to ONC. “In response to public comments, the final DSI criterion includes clearer [and] more precisely scoped requirements for health IT developers,” ONC said in a fact sheet accompanying the rule. “In particular, the final criterion requires that health IT developers are responsible for only the predictive DSIs that they supply as part of their certified health IT.”

Joseph Cody, associate director for health IT and digital health policy at the American College of Cardiology, said ONC “made steps in the right direction” with some of the changes. 

In particular, he said "they did a much better job of delineating the difference between" evidence-based and predictive DSIs, though some of the definitions are "still overly broad."

“If you’re expecting clinicians to spend a lot of time to go through and look at all the different components that are required to be publicly transparent and available to them, it becomes very hard for that clinician to be able to spend that time,” Cody said. He added that ACC is looking forward to having conversations with federal agencies about additional steps.

The final rule also includes ongoing maintenance and risk management requirements for health IT developers to keep “source attributable” information in the DSIs up-to-date, according to ONC. Health IT will be required to comply with the maintenance certification starting January 2025.

Rebecca Heilweil contributed to this article.

GAO preparing report on agency artificial intelligence use case inventories
https://fedscoop.com/gao-preparing-agency-ai-use-case-inventory-report/
Wed, 06 Dec 2023 22:09:49 +0000

The report is expected to come as soon as next week, the Government Accountability Office's Kevin Walsh said.

The Government Accountability Office is getting ready to publish a report on certain federal agencies’ progress with artificial intelligence use case inventories. 

The report will focus on the Chief Financial Officer Act agencies — not including the Defense Department — as well as the Office of Management and Budget and the Office of Science and Technology Policy, and is expected to be published next week, Kevin Walsh, a director with the GAO Information Technology and Cybersecurity team conducting the report, said in an email to FedScoop. 

Speaking on a panel at Informatica’s Data in Action Summit on Wednesday, Stephen Sanford, the GAO’s managing director of strategic planning and external liaison, described the forthcoming report as a look at how the CFO Act agencies are doing with the AI inventory process and where they are with requirements in various AI executive orders and legislation.

Federal agencies' annual AI use case inventories, which were initially required under a Trump-era executive order, have so far lacked consistency and received criticism from academics and advocates as a result. A major 2022 report by several Stanford researchers analyzed progress on AI requirements under statute and existing executive orders, detailing compliance issues with the inventories in the first year they were required. FedScoop has continued to report on issues with those public postings in recent months.

Following President Joe Biden’s recent AI executive order, the White House said the inventories are intended to be a more expansive resource for the government and the public. A list published on AI.gov ahead of that executive order consolidated the public inventories into one file, totaling more than 700 uses across the government. 

The coming report was initiated by GAO under the Comptroller General’s authority, Walsh said, as opposed to a congressional request. The Comptroller General’s authority is typically used for work on emerging issues, broad interest areas for Congress, and to respond to events of national or international significance, Walsh explained.

While DOD won’t be included in the new report, GAO has previously looked at the department’s AI-related work. Earlier this year, the GAO published a report recommending that DOD establish department-wide guidance for AI acquisition, and previously published a report recommending improvements to its own AI inventory process.

In addition to the use case report, Sanford said the GAO is also preparing a report on how the Department of Homeland Security and some of its components are doing with the implementation of the watchdog’s AI accountability framework. He said they expect to put it out “early next year.”

“We have a lot in the pipeline,” Sanford said. “The goal here is, I think, as a federal community, to try to learn from all this work. And these are, I think, some of the first wide-scoped evaluative jobs that we’re … doing that are going to be coming out.”

Using AI and Generative AI for cloud-based modernization of federal agencies
https://fedscoop.com/using-ai-and-generative-ai-for-cloud-based-modernization-of-federal-agencies/
Mon, 27 Nov 2023 13:30:00 +0000

Four key challenges that artificial intelligence can help federal agencies overcome.

As cloud computing environments expand and diversify, government agencies are confronted with a growing array of cloud services, options, and offerings. To navigate this complexity effectively, agencies must formulate well-informed strategies aimed at understanding, anticipating, rationalizing, and optimizing major cloud architecture decisions to reduce technical debt and create seamless interaction with applications.

Types of clouds

Sandeep Shilawat, vice president of cloud technology at Octo.

Strategy development relies on understanding basic cloud architecture. There are four main types of cloud computing architectures: private clouds, public clouds, hybrid clouds, and multi-clouds. Multi-clouds use multiple public cloud service providers (CSPs); hybrid clouds combine an on-premises data center with a public cloud; and hybrid multi-clouds mix private and public cloud services. Each architecture has a different impact on an agency.

The intersection of AI and cloud computing

The synergy between AI and cloud computing has been evident since the inception of these technologies. Cloud computing facilitated the rise of AI, but in recent years, attention has shifted to leveraging AI for cloud management and cybersecurity. AI has played a pivotal role in the field of AIOps (AI for IT Operations), aiding in the creation of various AI-driven copilots for tasks like security and modernization.

Utilizing AI to tackle cloud modernization challenges in the federal market

Throughout the cloud modernization journey, AI and generative AI can help ensure leaders make informed decisions and expedite the elimination of technical debt. Here are key challenges these technologies can help overcome.

Challenge #1: Legacy applications

Over the past few years, cloud adoption has become the default choice for new applications. However, a significant number of legacy applications in the U.S. federal market remain unmigrated for various reasons, which increases technical debt. There is a strong case for using AI to expedite the elimination of this technical debt to align with mission objectives. Additionally, there is tremendous potential for generative AI to migrate on-premises infrastructures to public or private clouds.

The primary challenge with legacy applications in the federal sector is the discovery and analysis process. To overcome this, deep learning tools can be used to crawl through legacy environments, collecting data to create a comprehensive application overview within complex infrastructures. The true value of such deep learning techniques emerges when applied at an enterprise scale. For years, creating an enterprise-wide view of the application portfolio has been a challenge for enterprise architects involved in modernization. But deep learning can create a reporting system that provides a digital modernization roadmap for the enterprise. Generative AI can then leverage these deep learning datasets to facilitate informed modernization decisions through interactive techniques.
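As a hypothetical sketch of that reporting step (the article doesn't describe specific tooling), assume a discovery crawl has already written one JSON manifest per application; aggregating them into an enterprise-wide view might look like this, with the manifest fields and legacy-language list as illustrative assumptions.

```python
import json
from collections import Counter
from pathlib import Path

# Assumed manifest format, one JSON file per discovered application, e.g.:
# {"name": "payroll", "language": "COBOL", "last_release": 2009}
LEGACY_LANGUAGES = {"COBOL", "Fortran", "VB6"}  # illustrative, not from the report

def load_manifests(directory: str) -> list[dict]:
    """Read every per-application manifest produced by the discovery crawl."""
    return [json.loads(p.read_text()) for p in Path(directory).glob("*.json")]

def portfolio_overview(apps: list[dict]) -> dict:
    """Roll per-application data up into an enterprise-wide modernization view."""
    languages = Counter(app["language"] for app in apps)
    legacy = [app["name"] for app in apps if app["language"] in LEGACY_LANGUAGES]
    return {
        "total_applications": len(apps),
        "languages": dict(languages),
        "legacy_candidates": sorted(legacy),  # a first cut at a modernization roadmap
    }

if __name__ == "__main__":
    print(json.dumps(portfolio_overview(load_manifests("manifests")), indent=2))
```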

Challenge #2: Choosing a path forward

Another critical concern in cloud modernization is selecting the optimal path. Binary choices between modernization and migration may not always serve the best interests of the organization or agency. While certain migration strategies make sense at the application level, adopting a portfolio view can lead to more optimal decisions for enterprises. 

Various generative AI-based techniques can resolve this dilemma at the portfolio level through an automated application rationalization framework. A generative-AI-based query system will greatly assist in making appropriate migration choices that produce the desired result.
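In its simplest form, a rationalization framework maps each application's attributes to one of the common migration dispositions (the "six Rs"). The rule-based sketch below is a hand-rolled stand-in for the generative-AI-driven framework the article envisions; the attribute names and rule order are hypothetical.

```python
# Illustrative rule-based rationalization: map portfolio attributes to a
# migration disposition. Attribute names and rule order are assumptions.
def recommend_disposition(app: dict) -> str:
    if app.get("retirement_planned"):
        return "retire"
    if app.get("saas_alternative"):
        return "replace"      # adopt a SaaS offering instead
    if app.get("compliance_locked"):
        return "retain"       # keep on-premises for now
    if app.get("cloud_native_ready"):
        return "refactor"     # rework for cloud-native services
    if app.get("minor_changes_only"):
        return "replatform"   # small adjustments, e.g. a managed database
    return "rehost"           # default lift-and-shift

portfolio = [
    {"name": "grants-portal", "cloud_native_ready": True},
    {"name": "mainframe-ledger", "compliance_locked": True},
]
for app in portfolio:
    print(app["name"], "->", recommend_disposition(app))
```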

Challenge #3: Cost management

While the true power of the cloud lies in its elasticity, over time this elasticity can result in "cloud sprawl," which increases costs and can add to technical debt.

Implementing AI and generative AI tools that manage cloud costs through regular optimization recommendations for infrastructure can help enterprises save substantially and ensure compliance with federal contracts. A continuous, intelligent view of costs can aid in planning project budgets to minimize the impact of migration costs.
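A minimal sketch of such a recommendation pass, assuming utilization metrics have already been pulled from a provider's monitoring or billing API (the thresholds and record fields are illustrative, not from any particular tool):

```python
# Illustrative rightsizing pass over per-instance utilization metrics.
def rightsizing_recommendations(instances: list[dict]) -> list[str]:
    recs = []
    for inst in instances:
        # Thresholds below are assumptions for the sketch, not vendor guidance.
        if inst["avg_cpu_pct"] < 10 and inst["avg_mem_pct"] < 20:
            recs.append(f"{inst['id']}: mostly idle, consider stopping or downsizing")
        elif inst["avg_cpu_pct"] < 40:
            recs.append(f"{inst['id']}: consider one instance size smaller")
    return recs

usage = [
    {"id": "i-app01", "avg_cpu_pct": 6, "avg_mem_pct": 12},
    {"id": "i-etl02", "avg_cpu_pct": 35, "avg_mem_pct": 55},
]
print("\n".join(rightsizing_recommendations(usage)))
```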

Challenge #4: Compliance

Ensuring cloud compliance is crucial in federal markets where regulations and security measures are, by necessity, always top of mind.

Generative AI can produce template-based infrastructure-as-code, forming the foundation for consistent, secure, and best-practice cloud environments. Following the creation of cloud infrastructure by a cloud AI companion, another generative AI tool can autonomously scan for threats, compliance with corporate policies, adherence to industry best practices, and alignment with cloud provider frameworks such as the AWS Well-Architected Framework. Scanning for misconfigurations is especially vital, as those account for a significant portion of cloud security challenges.
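A misconfiguration scan of that kind can be pictured as policy checks applied to parsed infrastructure-as-code resources. The resource schema and rules below are simplified assumptions rather than any specific framework's checks:

```python
# Simplified policy checks over parsed infrastructure-as-code resources.
# Resource shapes and rules are illustrative, not a specific product's.
def scan_for_misconfigurations(resources: list[dict]) -> list[str]:
    findings = []
    for res in resources:
        if res["type"] == "storage_bucket" and res.get("public_access"):
            findings.append(f"{res['name']}: storage bucket allows public access")
        if res["type"] == "security_group":
            for rule in res.get("ingress", []):
                if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                    findings.append(f"{res['name']}: SSH open to the internet")
        if res["type"] == "database" and not res.get("encrypted_at_rest", False):
            findings.append(f"{res['name']}: database not encrypted at rest")
    return findings

template = [
    {"type": "storage_bucket", "name": "audit-logs", "public_access": True},
    {"type": "security_group", "name": "web-sg",
     "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
]
for finding in scan_for_misconfigurations(template):
    print("FINDING:", finding)
```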

Beyond the dashboard

Managing a cloud infrastructure that consists of millions of resources is a daunting task when relying solely on dashboards and spreadsheets, and it carries significant risks. Many cloud operation management vendors now offer AI-driven solutions specifically for managing hybrid and multi-cloud environments, a well-established use case in the realm of AI in cloud computing known as AIOPs. These generative AI use cases include:

  1. Workload management strategy through AI
  2. Financial operations (FinOps) management strategy through deep analytics
  3. Formulating policies for data management in the cloud
  4. Defining security postures and addressing workload-specific security requirements
  5. Identifying and training staff to develop a knowledgeable workforce

In summary, AI and generative AI hold great potential to assist federal agencies that need to reduce technical debt and accelerate cloud modernization initiatives at mission-critical speeds. AI and generative AI capabilities span every phase, from assessment to operations. While this market is still emerging and evolving, many independent software vendors (ISVs) and service providers are already offering solutions in this domain.

For clients looking to modernize on Amazon Web Services, IBM Consulting plans to integrate generative AI services into its proprietary IBM Consulting Cloud Accelerator to help accelerate the cloud transformation process.

Sandeep Shilawat is vice president of cloud technology at Octo, an IBM company.

For more information, contact Sandeep.Shilawat@octo.us or visit IBM Booth #930 at the AWS re:Invent Conference in Las Vegas, Nevada, beginning November 27, 2023.

Congressional AI Caucus leader Ted Lieu says most AI should not be regulated
https://fedscoop.com/congressional-ai-caucus-leader-ted-lieu-says-most-ai-should-not-be-regulated/
Thu, 21 Sep 2023 22:34:51 +0000

Lieu also argued that more AI-trained regulators are needed for a few high-risk industries.

Rep. Ted Lieu, D-Calif., said Thursday that most artificial intelligence will likely not be regulated by the government, but that regulation will be needed in cases where the technology could harm or even kill people.

“My analogy from the perspective of a lawmaker is that most of AI we’re not going to regulate,” Lieu said during a Washington Post Live event Thursday on the Future of Work, pointing out that most AI technology like smart toasters for bagels or other seemingly innocuous uses of AI will not need to be regulated.

However, Lieu — a key member of the Congressional AI Caucus and one of three members of Congress with a computer science degree — said AI tools that could hurt or kill individuals, like those baked into planes, trains, cars, and other sectors where human life is at risk, will need regulation. And because of that, the federal government will need more AI-trained regulators who are more attuned to the unique aspects of AI in those fields.

“So, think of two bodies of water: a large ocean of AI, and then this small lake of AI. So this large ocean is all the AI we don’t care about… The small lake of AI is AI we might want to think about. And to me, there’s three buckets [in that small lake]. The first is … AI that can destroy the world. Second is AI that isn’t going to destroy the world but can kill you individually…. And that last bucket, which is really the hardest, is AI that has some sort of harm to society,” said Lieu. 

He added that the most difficult uses of AI to control or regulate are AI that can subjectively harm parts of society through unfair monetization, AI algorithms that discriminate or have bias, or AI-driven facial recognition.

As a solution, Lieu pointed to bipartisan legislation he introduced in June, which proposed Congress create an AI blue-ribbon bipartisan commission to make policy and legal recommendations to Congress on how best to regulate AI.

Earlier this year, Lieu introduced the first measure in Congress that was written entirely by the generative AI tool ChatGPT: a nonbinding resolution on how to comprehensively regulate AI in Congress. Lieu’s office is also one of the first not to set restrictions on the use of ChatGPT for internal functions, the California congressman said. 

FedScoop first reported in April that the House of Representatives’ digital service had obtained 40 licenses of ChatGPT Plus, the first publicized congressional use of the popular AI tool. House offices said they were using ChatGPT for generating constituent response drafts and press documents, summarizing large amounts of text in speeches, and drafting policy papers or, in some cases, bill language.

Lieu has highlighted in the past that federal agencies need to be given the power and resources to better tackle the risks and concerns associated with AI, which he hopes his proposed blue-ribbon commission could help with.

“So I think we need to get more regulators in our federal agencies who are more cognizant and attuned to the unique risks and aspects of AI,” Lieu said in June. 

DHS names Eric Hysen chief AI officer, announces new policies for AI acquisition and facial recognition
https://fedscoop.com/dhs-names-eric-hysen-chief-ai-officer-announces-new-policies-for-ai-acquisition-and-facial-recognition/
Fri, 15 Sep 2023 18:35:20 +0000

The new policies focus on responsible acquisition and use of AI and machine learning, and governance of facial recognition applications.

The Department of Homeland Security on Thursday released new policies regarding the acquisition and use of artificial intelligence and named its first chief AI officer to help champion the department’s responsible adoption of AI.

In a release, DHS Secretary Alejandro Mayorkas announced the directives — one to guide the acquisition and use of AI and machine learning, and another to govern facial recognition applications — and named department CIO Eric Hysen to the new role.

The new policies were developed by DHS’s Artificial Intelligence Task Force (AITF), which was created in April 2023.

The news comes after the Government Accountability Office released a report earlier this month outlining DHS's lack of policies and training for law enforcement personnel on facial recognition technology.

"Artificial intelligence is a powerful tool we must harness effectively and responsibly," Mayorkas said in a statement. "Our Department must continue to keep pace with this rapidly evolving technology, and do so in a way that is transparent and respectful of the privacy, civil rights, and civil liberties of everyone we serve."

The release explains that DHS already uses AI in several ways, “including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure. These new policies establish key principles for the responsible use of AI and specify how DHS will ensure that its use of face recognition and face capture technologies is subject to extensive testing and oversight.”

As DHS's chief AI officer, Hysen will work to promote innovation and safety in the department's uses of AI and advise Mayorkas and other DHS leadership.

 “Artificial intelligence provides the department with new ways to carry out our mission to secure the homeland,” Hysen said in a statement. “The policies we are announcing today will ensure that the Department’s use of AI is free from discrimination and in full compliance with the law, ensuring that we retain the public’s trust.”

During the past two years of the Biden administration, multiple prominent civil rights groups have harshly criticized DHS's approach to facial recognition, particularly its contracts with the controversial tech company Clearview AI, which continues to work with the agency.

“DHS claims this technology is for our public safety, but we know the use of AI technology by DHS, including ICE, increases the tools at their disposal to surveil and criminalize immigrants at a new level,” Paromita Shah, executive director of Just Futures Law, a legal nonprofit focused on immigrants and criminal justice issues, said in a statement on the new policies. 

“We remain skeptical that DHS will be able to follow basic civil rights standards and transparency measures, given their troubling record with existing technologies. The infiltration of AI into the law enforcement sector will ultimately impact immigrant communities,” Shah added. 

Sen. Schumer's first AI insight forum focuses on 2024 election, federal regulators
https://fedscoop.com/sen-schumers-first-ai-insight-forum-focuses-on-2024-election-federal-regulators/
Fri, 15 Sep 2023 12:24:28 +0000

More than 65 senators and top tech CEOs debated openness and transparency for AI systems at the first meeting, among other key issues.

Two-thirds of the Senate along with top tech CEOs and labor and civil rights leaders gathered Wednesday on Capitol Hill to discuss the major AI issues affecting the world and to start sharing preliminary ideas on how the federal government could help solve them.

Senate Majority Leader Chuck Schumer’s first closed-door AI insight forum focused on issues including national security, privacy, high-risk applications, bias, and the implications of AI for the workforce, gathering those bullish on AI as well as skeptics and critics of the technology.

“The things we discussed were open AI, and the pros and cons of that, then health care — the amazing potential that AI could have in health care,” Schumer told reporters after the first of his nine planned AI insight forums.

“We talked about election law and the need to do something fairly immediate, before the election. We talked about the displacement of workers, both the training of workers into the new AI jobs but also what we do about displaced workers who might lose their jobs or have diminished jobs,” Schumer added. “We talked about who the regulators should be – lots of different decisions and questions about that. We talked about the need for immigration. We talked about transparency.” 

The AI insight forum included tech industry leaders like Google CEO Sundar Pichai; Tesla, X and SpaceX CEO Elon Musk; NVIDIA President Jensen Huang; Meta founder and CEO Mark Zuckerberg; technologist and Google alum Eric Schmidt; OpenAI CEO Sam Altman; and Microsoft CEO Satya Nadella, along with representatives from labor and civil rights advocacy groups.

Schumer said that tackling fake or deceptive AI-generated content, which can lead to widespread misinformation and disinformation, was the most time-sensitive problem to solve because of the upcoming 2024 presidential election.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” said Schumer. 

“We talked about watermarking … that one has a quicker timetable maybe than some of the others and it’s very important to do,” Schumer added.

The top Democrat in the Senate said there was much discussion during the meeting about the creation of a new AI agency and that there was also debate about how to use some of the existing federal agencies to regulate AI.

South Dakota Sen. Mike Rounds, Schumer’s Republican counterpart in leading the bipartisan AI forums, said: “We’ve got to have the ability to provide good information to regulators. And it doesn’t mean that every single agency has to have all of the top-end, high-quality of professionals but we need that group of professionals who can be shared across the different agencies when it comes to AI.”

Although there were no significant voluntary commitments made during the first AI insight forum, tech leaders who participated in the forum said there was much debate around how open and transparent AI developers and those using AI in the federal government will be required to be.

“I think the main debates during the forum were around openness and transparency for AI systems based on where it is and where it will go in the future,” Clément Delangue, CEO of Hugging Face, an AI startup focused on open-source, for-profit machine learning platforms, told FedScoop after the forum.

“We emphasize the importance of openness and transparency because we believe open systems are the way to distribute power and distribute value. We think it’s important for the U.S. to create tens of millions of jobs in AI,” Delangue added. “To do that, you need open systems, because like companies, especially small companies, they can’t start from scratch. They need to work based on the science and the models and the datasets that are available for them. Open systems also kind of like create more inclusiveness for everyone to be at the table, participate.”

Rounds said the forums were a way for the U.S. to urgently take leadership on AI regulations and policy-making alongside its existing dominance in development of AI products and tools. 

“We need to be the leaders in the international community. And we have the opportunity, we’re there now. We don’t want to lose that,” Rounds told reporters. “And that means that we become the place where we create but we also share in many cases with the rest of the world, that maintains our leadership that came across very strong today as well.”

Some participants told FedScoop that there was much more agreement than disagreement in the room regarding AI challenges and policymaking.

“I think this was a framing conversation regarding the AI problems, because the following sessions we’ll get into, I think, more detail and try to work out proposals,” Eric Fanning, president and CEO of Aerospace Industries Association, told FedScoop during an interview after the forum.

“This was a chance to sort of see where people are more aligned, or maybe less aligned. But I think, on the big issues, there’ll be a lot of alignment,” said Fanning. “It just was illuminating the different ways, the different perspectives that were brought to the table and the debates. There’s a lot of work to be done. It’s not going to be an easy thing. Because there’s lots of different philosophies on open versus closed, for example.”
