AI Risk Management Framework Archives | FedScoop
https://fedscoop.com/tag/ai-risk-management-framework/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

Scientists must be empowered — not replaced — by AI, report to White House argues
https://fedscoop.com/pcast-white-house-science-advisors-ai-report-recommendations/
Tue, 23 Apr 2024 21:15:59 +0000

The upcoming report from the President's Council of Advisors on Science and Technology pushes for the "empowerment of human scientists," responsible AI use and shared resources.

The team of technologists and academics charged with advising President Joe Biden on science and technology is set to deliver a report to the White House next week that emphasizes the critical role that human scientists must play in the development of artificial intelligence tools and systems.

The President’s Council of Advisors on Science and Technology voted unanimously in favor of the report Tuesday following a nearly hourlong public discussion of its contents and recommendations. The delivery of PCAST’s report will fulfill a requirement in Biden’s executive order on AI, which called for an exploration of the technology’s potential role in “research aimed at tackling major societal and global challenges.”

“Empowerment of human scientists” was the first goal presented by PCAST members, with a particular focus on how AI assistants should play a complementary role to human scientists, rather than replacing them altogether. The ability of AI tools to process “huge streams of data” should free up scientists “to focus on high-level directions,” the report argued, with a network of AI assistants deployed to take on “large, interdisciplinary, and/or decentralized projects.”

AI collaborations on basic and applied research should be supported across federal agencies, national laboratories, industry and academia, the report recommends. Laura H. Greene, a Florida State University physics professor and chief scientist at the National High Magnetic Field Laboratory, cited the National Science Foundation’s Materials Innovation Platforms as an example of AI-centered “data-sharing infrastructures” and “community building” that PCAST members envision. 

“We can see future projects that will include collaborators to develop next-generation quantum computing qubits, wholesale modeling, whole Earth foundation models” and an overall “handle on high-quality broad ranges of scientific databases across many disciplines,” Greene said.

The group also recommended that “innovative approaches” be explored on how AI assistance can be integrated into scientific workflows. Funding agencies should keep AI in mind when designing and organizing scientific projects, the report said.

The second set of recommendations from PCAST centered on the responsible and transparent use of AI, with those principles employed in all stages of the scientific research process. Funding agencies “should require responsible AI use plans from researchers that would assess potential AI-related risks,” the report states, matching the principles called out in the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

Eric Horvitz, chief scientific officer at Microsoft, said PCAST’s emphasis on responsible AI use means putting forward “our best efforts to making sure these tools are used in the best ways possible and keeping an eye on possible downsides, whether the models are open source or not open source models. … We’re very optimistic about the wondrous, good things we can expect, but we have to sort of make sure we keep an eye on the rough edges.”

The potential for identifying those “rough edges” rests at least partially in the group’s third recommendation of having shared and open resources. PCAST makes its case in the report for an expansion of existing efforts to “broadly and equitably share basic AI resources.” There should be more secure access granted to federal datasets to aid critical research needs, the report noted, with the requisite protections and guardrails in place.

PCAST members included a specific callout for an expansion of NSF’s National Secure Data Service Demonstration project and the Census Bureau’s Federal Statistical Research Data Centers. The National Artificial Intelligence Research Resource should also be “fully funded,” given its potential as a “stepping-stone for even more ambitious ‘moonshot’ programs,” the report said.

AI-related work from the scientists who make up PCAST won’t stop after the report is edited and posted online next week. Bill Press, a computer science and integrative biology professor at the University of Texas at Austin, said it’s especially important now in this early developmental stage for scientists to test AI systems and learn to use them responsibly. 

“We’re dealing with tools that, at least right now, are ethically neutral,” Press said. “They’re not necessarily biased in the wrong direction. And so you can ask them to check these things. And unlike human people who write code, these tools don’t have pride of ownership. They’re just as happy to try to reveal biases that might have incurred as they are to create them. And that’s where the scientists are going to have to learn to use them properly.”

NIST seeks participants for new artificial intelligence consortium
https://fedscoop.com/nist-seeks-ai-consortium-participants/
Thu, 02 Nov 2023 17:51:06 +0000

The National Institute of Standards and Technology is looking to collaborate with nonprofits, academia, tech companies and other government entities to help promote the responsible use of AI.

The Department of Commerce’s National Institute of Standards and Technology is looking for collaborators to be part of a newly announced AI Safety Institute Consortium following the release of the Biden administration’s executive order on the technology.

In a post to the Federal Register and a corresponding press release Thursday, NIST invited interested organizations to write letters describing their expertise in developing or deploying trustworthy AI, and/or creating models or products that support trustworthy AI.

The agency called the consortium a “core element of the new NIST-led U.S. AI Safety Institute,” which was announced Wednesday at the U.K. AI Safety Summit 2023, and said the group would be essential to its efforts to work with stakeholders to carry out its new responsibilities under the administration’s AI executive order (EO 14110). 

The order, among other things, requires that NIST develop a companion resource to its AI Risk Management Framework that’s focused on generative AI, create guidance on differentiating between human and AI-generated content, and establish benchmarks for AI evaluation and auditing. 

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” NIST Director and Under Secretary of Commerce for Standards and Technology Laurie E. Locascio said in a release.

The consortium, NIST said in a frequently asked questions page, will help establish “a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.”

NIST said the consortium’s activities will begin after enough organizations have completed and signed letters of interest that meet all the requirements, but no earlier than Dec. 4. It will also hold a workshop on Nov. 17 for organizations interested in participating.

Experts warn of ‘contradictions’ in Biden administration’s top AI policy documents
https://fedscoop.com/experts-warn-of-contradictions-in-biden-administrations-top-ai-policy-documents/
Wed, 23 Aug 2023 22:51:12 +0000

AI policy specialists say a lack of guidance from the White House on how to square divergent rights-based and risk-based approaches to AI is proving a challenge for companies working to create new products and safeguards.

The Biden administration’s cornerstone artificial intelligence policy documents, released in the past year, are inherently contradictory and provide confusing guidance for tech companies working to develop innovative products and the necessary safeguards around them, leading AI experts have warned.

Speaking with FedScoop, five AI policy experts said adhering to both the White House’s Blueprint for an AI ‘Bill of Rights’ and the AI Risk Management Framework (RMF), published by the National Institute of Standards and Technology, presents an obstacle for companies working to develop responsible AI products.

However, the White House and civil rights groups have pushed back on claims that the two voluntary AI safety frameworks send conflicting messages and have highlighted that they are a productive “starting point” in the absence of congressional action on AI. 

The two policy documents form the foundation of the Biden administration’s approach to regulating artificial intelligence. But for many months, there has been an active debate among AI experts regarding how helpful — or in some cases hindering — the Biden administration’s dual approach to AI policymaking has been.

The White House’s Blueprint for an AI ‘Bill of Rights’ was published last October. It takes a rights-based approach to AI, using broad fundamental human rights as the starting point for regulating the technology. That was followed in January by the risk-based AI RMF, which sets out to determine the scale and scope of risks tied to concrete use cases and recognized threats, with the aim of instilling trustworthiness in the technology.

Speaking with FedScoop, Daniel Castro, a technology policy scholar and vice president at the Information Technology and Innovation Foundation (ITIF), noted that there are “big, major philosophical differences in the approach taken by the two Biden AI policy documents,” which are creating “different [and] at times adverse” outcomes for the industry.

“A lot of companies that want to move forward with AI guidelines and frameworks want to be doing the right thing but they really need more clarity. They will not invest in AI safety if it’s confusing or going to be a wasted effort or if instead of the NIST AI framework they’re pushed towards the AI blueprint,” Castro said.

Castro’s thoughts were echoed by Adam Thierer of the libertarian nonprofit R Street Institute who said that despite a sincere attempt to emphasize democratic values within AI tools, there are “serious issues” with the Biden administration’s handling of AI policy driven by tensions between the two key AI frameworks.

“The Biden administration is trying to see how far it can get away with using their bully pulpit and jawboning tactics to get companies and agencies to follow their AI policies, particularly with the blueprint,” Thierer, senior fellow on the Technology and Innovation team at R Street, told FedScoop.

Two industry sources who spoke with FedScoop but wished to remain anonymous said they felt pushed toward the White House’s AI blueprint over the NIST AI framework in certain instances during meetings on AI policymaking with the White House’s Office of Science and Technology Policy (OSTP).

Rep. Frank Lucas, R-Okla., chair of the House Science, Space and Technology Committee, and House Oversight Chairman Rep. James Comer, R-Ky., have been highly critical of the White House blueprint as it compares to the NIST AI Risk Management Framework, expressing concern earlier this year that the blueprint sends “conflicting messages about U.S. federal AI policy.”

In a letter obtained exclusively by FedScoop, Arati Prabhakar, director of the White House Office of Science and Technology Policy, responded to those concerns, arguing that “these documents are not contradictory” and highlighting how closely the White House and NIST are working together on future regulation of the technology.

At the same time, some industry AI experts say the way in which the two documents define AI clash with one another.

Nicole Foster, who leads global AI and machine learning policy at Amazon Web Services, said chief among the concerns with the documents are diverging definitions of the technology itself. She told FedScoop earlier this year that “there are some inconsistencies between the two documents for sure. I think just at a basic level they don’t even define things like AI in the same way.”

Foster’s thoughts were echoed by Raj Iyer, global head of public sector at cloud software provider ServiceNow and former CIO of the U.S. Army, who believes the two frameworks are a good starting point to get industry engaged in AI policymaking but that they lack clarity.

“I feel like the two frameworks are complementary. But there’s clearly some ambiguity and vagueness in terms of definition,” said Iyer.

“So what does the White House mean by automated systems? Is it autonomous systems? Is it automated decision-making? What is it? I think it’s very clear that they did that to kind of steer away from wanting to have a direct conversation on AI,” Iyer added.

Hodan Omaar, an AI and quantum research scholar working with Castro at ITIF, said the two documents appear to members of the tech industry as if they are on different tracks. According to Omaar, the divergence creates a risk that organizations will simply defer to either the “Bill of Rights” or the NIST RMF and ignore the other.

“There are two things the White House should be doing. First, it should better elucidate the ways the Blueprint should be used in conjunction with the RMF. And second, it should better engage with stakeholders to gather input on how the Blueprint can be improved and better implemented by organizations,” Omaar told FedScoop.

In addition to compatibility concerns about the two documents, experts have also raised concerns about the process followed by the White House to take industry feedback in creating the documents.

Speaking with FedScoop anonymously in order to speak freely, one industry association AI official said that listening sessions held by the Office of Science and Technology Policy were not productive.

“The Bill of Rights and the development of that, we have quite a bit of concern because businesses were not properly consulted throughout that process,” the association official said. 

The official added: “OSTP’s listening sessions were just not productive or helpful. We tried to actually provide input in ways in which businesses could help them through this process. Sadly, that’s just not what they wanted.”

The AI experts’ comments come as the Biden administration works to establish a regulatory framework that mitigates potential threats posed by the technology while supporting American AI innovation. Last month, the White House secured voluntary commitments from seven leading AI companies about how AI is used, and it is expected to issue a new executive order on AI safety in the coming weeks.

One of the contributors to the White House’s AI Blueprint sympathizes with concerns from industry leaders and AI experts regarding the confusion and complexity of the administration’s approach to AI policymaking. But it’s also an opportunity for companies seeking voluntary AI policymaking guidance to put more effort into asking themselves hard questions, he said.

“So I understand the concerns very much. And I feel the frustration. And I understand people just want clarity. But clarity will only come once you understand the implications, the broader values, discussion and the issues in the context of your own AI creations,” said Suresh Venkatasubramanian, a Brown University professor and former top official within the White House’s OSTP, where he helped co-author its Blueprint for an ‘AI Bill of Rights.’ 

“The goal is not to say: Do every single thing in these frameworks. It’s like, understand the issues, understand the values at play here. Understand the questions you need to be asking from the RMF and the Blueprint, and then make your own decisions,” said Venkatasubramanian.

On top of that, the White House Blueprint co-author wants those who criticize the documents’ perceived contradictions to be more specific in their complaints.

“Tell me a question in the NIST RMF that contradicts a broader goal in the White House blueprint — find one for me, or two or three. I’m not saying this because I think they don’t exist. I’m saying this because if you could come up with these examples, then we could think through what can we do about it?” he said.

Venkatasubramanian added that he feels the White House AI blueprint in particular has faced resistance from industry because “for the first time someone in a position of power came out and said: What about the people?” when it comes to tech innovation and regulations. 

Civil rights groups like the Electronic Privacy Information Center have also joined the greater discussion about AI regulations, pushing back on the notion that industry groups should play any significant role in the policymaking of a rights-based document created by the White House.

“I’m sorry that industry is upset that a policy document is not reflective of their incentives, which is just to make money and take people’s data and make whatever decisions they want to make more contracts. It’s a policy document, they don’t get to write it,” said Ben Winters, the senior counsel at EPIC, where he leads their work on AI and human rights.

Groups like EPIC and a number of others have called upon the Biden administration to take more aggressive steps to protect the public from the potential harms of AI.

“I actually don’t think that the Biden administration has taken a super aggressive role when trying to implement these two frameworks and policies that the administration has set forth. When it comes to using the frameworks for any use of AI within the government or federal contractors or recipients of federal funds, they’re not doing enough in terms of using their bully pulpit and applying pressure. I really don’t think they’re doing too much yet,” said Winters.

Meanwhile, the White House has maintained that the two AI documents were created for different purposes but designed to be used side-by-side as initial voluntary guidance, noting that both OSTP and NIST were involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the administration has moved with urgency to advance responsible innovation that manage the risks posed by AI and seize its promise — including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward and build on the administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

NIST did not respond to requests for comment.

White House science adviser defends ‘conflicting’ AI frameworks released by Biden admin
https://fedscoop.com/arati-prabhakar-ai-bill-of-rights-rmf-conflicting-definitions/
Wed, 02 Aug 2023 20:30:00 +0000

Arati Prabhakar said the White House AI Blueprint and the NIST AI framework "are not contradictory," in response to queries from House lawmakers.

The Biden administration’s AI ‘Bill of Rights’ Blueprint and the NIST AI Risk Management Framework do not send conflicting messages to federal agencies and private sector companies attempting to implement the two AI safety frameworks within their internal systems, according to the director of the White House Office of Science and Technology Policy.

In a letter obtained exclusively by FedScoop, Arati Prabhakar responded to concerns raised by senior House lawmakers on the House Science, Space and Technology Committee and the House Oversight Committee over apparent contradictions in definitions of AI used in the documents.

“These documents are not contradictory. For example, in terms of the definition of AI, the Blueprint does not adopt a definition of AI, but instead focuses on the broader set of ‘automated systems,’” Prabhakar wrote in a letter sent to House Science Chairman Frank Lucas, R-Okla., and Oversight Chairman James Comer, R-Ky., a few months ago.

“Furthermore, both the AI RMF and the Blueprint propose that meaningful access to an AI system for evaluation should incorporate measures to protect intellectual property law,” Prabhakar added.

In the letter, Prabhakar also described the “critical roles” both documents play in managing risks from AI and automated systems, and said they illustrate how closely the White House and NIST are working together on future regulation of the technology.

The two Republican leaders sent a letter in January to the OSTP director voicing concern that the White House’s AI ‘Bill of Rights’ blueprint document is sending “conflicting messages about U.S. federal AI policy.”

Chairman Lucas and Chairman Comer were highly critical of the White House blueprint as it compares with the NIST AI risk management framework.

Prabhakar in her letter also noted the close partnership between NIST and OSTP regarding AI policymaking and the high engagement both entities have had with relevant stakeholders within industry and civil society in crafting AI policy.

She also highlighted that the AI ‘Bill of Rights’ document recognizes the need to protect technology companies’ intellectual property. Although it calls for the use of confidentiality waivers for designers, developers and deployers of automated systems, it says that such waivers should incorporate “measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate.”

Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and the framework has also been popular with the tech industry.

This came after the Biden administration in October 2022 published its AI ‘Bill of Rights’ Blueprint, which consists of five key principles for regulating the technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

Chairman Lucas and Chairman Comer’s engagement with OSTP earlier this year regarding conflicting messages being sent by the Biden administration on AI policy followed concerns expressed by industry and academia about varying definitions within the two documents and how they relate to the definitions used by other federal government agencies.

While they are both non-binding, AI experts and lawmakers have warned about the chilling effect that lack of specificity within framework documents could have on innovation both inside government and across the private sector.

“We’re at a critical juncture with the development of AI and it’s crucial we get this right. We need to give companies useful tools so that AI is developed in a trustworthy fashion, and we need to make sure we’re empowering American businesses to stay at the cutting edge of this competitive industry,” Chairman Lucas said in a statement to FedScoop.

“That’s why our National AI Initiative called for a NIST Risk Management Framework. Any discrepancies between that guidance and other White House documents can create confusion for industry. We can’t afford that because it will reduce our ability to develop and deploy safe, trustworthy, and reliable AI technologies,” he added.

Meanwhile, the White House has repeatedly said the two AI documents were created for different purposes but designed to be used side-by-side and noted that both the executive branch and the Department of Commerce had been involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the Administration has moved with urgency to advance responsible innovation that manage the risks posed by AI and seize its promise—including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward, and build on the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The Administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

Editor’s note, 8/2/23: This story was updated to add further context about NIST’s AI Risk Management Framework and prior concerns raised by AI experts.

House Dems call on White House to make agencies adopt NIST AI framework
https://fedscoop.com/dems-push-biden-admin-to-mandate-nist-ai-framework/
Fri, 21 Jul 2023 13:24:19 +0000

Reps. Lofgren, Lieu, and Stevens say the Office of Management and Budget should require federal agencies to follow NIST's AI Risk Management Framework.

House Democrats on Thursday pushed the White House’s Office of Management and Budget to mandate federal agencies adopt the National Institute of Standards and Technology’s AI Risk Management Framework, which could significantly affect how the government designs and develops AI systems.

House Science, Space and Technology Committee Ranking Member Zoe Lofgren, D-Calif., along with Reps. Ted Lieu, D-Calif., and Haley Stevens, D-Mich., sent a letter to OMB urging that federal agencies and vendors be required to follow the currently voluntary NIST AI guidance to analyze and mitigate the risks associated with the technology.

“We ask that you also consider utilizing the NIST AI RMF and subsequent risk management guidance specifically tailored for the federal government, to ensure agencies and vendors meet baseline standards in mitigating risk,” the three Democratic members said in their letter to OMB.

The Democrats said that the federal government must take a coordinated approach to ensure cutting-edge technologies like AI are used responsibly and that the NIST AI framework served as a “great starting point for agencies and vendors to analyze the risks associated with AI and how their systems can be designed and developed with these risks in mind.”

The Biden administration in recent months has worked to hold organizations accountable for addressing bias that may be embedded within AI systems while also promoting innovation. In October, it published an AI ‘Bill of Rights’ blueprint document, which was followed by NIST’s voluntary risk management framework in January.

The NIST AI framework document sets out four functions it describes as key to building responsible AI systems: govern, map, measure and manage.

The document is a “rules of the road” that senior technical advisers at NIST hope will provide a starting point for government departments and private sector companies big and small in deciding how to regulate their use of the technology. Organizations can currently adopt the framework on a voluntary basis.

Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and has so far received a warm reception from industry.

Republicans also support federal agencies adopting the NIST AI framework when creating and designing AI systems going forward.

A House Republican Science, Space and Technology Committee aide told FedScoop that committee Chairman Frank Lucas first raised the issue of federal agency adoption of the NIST AI framework in May, and that Republicans are now drafting legislation on the issue.

Google and IBM push for increased govt resources to support AI innovation and transparency
https://fedscoop.com/google-and-ibm-respond-to-biden-administration-rfp/
Fri, 07 Jul 2023 21:38:58 +0000

In comments submitted in response to a request for information from the White House, the tech giants expressed opposition to the idea of creating a new single AI "super regulator".

Technology giants Google and IBM are pushing for the federal government to take a more active role in promoting AI innovation and transparency, and strongly oppose the creation of a new single AI “super regulator,” according to comments submitted to the White House on Friday and in recent weeks.

The tech behemoths reiterated their support for flexible, risk-based AI regulatory frameworks like the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, rather than more horizontal, rigid, top-down regulatory approaches like the proposed EU AI Act, which is currently being debated.

Google and IBM were responding to a public consultation launched in May by the Biden administration to gather evidence from industry and researchers on the major threats and opportunities presented by AI. It is one of several recent inquiries launched to examine the technology, including a request for information from the National Telecommunications and Information Administration in April.

“IBM urges the Administration to adopt a ‘precision regulation’ posture towards AI. This means establishing rules to govern the technology’s deployment in specific use-cases, not regulating the technology itself,” the company said in its comments submitted to the White House Office of Science and Technology Policy regarding national priorities for AI.

“IBM supports leveraging existing authorities to regulate AI. As such, we recommend that the Administration support an approach to regulating AI that prioritizes empowering every agency to be an AI agency,” the company said.

IBM, in its comment to OSTP, added that the White House should push for greater resources for, and the expansion of, the GSA’s AI Center of Excellence, the National AI Research Resource (NAIRR), and agencies with high compute needs like the Commerce Department and the Energy Department.

Google, in its comment to OSTP, reiterated the importance of NIST taking the lead on trustworthy AI policies, standards and best practices in the U.S. The company also highlighted the need to reform government acquisition policies to require AI training for the acquisition workforce, remove data governance barriers that stand in the way of harnessing AI, and push federal agencies to use AI systems to enhance operations and decision-making.

The search giant also pushed for the White House to establish an AI competitiveness council, in the form of a National AI Security & Competitiveness Council, or to reactivate the National Security Commission on AI (NSCAI). Such a body would assess research and development (R&D) gaps and AI deployment to ensure the U.S. government is equipped to address security and defense challenges from foreign rivals, and would advocate for aligned international governance.

The Information Technology Industry Council (ITI), a global IT trade association, also submitted a comment to OSTP calling for NIST to be at the forefront of AI regulatory technical standards.


]]>
https://fedscoop.com/google-and-ibm-respond-to-biden-administration-rfp/feed/ 0 70265
Reps. Buck and Lieu: AI regulation must reduce risk without sacrificing innovation https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/ https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/#respond Wed, 05 Jul 2023 15:06:48 +0000 https://fedscoop.com/?p=70059 In interviews with FedScoop, the congressional AI leaders share their unique and at times contrasting visions for regulation of the technology.

The post Reps. Buck and Lieu: AI regulation must reduce risk without sacrificing innovation appeared first on FedScoop.

]]>
Two leading congressional AI proponents, Rep. Ted Lieu, a California Democrat, and Rep. Ken Buck, a Colorado Republican, are working to boost the federal government’s ability to foster AI innovation through increased funding and competition while also reducing major risks associated with the technology.

Last week each lawmaker shared with FedScoop their own unique vision for how Congress and the federal government should approach AI in the coming months, with Lieu criticizing parts of the European Union’s proposed AI Act while Buck took a shot at the White House’s AI Bill of Rights blueprint.

Buck and Lieu recently worked together to introduce a bill which would create a blue-ribbon commission on AI to develop a comprehensive framework for the regulation of the emerging technology and earlier this year introduced a bipartisan bill to prevent AI from making nuclear launch decisions.

The bicameral National AI Commission Act would create a 20-member commission to explore AI regulation, including how regulation responsibility is distributed across agencies, the capacity of agencies to address challenges relating to regulation, and alignment among agencies in their enforcement actions. 

The AI Commission bill is one of several potential solutions for regulating the technology proposed by lawmakers, including Senate Majority Leader Chuck Schumer, who recently introduced a plan to develop comprehensive legislation in Congress to regulate and advance the development of AI in the U.S.

Buck said he would like to see “experts studying AI from trusted groups like the Bull Moose project and other think tanks, including American Compass,” to be a part of the AI commission. 

Buck and Lieu are both strongly focused on ensuring Congress and the federal government allow AI companies and their tools to keep innovating so the U.S. stays ahead of adversaries like China, while ensuring any harms caused by the technology are understood and mitigated.

With respect to increasing and supporting AI innovation in the U.S., Lieu said he is currently pushing within the congressional appropriations process for more funding for AI safety, research and innovation, which the federal government would disburse to qualified entities and institutions.

“I would like to see more funding from the government to research centers that create AI and to have different grants available for people who want to work on AI safety and AI risks and AI innovation,” said Lieu, who is a member of the House Artificial Intelligence Caucus and one of three members of Congress with a computer science degree.

Buck, on the other hand, highlighted that one of the keys to encouraging AI innovation is the government ensuring that “we don’t have a single controlling entity, that we have dispersed AI competition,” in order to “make sure that we don’t have a Google in the AI space. I don’t mean Google specifically but I mean, I want to make sure we have five or six major generative AI competitors in the space,” he said.

For the past two years, Buck was the top Republican on the powerful House antitrust subcommittee and has played a key role in forging a bipartisan agreement in Congress to rein in Big Tech companies such as Google, Amazon, Facebook and Apple for anticompetitive activities.

Buck also said he’s not in favor of the key regulatory approach advocated by Sam Altman, CEO of ChatGPT maker OpenAI, which calls for the creation of a new federal agency to license and regulate large AI models. Altman floated that proposal, along with other legislative ideas, during congressional testimony in May.

“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group. So I think dispersing that oversight within the government is important,” Buck told FedScoop during an interview in his congressional office on Capitol Hill.

“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group.”

Rep. Ken Buck, R-Colo.

Tech giant Google has also pushed the federal government to divide up oversight of AI tools across agencies rather than creating a single regulator focused on the technology, in contrast with rivals like Microsoft and OpenAI. 

Kent Walker, Google’s president of global affairs, told the Washington Post in June that he was in favor of a “hub-and-spoke model” of federal regulations that he argued is better suited to deal with how AI is affecting the U.S. economy than the “one-size-fits-all approach” of creating a single agency devoted to the issue.

When asked about which AI regulatory framework he supports, Buck said the main frameworks currently being debated in Washington including the National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework, the White House’s AI Bill of Rights Blueprint, and the EU’s proposed AI Act all have “salvageable items.”  

WASHINGTON, DC – JULY 28: Rep. Ken Buck (R-Colo.) questions U.S. Attorney General William Barr during a House Judiciary Committee hearing on Capitol Hill on July 28, 2020 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

However, Buck added that the White House’s AI Bill of Rights “has some woke items that won’t find support across partisan lines,” indicating Republicans will push back against parts of the blueprint document, which consists of five key principles for the regulation of AI technology: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

On the other hand, Lieu, a Democrat, is strongly in favor of the White House’s AI blueprint which is intended to address concerns that unfettered use of AI in certain scenarios may cause discrimination against minority groups and further systemic inequality.

“The biggest area of AI use with the government [of concern] would be AI that has some sort of societal harm, such as discrimination against certain groups. Facial recognition technology that is less accurate for people with darker skin, I think we have to put some guardrails on that,” Lieu told FedScoop during a phone interview last week.  

“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval,” Lieu said.

“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval.”

Rep. Ted Lieu, D-Calif.

Lieu added that the federal government should be focused on regulating or curtailing AI that could be used to hack or cyberattack institutions and companies and how to mitigate such dangerous activity. 

In a paper examining popular generative AI tool ChatGPT’s code-writing model known as Codex, which powers GitHub’s Copilot assistant, OpenAI researchers observed that the AI model “can produce vulnerable or misaligned code” and could be “misused to aid cybercrime.” The researchers added that while “future code generation models may be able to be trained to produce more secure code than the average developer,” getting there “is far from certain.”

Lieu also pointed to “AI that can be very good at spreading disinformation and microtargeting people with misinformation,” which he said needs to be addressed. He further highlighted that AI will cause “there to be disruption in the labor force. And we need to think about how we’re going to mitigate that kind of disruption.”

Alongside the White House’s AI blueprint, Lieu said he was strongly in favor of the voluntary NIST AI Risk Management Framework, which is focused on helping the private sector, and eventually federal agencies, build responsible AI systems centered on four key functions: govern, map, measure and manage.

However, Lieu took issue with parts of the EU’s AI Act, which is currently being debated and, unlike the White House AI blueprint and the NIST AI framework, would be legally binding on all entities.

“My understanding is that the EU AI Act has provisions in it that for example, would prevent or dissuade AI from analyzing human emotions. I think that’s just really stupid,” Lieu told FedScoop during the interview.  

“Because one of the ways humans communicate is through emotions. And I don’t understand why you would want to prevent AI from getting the full communications of the individual if the interviewer chooses to communicate that to the AI,” Lieu added.


]]>
https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/feed/ 0 70059
White House launches public consultation on critical AI issues https://fedscoop.com/white-house-launches-public-consultation-on-critical-ai-issues/ Tue, 23 May 2023 18:30:00 +0000 https://fedscoop.com/?p=68529 The Biden administration also issues an updated AI research and development roadmap to include guidelines for collaboration with international partners.

The post White House launches public consultation on critical AI issues appeared first on FedScoop.

]]>
The Biden administration has launched a public consultation to gather evidence from industry and researchers on the major threats and opportunities presented by artificial intelligence as it works to sharpen its policy approach to the technology.

In a request for information document, the White House Office of Science and Technology Policy said it is seeking answers to questions ranging from how possible uses of the technology may threaten national security to how AI may be used to improve U.S. productivity.

Details of the consultation were published Tuesday alongside fresh guidance documents including an updated AI research and development roadmap and a paper from the Department of Education examining the future impact of the technology on learning.

In the RFI document, the White House said: “The Biden-Harris Administration is undertaking a process to ensure a cohesive and comprehensive approach to AI-related risks and opportunities … [t]hrough this RFI, OSTP and its National AI Initiative Office seeks information about AI and associated actions related to AI that could inform the development of a National AI Strategy.”

The public consultation is one of several policy initiatives issued by the Biden administration in recent months to address the key threats and opportunities presented by artificial intelligence technology. Through this consultation, it is seeking to fast-track the evidence-gathering process to help address areas of key policy concern around AI governance, including how the federal government supports the innovative use of the technology while protecting citizens’ rights.

The updated AI research and development roadmap published by the White House adds a ninth pillar to its existing strategy of establishing a “principled and coordinated approach to international collaboration in AI research.”

Other pillars, which were included in previous iterations of the roadmap document, include ensuring that investments in fundamental and responsible AI research are made with a long-term investment horizon and that effective methods are developed for humans to work alongside AI systems.

Earlier this month, the White House announced the launch of seven new AI research institutes, which will be housed within the National Science Foundation and will work to facilitate AI advances that are “ethical, trustworthy, responsible and serve the public good,” as well as to drive breakthroughs in critical areas including climate, energy and cybersecurity.

This came after the National Institute of Standards and Technology in January launched its AI Risk Management Framework, which provides a voluntary, risk-based guide for developing responsible AI.

In October, the White House issued an AI ‘Bill of Rights’ framework document, which sets out a rights-based approach to regulation of the technology, centered around five key principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

Individuals and organizations have until July 7 to respond to the request for information.


]]>
68529
G7 nations agree on need for ‘risk-based’ approach to AI regulation https://fedscoop.com/g7-nations-agree-on-need-for-risk-based-approach-to-ai-regulation/ https://fedscoop.com/g7-nations-agree-on-need-for-risk-based-approach-to-ai-regulation/#respond Mon, 01 May 2023 19:02:05 +0000 https://fedscoop.com/?p=68010 The joint declaration sets out the need for a risk-based approach to regulating AI technology.

The post G7 nations agree on need for ‘risk-based’ approach to AI regulation appeared first on FedScoop.

]]>
Countries within the Group of Seven political forum have signed a declaration agreeing on the need for “risk-based” AI regulations.

Top technology officials from Britain, Canada, the EU, France, Germany, Italy, Japan and the United States on Sunday signed the joint statement, which seeks to establish parameters for how major countries govern the technology.

The statement said: “We reaffirm that AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data.”

It added: “We also reassert that AI policies and regulations should be risk-based and forward-looking to preserve an open and enabling environment for AI development and deployment that maximizes the benefit of the technology for people and the planet while mitigating its risks.”

The reference to a risk-based approach to regulating AI in the document follows the January publication of NIST’s AI Risk Management Framework, which sought to establish some “rules of the road” for the use of the technology by government and the private sector in the United States.

The G7 declaration also comes as the use of AI technology receives increased public attention following the launch of new mainstream tools such as OpenAI’s ChatGPT, which the federal government and Congress have started considering for internal use.

In the U.S., Commerce Secretary Gina Raimondo last week called NIST’s AI Risk Management Framework (AI RMF), first released in January, the “gold standard” for regulatory guidance on AI technology.

However, NIST’s AI framework and the G7 agreement contrast in some ways with the foundational rights-based framework laid out in the White House’s October 2022 Blueprint for an AI ‘Bill of Rights,’ which some AI experts have advocated as a model for AI regulations going forward.


]]>
https://fedscoop.com/g7-nations-agree-on-need-for-risk-based-approach-to-ai-regulation/feed/ 0 68010
Commerce’s NTIA launches trustworthy AI inquiry  https://fedscoop.com/commerces-ntia-launches-trustworthy-ai-inquiry/ Tue, 11 Apr 2023 17:08:29 +0000 https://fedscoop.com/?p=67548 The National Telecommunications and Information Administration has issued a request for comment on how government agencies should audit AI technology.

The post Commerce’s NTIA launches trustworthy AI inquiry  appeared first on FedScoop.

]]>
The National Telecommunications and Information Administration has launched an inquiry that will examine how companies and regulators can ensure artificial intelligence tools are trustworthy and work without causing harm.

Assistant Secretary of Commerce Alan Davidson announced the new initiative at an event at the University of Pittsburgh’s Institute of Cyber Law, Policy and Security.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” Davidson said.

He added: “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

As part of the exercise, which is focused on determining how the federal government can effectively regulate the evolving technology, NTIA is seeking evidence on what policies can support the development of AI audits, assessments, certifications and other mechanisms to “create earned trust in AI systems.”

The Department of Commerce agency has issued a request for comment to seek feedback from a range of parties across industry and academia.

According to NTIA, insights collected through the request for comment will inform the Biden administration’s work to establish a joined-up regulatory framework for the technology.

Respondents have 60 days from the publication of the request for comment in the Federal Register to submit comments, following the instructions included in the notice.

The launch of NTIA’s inquiry follows the publication of a voluntary AI Risk Management Framework, which was issued in January by the National Institute of Standards and Technology.

That initial guidance document set out the four functions that NIST says are key to building responsible AI systems: govern, map, measure and manage.

NIST’s AI framework document followed the Biden administration’s AI ‘Bill of Rights,’ which was published in October and sought to address the potential discriminatory effects of certain AI technology.

That blueprint document contained five key principles for the regulation of the technology: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.


]]>
67548