AI bill of rights Archives | FedScoop
https://fedscoop.com/tag/ai-bill-of-rights/

Scientists must be empowered — not replaced — by AI, report to White House argues
https://fedscoop.com/pcast-white-house-science-advisors-ai-report-recommendations/ | Tue, 23 Apr 2024

The upcoming report from the President's Council of Advisors on Science and Technology pushes for the "empowerment of human scientists," responsible AI use and shared resources.

The team of technologists and academics charged with advising President Joe Biden on science and technology is set to deliver a report to the White House next week that emphasizes the critical role that human scientists must play in the development of artificial intelligence tools and systems.

The President’s Council of Advisors on Science and Technology voted unanimously in favor of the report Tuesday following a nearly hourlong public discussion of its contents and recommendations. The delivery of PCAST’s report will fulfill a requirement in Biden’s executive order on AI, which called for an exploration of the technology’s potential role in “research aimed at tackling major societal and global challenges.”

“Empowerment of human scientists” was the first goal presented by PCAST members, with a particular focus on how AI assistants should play a complementary role to human scientists, rather than replacing them altogether. The ability of AI tools to process “huge streams of data” should free up scientists “to focus on high-level directions,” the report argued, with a network of AI assistants deployed to take on “large, interdisciplinary, and/or decentralized projects.”

AI collaborations on basic and applied research should be supported across federal agencies, national laboratories, industry and academia, the report recommends. Laura H. Greene, a Florida State University physics professor and chief scientist at the National High Magnetic Field Laboratory, cited the National Science Foundation’s Materials Innovation Platforms as an example of AI-centered “data-sharing infrastructures” and “community building” that PCAST members envision. 

“We can see future projects that will include collaborators to develop next-generation quantum computing qubits, wholesale modeling, whole Earth foundation models” and an overall “handle on high-quality broad ranges of scientific databases across many disciplines,” Greene said.

The group also recommended that “innovative approaches” be explored on how AI assistance can be integrated into scientific workflows. Funding agencies should keep AI in mind when designing and organizing scientific projects, the report said.

The second set of recommendations from PCAST centered on the responsible and transparent use of AI, with those principles employed in all stages of the scientific research process. Funding agencies “should require responsible AI use plans from researchers that would assess potential AI-related risks,” the report states, matching the principles called out in the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

Eric Horvitz, chief scientific officer at Microsoft, said PCAST’s emphasis on responsible AI use means putting forward “our best efforts to making sure these tools are used in the best ways possible and keeping an eye on possible downsides, whether the models are open source or not open source models. … We’re very optimistic about the wondrous, good things we can expect, but we have to sort of make sure we keep an eye on the rough edges.”

The potential for identifying those “rough edges” rests at least partially in the group’s third recommendation of having shared and open resources. PCAST makes its case in the report for an expansion of existing efforts to “broadly and equitably share basic AI resources.” There should be more secure access granted to federal datasets to aid critical research needs, the report noted, with the requisite protections and guardrails in place.

PCAST members included a specific callout for an expansion of NSF’s National Secure Data Service Demonstration project and the Census Bureau’s Federal Statistical Research Data Centers. The National Artificial Intelligence Research Resource should also be “fully funded,” given its potential as a “stepping-stone for even more ambitious ‘moonshot’ programs,” the report said.

AI-related work from the scientists who make up PCAST won’t stop after the report is edited and posted online next week. Bill Press, a computer science and integrative biology professor at the University of Texas at Austin, said it’s especially important now in this early developmental stage for scientists to test AI systems and learn to use them responsibly. 

“We’re dealing with tools that, at least right now, are ethically neutral,” Press said. “They’re not necessarily biased in the wrong direction. And so you can ask them to check these things. And unlike human people who write code, these tools don’t have pride of ownership. They’re just as happy to try to reveal biases that might have incurred as they are to create them. And that’s where the scientists are going to have to learn to use them properly.”

Congressional Democrats push Biden to codify AI Bill of Rights in executive order
https://fedscoop.com/ai-bill-of-rights-biden-executive-order/ | Wed, 11 Oct 2023

An upcoming executive order on artificial intelligence should incorporate the AI Bill of Rights published last October, a group of Democratic lawmakers said in a letter to the president.

A group of Democratic lawmakers led by Massachusetts Sen. Ed Markey and Rep. Pramila Jayapal of Washington urged President Joe Biden in a letter to strengthen the White House’s AI Bill of Rights as part of an upcoming executive order.

The AI Bill of Rights, which was published last October, emphasizes key values for the deployment of artificial intelligence, including privacy, protections against algorithmic discrimination, and explainability. But the principles don't currently carry the force of law.

Now, these lawmakers want Biden to order federal agencies to apply these principles when deploying their own AI operations. Agencies should have to consider these requirements when using AI that could have a significant impact on the public, in addition to adopting best practices, they wrote in the letter. 

The push comes amid ongoing concerns from tech policy researchers and digital rights advocates that the Biden administration's approach to artificial intelligence has lacked real measures for accountability and has instead relied on non-binding and voluntary commitments from companies.

“By turning the AI Bill of Rights from a non-binding statement of principles into federal policy, your Administration would send a clear message to both private actors and federal regulators: AI systems must be developed with guardrails,” the lawmakers wrote in the letter. “As a substantial purchaser, user, and regulator of AI tools, as well as a significant funder of state-level programs, the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era.”

Several AI initiatives are currently expected from the Biden administration, including the new executive order on AI and upcoming guidance from the Office of Management and Budget that will dictate rules for federal agency use of AI.

Right now, one of the primary AI initiatives that applies to federal agencies is Executive Order 13960, which ordered many federal agencies to inventory the AI tools at their disposal. The government is still working on fully complying with that order, as a major Stanford research paper and subsequent FedScoop reporting have noted. 

Regulating AI risk: Why we need to revamp the 'AI Bill of Rights' and lean on depoliticized third parties
https://fedscoop.com/regulating-ai-risk-why-we-need-to-revamp-the-ai-bill-of-rights-and-lean-on-depoliticized-third-parties/ | Thu, 31 Aug 2023

In an exclusive commentary, Arthur Maccabe argues that AI must be regulated, and that it shouldn't be the job of the federal government alone.

The AI debate has moved beyond doomsday prophecies to big questions about the technology's risks and how to regulate it effectively. As a rapidly evolving technology that will likely outpace the creation of comprehensive regulation, AI brings a new level of intricacy to an already complex regulatory landscape.

While AI has tremendous potential to increase efficiencies, create new types of job opportunities and enable innovative public-private partnerships, it’s important to regulate its risks. Threats to U.S. cybersecurity and national defense are a major concern, along with the risk of bias and the ability of these tools to spread disinformation quickly and effectively. Additionally, there is a need for increased transparency amidst the development and ongoing use of AI, especially with popular, widely deployed tools like ChatGPT.

Washington, D.C., is more focused on AI regulation than ever. The Biden administration recently announced the National Institute of Standards and Technology's (NIST) launch of a new AI public working group. Composed of experts from the private and public sectors, the group aims to better understand and tackle the risks of rapidly advancing generative AI. Additionally, Congress has held nearly a dozen hearings on AI since March.

While this momentum demonstrates progress, there is an urgent need to regulate AI as risks continue to emerge and other nations deploy their own AI regulation. Effectively regulating AI will first require the development of a regulatory framework created and upheld by a responsible and respected entity and produced with input from industry, academia and the federal government.  

Addressing biases through the federal government and academia 

This framework must address the potential biases of the technology and clearly articulate the rights of individuals and communities. The Blueprint for an AI 'Bill of Rights' developed by the Office of Science and Technology Policy (OSTP) is a good starting point. However, it doesn't tie back to the original Bill of Rights or the Privacy Act of 1974, which articulates the rights individuals have in protecting their personal data. Going forward, it will be important to explicitly note why an AI-specific version is needed. The government can contribute to the framework by creating a stronger foundation for an AI Bill of Rights that addresses AI biases, both implicit and explicit.

This regulatory framework should be motivated by potential risks to these rights. Regulations will need to be evaluated and updated regularly, as there can be unintended and unexpected consequences, as with the European Union's General Data Protection Regulation (GDPR). That regulation, meant to safeguard personal data, resulted in unintentionally high compliance costs that disproportionately impacted smaller businesses.

Academia’s commitment to scholarship, debate, and collaboration can also enable the formation of interdisciplinary teams to tackle AI system challenges. Fairness, for example, is a social construct; ensuring that a computational system is fair will require collaboration between social scientists and computer scientists. The emergence of generative AI systems like ChatGPT raises new questions about creation and learning, necessitating engagement from an even broader range of disciplines.

Why a regulatory framework alone won’t work 

Regulating AI shouldn’t just be the job of the federal government. The highly politicized legislative process is lengthy, which isn’t conducive to quickly evolving AI technology. Collaboration with industry, academia and professional societies is key to successfully deploying and enforcing AI regulation.

In Washington, D.C., previous attempts at AI regulation policy have been limited in scope and have ignited a debate about the federal government’s role. For example, the Algorithmic Accountability Act of 2022, which aimed to promote transparency and accountability in AI systems, was introduced in Congress but did not pass into law. While it did involve government oversight, it also encouraged industry self-regulation by giving companies flexibility in designing their own methods for conducting impact assessments. 

Additionally, Sen. Chuck Schumer, D-N.Y., recently introduced the Safe Innovation Framework for AI Policy to develop comprehensive legislation to regulate and advance AI development, and has questioned the federal government's role in AI regulation.

Third-party self-regulation is a key component 

There are existing models of self-regulation used in other industries that could work for AI to complement this legislative framework. For example, the financial industry has implemented self-regulatory processes through organizations like the National Futures Association to certify that the products developed by its licensed members are valid.  

Self-regulation in AI could include third-party certification of AI products by professional societies like the Association for Computing Machinery or the Institute of Electrical and Electronics Engineers. Professional societies draw members from academia and industry and can collaborate with government entities like NIST. They are also nimble enough to keep up with AI's rapid rate of change, helping to depolarize and depoliticize its regulation.

Additionally, establishing and reviewing regulations could be done through blue-ribbon panels organized by the National Academies, which should include participants from government, industry and academia, especially the social sciences and humanities.

Across the globe, the race is on to regulate AI, with the European Union already taking steps by releasing its regulatory framework. In the United States, elected officials in areas like New York City have passed laws on how companies can use AI in hiring and promotion.

When it comes to AI, we must move quickly to protect fundamental rights. Leveraging the expertise of academia and industry, and taking a risk-based approach with self-regulating entities, will be crucial. Now is the time to organize, evaluate and regulate AI.

Dr. Arthur Maccabe is the executive director of the Institute for Computation and Data-Enabled Insight (ICDI) at the University of Arizona. Prior to this, he was the computer science and mathematics division director at Oak Ridge National Laboratory (ORNL), where he was responsible for fundamental research enabling, and enabled by, the nation's leadership-class petascale computing capabilities, and he was co-author of the U.S. Department of Energy's roadmap for intelligent computing. Before that, he spent 26 years teaching computer science and serving as chief information officer at the University of New Mexico, and was instrumental in developing the high-performance computing capabilities at Sandia National Laboratories.

Experts warn of 'contradictions' in Biden administration's top AI policy documents
https://fedscoop.com/experts-warn-of-contradictions-in-biden-administrations-top-ai-policy-documents/ | Wed, 23 Aug 2023

AI policy specialists say a lack of guidance from the White House on how to square divergent rights-based and risk-based approaches to AI is proving a challenge for companies working to create new products and safeguards.

The Biden administration’s cornerstone artificial intelligence policy documents, released in the past year, are inherently contradictory and provide confusing guidance for tech companies working to develop innovative products and the necessary safeguards around them, leading AI experts have warned.

Speaking with FedScoop, five AI policy experts said adhering to both the White House’s Blueprint for an AI ‘Bill of Rights’ and the AI Risk Management Framework (RMF), published by the National Institute of Standards and Technology, presents an obstacle for companies working to develop responsible AI products.

However, the White House and civil rights groups have pushed back on claims that the two voluntary AI safety frameworks send conflicting messages and have highlighted that they are a productive “starting point” in the absence of congressional action on AI. 

The two policy documents form the foundation of the Biden administration’s approach to regulating artificial intelligence. But for many months, there has been an active debate among AI experts regarding how helpful — or in some cases hindering — the Biden administration’s dual approach to AI policymaking has been.

The White House's Blueprint for an AI 'Bill of Rights' was published last October. It takes a rights-based approach to AI, using broad fundamental human rights as the starting point for regulating the technology. It was followed in January by the risk-based AI RMF, which set out to determine the scale and scope of risks tied to concrete use cases and recognized threats as a way to instill trustworthiness in the technology.

Speaking with FedScoop, Daniel Castro, a technology policy scholar and vice president at the Information Technology and Innovation Foundation (ITIF), noted that there are “big, major philosophical differences in the approach taken by the two Biden AI policy documents,” which are creating “different [and] at times adverse” outcomes for the industry.

“A lot of companies that want to move forward with AI guidelines and frameworks want to be doing the right thing but they really need more clarity. They will not invest in AI safety if it’s confusing or going to be a wasted effort or if instead of the NIST AI framework they’re pushed towards the AI blueprint,” Castro said.

Castro’s thoughts were echoed by Adam Thierer of the libertarian nonprofit R Street Institute who said that despite a sincere attempt to emphasize democratic values within AI tools, there are “serious issues” with the Biden administration’s handling of AI policy driven by tensions between the two key AI frameworks.

“The Biden administration is trying to see how far it can get away with using their bully pulpit and jawboning tactics to get companies and agencies to follow their AI policies, particularly with the blueprint,” Thierer, senior fellow on the Technology and Innovation team at R Street, told FedScoop.

Two industry sources who spoke with FedScoop but wished to remain anonymous said they felt pushed toward the White House's AI blueprint over the NIST AI framework in certain instances during meetings on AI policymaking with the White House's Office of Science and Technology Policy (OSTP).

Rep. Frank Lucas, R-Okla., chair of the House Science, Space and Technology Committee, and House Oversight Chairman Rep. James Comer, R-Ky., have been highly critical of the White House blueprint as it compares to the NIST AI Risk Management Framework, expressing concern earlier this year that the blueprint sends “conflicting messages about U.S. federal AI policy.”

In a letter obtained exclusively by FedScoop, OSTP Director Arati Prabhakar responded to those concerns, arguing that "these documents are not contradictory" and highlighting how closely the White House and NIST are working together on future regulation of the technology.

At the same time, some industry AI experts say the two documents' definitions of AI clash with one another.

Nicole Foster, who leads global AI and machine learning policy at Amazon Web Services, said chief among the concerns with the documents are diverging definitions of the technology itself. She told FedScoop earlier this year that “there are some inconsistencies between the two documents for sure. I think just at a basic level they don’t even define things like AI in the same way.”

Foster’s thoughts were echoed by Raj Iyer, global head of public sector at cloud software provider ServiceNow and former CIO of the U.S. Army, who believes the two frameworks are a good starting point to get industry engaged in AI policymaking but that they lack clarity.

“I feel like the two frameworks are complementary. But there’s clearly some ambiguity and vagueness in terms of definition,” said Iyer.

“So what does the White House mean by automated systems? Is it autonomous systems? Is it automated decision-making? What is it? I think it’s very clear that they did that to kind of steer away from wanting to have a direct conversation on AI,” Iyer added.

Hodan Omaar, an AI and quantum research scholar working with Castro at ITIF, said the two documents appear to members of the tech industry as if they are on different tracks. According to Omaar, the divergence creates a risk that organizations will simply defer to either the “Bill of Rights” or the NIST RMF and ignore the other.

“There are two things the White House should be doing. First, it should better elucidate the ways the Blueprint should be used in conjunction with the RMF. And second, it should better engage with stakeholders to gather input on how the Blueprint can be improved and better implemented by organizations,” Omaar told FedScoop.

In addition to compatibility concerns about the two documents, experts have also raised concerns about the process followed by the White House to take industry feedback in creating the documents.

Speaking with FedScoop anonymously in order to speak freely, one industry association AI official said that listening sessions held by the Office of Science and Technology Policy were not productive.

“The Bill of Rights and the development of that, we have quite a bit of concern because businesses were not properly consulted throughout that process,” the association official said. 

The official added: “OSTP’s listening sessions were just not productive or helpful. We tried to actually provide input in ways in which businesses could help them through this process. Sadly, that’s just not what they wanted.”

The AI experts’ comments come as the Biden administration works to establish a regulatory framework that mitigates potential threats posed by the technology while supporting American AI innovation. Last month, the White House secured voluntary commitments from seven leading AI companies about how AI is used, and it is expected to issue a new executive order on AI safety in the coming weeks.

One of the contributors to the White House’s AI Blueprint sympathizes with concerns from industry leaders and AI experts regarding the confusion and complexity of the administration’s approach to AI policymaking. But it’s also an opportunity for companies seeking voluntary AI policymaking guidance to put more effort into asking themselves hard questions, he said.

“So I understand the concerns very much. And I feel the frustration. And I understand people just want clarity. But clarity will only come once you understand the implications, the broader values, discussion and the issues in the context of your own AI creations,” said Suresh Venkatasubramanian, a Brown University professor and former top official within the White House’s OSTP, where he helped co-author its Blueprint for an ‘AI Bill of Rights.’ 

“The goal is not to say: Do every single thing in these frameworks. It’s like, understand the issues, understand the values at play here. Understand the questions you need to be asking from the RMF and the Blueprint, and then make your own decisions,” said Venkatasubramanian.

On top of that, the White House Blueprint co-author wants those who criticize the documents’ perceived contradictions to be more specific in their complaints.

“Tell me a question in the NIST RMF that contradicts a broader goal in the White House blueprint — find one for me, or two or three. I’m not saying this because I think they don’t exist. I’m saying this because if you could come up with these examples, then we could think through what can we do about it?” he said.

Venkatasubramanian added that he feels the White House AI blueprint in particular has faced resistance from industry because “for the first time someone in a position of power came out and said: What about the people?” when it comes to tech innovation and regulations. 

Civil rights groups like the Electronic Privacy Information Center have also joined the greater discussion about AI regulations, pushing back on the notion that industry groups should play any significant role in the policymaking of a rights-based document created by the White House.

“I’m sorry that industry is upset that a policy document is not reflective of their incentives, which is just to make money and take people’s data and make whatever decisions they want to make more contracts. It’s a policy document, they don’t get to write it,” said Ben Winters, the senior counsel at EPIC, where he leads their work on AI and human rights.

Groups like EPIC and a number of others have called upon the Biden administration to take more aggressive steps to protect the public from the potential harms of AI.

“I actually don’t think that the Biden administration has taken a super aggressive role when trying to implement these two frameworks and policies that the administration has set forth. When it comes to using the frameworks for any use of AI within the government or federal contractors or recipients of federal funds, they’re not doing enough in terms of using their bully pulpit and applying pressure. I really don’t think they’re doing too much yet,” said Winters.

Meanwhile, the White House has maintained that the two AI documents were created for different purposes but designed to be used side-by-side as initial voluntary guidance, noting that both OSTP and NIST were involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the administration has moved with urgency to advance responsible innovation that manage the risks posed by AI and seize its promise — including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward and build on the administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

NIST did not respond to requests for comment.

White House science adviser defends 'conflicting' AI frameworks released by Biden admin
https://fedscoop.com/arati-prabhakar-ai-bill-of-rights-rmf-conflicting-definitions/ | Wed, 02 Aug 2023

Arati Prabhakar said the White House AI Blueprint and the NIST AI framework "are not contradictory," in response to queries from House lawmakers.

The Biden administration’s AI ‘Bill of Rights’ Blueprint and the NIST AI Risk Management Framework do not send conflicting messages to federal agencies and private sector companies attempting to implement the two AI safety frameworks within their internal systems, according to the director of the White House Office of Science and Technology Policy.

In a letter obtained exclusively by FedScoop, Arati Prabhakar responded to concerns raised by senior House lawmakers on the House Science, Space and Technology Committee and the House Oversight Committee over apparent contradictions in definitions of AI used in the documents.

"These documents are not contradictory. For example, in terms of the definition of AI, the Blueprint does not adopt a definition of AI, but instead focuses on the broader set of 'automated systems,'" Prabhakar wrote in the letter, sent a few months ago to House Science Chairman Frank Lucas, R-Okla., and Oversight Chairman James Comer, R-Ky.

“Furthermore, both the AI RMF and the Blueprint propose that meaningful access to an AI system for evaluation should incorporate measures to protect intellectual property law,” Prabhakar added.

In the letter, Prabhakar also described the “critical roles” both documents play in managing risks from AI and automated systems, and said they illustrate how closely the White House and NIST are working together on future regulation of the technology.

The two Republican leaders sent a letter in January to the OSTP director voicing concern that the White House’s AI ‘Bill of Rights’ blueprint document is sending “conflicting messages about U.S. federal AI policy.”

Chairman Lucas and Chairman Comer were highly critical of the White House blueprint as it compares with the NIST AI risk management framework.

Prabhakar in her letter also noted the close partnership between NIST and OSTP regarding AI policymaking and the high engagement both entities have had with relevant stakeholders within industry and civil society in crafting AI policy.

She also highlighted that the AI ‘Bill of Rights’ document recognizes the need to protect technology companies’ intellectual property. Although it calls for the use of confidentiality waivers for designers, developers and deployers of automated systems, it says that such waivers should incorporate “measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate.”

Commerce Secretary Gina Raimondo said in April that NIST's AI framework represents the "gold standard" for the regulatory guidance of AI technology, and the framework has also been popular with the tech industry.

This came after the Biden administration in October 2022 published its AI 'Bill of Rights' Blueprint, which consists of five key principles for regulating the technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

Chairman Lucas and Chairman Comer’s engagement with OSTP earlier this year regarding conflicting messages being sent by the Biden administration on AI policy followed concerns expressed by industry and academia about varying definitions within the two documents and how they relate to the definitions used by other federal government agencies.

While they are both non-binding, AI experts and lawmakers have warned about the chilling effect that lack of specificity within framework documents could have on innovation both inside government and across the private sector.

“We’re at a critical juncture with the development of AI and it’s crucial we get this right. We need to give companies useful tools so that AI is developed in a trustworthy fashion, and we need to make sure we’re empowering American businesses to stay at the cutting edge of this competitive industry,” Chairman Lucas said in a statement to FedScoop.

“That’s why our National AI Initiative called for a NIST Risk Management Framework. Any discrepancies between that guidance and other White House documents can create confusion for industry. We can’t afford that because it will reduce our ability to develop and deploy safe, trustworthy, and reliable AI technologies,” he added.

Meanwhile, the White House has repeatedly said the two AI documents were created for different purposes but designed to be used side-by-side and noted that both the executive branch and the Department of Commerce had been involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the Administration has moved with urgency to advance responsible innovation that manage the risks posed by AI and seize its promise—including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward, and build on the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The Administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

Editor’s note, 8/2/23: This story was updated to add further context about NIST’s AI Risk Management Framework and prior concerns raised by AI experts.

Reps. Buck and Lieu: AI regulation must reduce risk without sacrificing innovation
https://fedscoop.com/reps-buck-and-lieu-ai-regulation-must-reduce-risk-without-sacrificing-innovation/ | Wed, 05 Jul 2023

In interviews with FedScoop, the congressional AI leaders share their unique and at times contrasting visions for regulation of the technology.

Two leading congressional AI proponents, Rep. Ted Lieu, a California Democrat, and Rep. Ken Buck, a Colorado Republican, are working to boost the federal government’s ability to foster AI innovation through increased funding and competition while also reducing major risks associated with the technology.

Last week, each lawmaker shared with FedScoop his own vision for how Congress and the federal government should approach AI in the coming months, with Lieu criticizing parts of the European Union's proposed AI Act and Buck taking a shot at the White House's AI Bill of Rights blueprint.

Buck and Lieu recently worked together to introduce a bill that would create a blue-ribbon commission on AI to develop a comprehensive framework for regulating the emerging technology, and earlier this year they introduced a bipartisan bill to prevent AI from making nuclear launch decisions.

The bicameral National AI Commission Act would create a 20-member commission to explore AI regulation, including how regulation responsibility is distributed across agencies, the capacity of agencies to address challenges relating to regulation, and alignment among agencies in their enforcement actions. 

The AI Commission bill is one of several potential solutions for regulating the technology proposed by lawmakers, including Senate Majority Leader Chuck Schumer, who recently introduced a plan to develop comprehensive legislation in Congress to regulate and advance the development of AI in the U.S.

Buck said he would like to see “experts studying AI from trusted groups like the Bull Moose project and other think tanks, including American Compass,” to be a part of the AI commission. 

Buck and Lieu are both strongly focused on ensuring that Congress and the federal government allow AI companies and their tools to keep innovating, so the U.S. stays ahead of adversaries like China, while making sure any harms caused by the technology are understood and mitigated.

With respect to increasing and supporting AI innovation in the U.S., Lieu said he is currently pushing for more funding within the congressional appropriations process for AI safety, research and innovation, which the federal government would disburse to qualified entities and institutions.

“I would like to see more funding from the government to research centers that create AI and to have different grants available for people who want to work on AI safety and AI risks and AI innovation,” said Lieu, who is a member of the House Artificial Intelligence Caucus and one of three members of Congress with a computer science degree.

Buck, on the other hand, highlighted that one of the keys to encouraging AI innovation is the government ensuring that "we don't have a single controlling entity, that we have dispersed AI competition," in order to "make sure that we don't have a Google in the AI space. I don't mean Google specifically, but I mean, I want to make sure we have five or six major generative AI competitors in the space," he said.

For the past two years, Buck was the top Republican on the powerful House antitrust subcommittee and has played a key role in forging a bipartisan agreement in Congress that would rein in Big Tech companies such as Google, Amazon, Facebook and Apple for anticompetitive activities.

Buck also said he's not in favor of OpenAI CEO Sam Altman's key proposal for regulating the technology, which calls for the creation of a new federal agency to license and regulate large AI models. Altman floated that proposal, along with other legislative ideas, during congressional testimony in May.

“I’m not in favor of one agency with one commission, because it’s too easy to be captured by an outside group. So I think dispersing that oversight within the government is important,” Buck told FedScoop during an interview in his Congressional office on Capitol Hill. 


Tech giant Google has also pushed the federal government to divide up oversight of AI tools across agencies rather than creating a single regulator focused on the technology, in contrast with rivals like Microsoft and OpenAI. 

Kent Walker, Google’s president of global affairs, told the Washington Post in June that he was in favor of a “hub-and-spoke model” of federal regulations that he argued is better suited to deal with how AI is affecting U.S. economy than the “one-size-fits-all approach” of creating a single agency devoted to the issue.

When asked which AI regulatory framework he supports, Buck said the main frameworks currently being debated in Washington, including the National Institute of Standards and Technology's (NIST) voluntary AI Risk Management Framework, the White House's AI Bill of Rights Blueprint and the EU's proposed AI Act, all have "salvageable items."

[Photo: Rep. Ken Buck, R-Colo., questions U.S. Attorney General William Barr during a House Judiciary Committee hearing on Capitol Hill on July 28, 2020, in Washington, D.C. (Chip Somodevilla / Getty Images)]

However, Buck added that the White House's AI Bill of Rights "has some woke items that won't find support across partisan lines," indicating Republicans will push back against parts of the blueprint document, which consists of five key principles for the regulation of AI technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

On the other hand, Lieu, a Democrat, is strongly in favor of the White House’s AI blueprint which is intended to address concerns that unfettered use of AI in certain scenarios may cause discrimination against minority groups and further systemic inequality.

“The biggest area of AI use with the government [of concern] would be AI that has some sort of societal harm, such as discrimination against certain groups. Facial recognition technology that is less accurate for people with darker skin, I think we have to put some guardrails on that,” Lieu told FedScoop during a phone interview last week.  

“I am concerned with any AI model that could lead to systematic discrimination against a certain group of people, whether that’s in facial recognition or loan approval,” Lieu said.


Lieu added that the federal government should be focused on regulating or curtailing AI that could be used to hack or cyberattack institutions and companies and how to mitigate such dangerous activity. 

In a paper examining Codex, the OpenAI code-writing model that powers GitHub's Copilot assistant, OpenAI researchers observed that the model "can produce vulnerable or misaligned code" and could be "misused to aid cybercrime." The researchers added that while "future code generation models may be able to be trained to produce more secure code than the average developer," getting there "is far from certain."

Lieu also said that AI "can be very good at spreading disinformation and microtargeting people with misinformation," which needs to be addressed, and he highlighted that AI will cause "there to be disruption in the labor force. And we need to think about how we're going to mitigate that kind of disruption."

Alongside the White House's AI blueprint, Lieu said he was strongly in favor of the voluntary NIST AI regulatory framework, which is focused on helping the private sector and eventually federal agencies build responsible AI systems centered on four key functions: govern, map, measure and manage.

However, Lieu took issue with parts of the EU's AI Act, which was proposed earlier this year and is currently being debated, and which, unlike the White House AI Blueprint and the NIST AI framework, would be legally binding on all entities.

“My understanding is that the EU AI Act has provisions in it that for example, would prevent or dissuade AI from analyzing human emotions. I think that’s just really stupid,” Lieu told FedScoop during the interview.  

“Because one of the ways humans communicate is through emotions. And I don’t understand why you would want to prevent AI from getting the full communications of the individual if the interviewer chooses to communicate that to the AI,” Lieu added.

G7 nations agree on need for 'risk-based' approach to AI regulation
https://fedscoop.com/g7-nations-agree-on-need-for-risk-based-approach-to-ai-regulation/ | Mon, 01 May 2023

The joint declaration sets out the need for a risk-based approach to regulating AI technology.

Countries within the Group of Seven political forum have signed a declaration agreeing on the need for “risk-based” AI regulations.

Top technology officials from Britain, Canada, the EU, France, Germany, Italy, Japan and the United States on Sunday signed the joint statement, which seeks to establish parameters for how major countries govern the technology.

The statement said: “We reaffirm that AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data.”

It added: “We also reassert that AI policies and regulations should be risk-based and forward-looking to preserve an open and enabling environment for AI development and deployment that maximizes the benefit of the technology for people and the planet while mitigating its risks.”

The reference in the document to a risk-based approach to regulating AI follows the publication of NIST's AI risk management framework in January, which sought to establish some "rules of the road" for the use of the technology by government and the private sector in the United States.

The G7 declaration also comes as the use of AI technology receives increased public attention following the launch of new mainstream tools, including OpenAI's ChatGPT, which the federal government and Congress have started considering for internal use.

In the U.S., Commerce Secretary Gina Raimondo last week called NIST's AI Risk Management Framework (AI RMF), first released in January, the "gold standard" for the regulatory guidance of AI technology.

However, NIST's AI framework and the G7 agreement contrast in some ways with the foundational rights-based framework laid out in the White House's October 2022 Blueprint for an AI 'Bill of Rights,' which some AI experts have advocated as a model for AI regulations going forward.

Commerce's NTIA launches trustworthy AI inquiry
https://fedscoop.com/commerces-ntia-launches-trustworthy-ai-inquiry/ | Tue, 11 Apr 2023

The National Telecommunications and Information Administration has issued a request for comment on how government agencies should audit AI technology.

The National Telecommunications and Information Administration has launched an inquiry that will examine how companies and regulators can ensure artificial intelligence tools are trustworthy and work without causing harm.

Assistant Secretary of Commerce Alan Davidson announced the new initiative at an event at the University of Pittsburgh’s Institute of Cyber Law, Policy and Security.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” Davidson said.

He added: “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

As part of the exercise, which is focused on determining how the federal government can effectively regulate the evolving technology, NTIA is seeking evidence on what policies can support the development of AI audits, assessments, certifications and other mechanisms to “create earned trust in AI systems.”

The Department of Commerce agency has issued a request for comment to seek feedback from a range of parties across industry and academia.

According to NTIA, insights collected through the request for comment will inform the Biden administration’s work to establish a joined-up regulatory framework for the technology.

Respondents have 60 days to submit comments following the publication of the request for comment in the Federal Register, and can do so by following the instructions in the notice.

The launch of NTIA’s inquiry follows the publication of a voluntary AI Risk Management Framework, which was issued in January by the National Institute of Standards and Technology.

That initial guidance document set out four key functions that NIST says are key to building responsible AI systems: govern, map, measure and manage.

NIST’s AI framework document followed the Biden administration’s AI ‘Bill of Rights’, which was published in October and sought to address the potential discriminatory effects of certain AI technology.

That blueprint document contained five key principles for the regulation of the technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

Deirdre Mulligan appointed White House deputy chief technology officer for policy
https://fedscoop.com/white-house-names-deirdre-mulligan-deputy-chief-technology-officer-for-policy/ | Tue, 28 Feb 2023

Deirdre Mulligan is a professor in the School of Information at UC Berkeley and a faculty director of the Berkeley Center for Law and Technology.

The Biden administration has appointed UC Berkeley professor Deirdre Mulligan as deputy United States chief technology officer for policy.

In the role, she will work to ensure that U.S. government policy is informed by tech and data expertise and will also act as a principal adviser to the National AI Initiative Office. Mulligan takes over from Lynne Parker, who last year stepped down from the post to return to academia.

While serving in the White House, she will be on leave from UC Berkeley, and in the new role will draw on her academic research, which focuses on how regulatory choices shape privacy and online content moderation practices and definitions of emerging “responsible AI” practices.

At Berkeley, she is a professor in the School of Information and a faculty director of the Berkeley Center for Law and Technology.

“I’m excited to bring the insights I’ve garnered through my interdisciplinary research and my decades of experience working on internet policy issues to assist the Biden Administration in advancing the privacy and equity priorities set out in the Blueprint for an AI Bill of Rights,” Mulligan said in a statement.

The National AI Initiative Office, launched in January 2021 under President Donald Trump, is responsible for coordinating artificial intelligence research and policymaking across government, industry and academia. It is focused on implementing a national AI strategy as directed by the National Defense Authorization Act of 2021 to increase research investment, improve access to computing and data resources, set technical standards, build a workforce, and engage with allies.

White House OSTP chief Alondra Nelson to step down
https://fedscoop.com/white-house-ostp-chief-alondra-nelson-to-depart/ | Mon, 06 Feb 2023

She has led the White House Office of Science and Technology Policy since the resignation of Eric Lander last year.

Alondra Nelson, who led the White House Office of Science and Technology Policy during a challenging period, is leaving government after two years to return to her faculty position at the Institute for Advanced Study in Princeton, New Jersey. 

Nelson initially led the OSTP Science and Society team, which had been newly created under President Joe Biden, and then led all of OSTP for eight months after Eric Lander resigned last February.

“We have landed some really big planes over these two years, and we’re in really good shape,” Nelson told Axios. “It’s a good moment to step away with some work launched that’s on the way to becoming implemented, and leave that work for others to do.”

Nelson, the first Black person and first woman of color to lead OSTP, directed the White House's work on artificial intelligence, including the Blueprint for an AI Bill of Rights, helped roll out more rigorous scientific research standards for crafting federal policies, and boosted STEM programs.

"The space of automated systems and AI policy moves very quickly, and we really can't be on the sidelines," she said. The availability of ChatGPT and other generative AI programs to the public "is probably going to be a real shift in how people engage with technology and their day-to-day lives."

Nelson will return to being a professor in the School of Social Science at IAS. She previously served on the faculties of Yale and Columbia universities.

Nelson steps down from her position on Feb. 10. Details of her replacement at OSTP were not immediately available. 
