Regulating AI risk: Why we need to revamp the ‘AI Bill of Rights’ and lean on depoliticized third parties

In an exclusive commentary, Arthur Maccabe argues that AI must be regulated, and that it shouldn't be the job of the federal government alone.

The AI debate has moved on from doomsday prophecies to big questions about the technology's risks and how to regulate it effectively. Because AI is evolving faster than comprehensive regulation can be written, it brings a new level of intricacy to an already complex regulatory landscape.

While AI has tremendous potential to increase efficiencies, create new types of job opportunities and enable innovative public-private partnerships, it's important to regulate its risks. Threats to U.S. cybersecurity and national defense are a major concern, along with the risk of bias and the ability of these tools to spread disinformation quickly and effectively. There is also a need for greater transparency in the development and ongoing use of AI, especially for popular, widely deployed tools like ChatGPT.

Washington, D.C., is more focused on AI regulation than ever. The Biden administration recently announced the launch of a new public working group on AI at the National Institute of Standards and Technology (NIST). Composed of experts from the private and public sectors, the group aims to better understand and tackle the risks of rapidly advancing generative AI. Additionally, Congress has held nearly a dozen hearings on AI since March.

While this momentum demonstrates progress, the need to regulate AI is urgent as risks continue to emerge and other nations advance their own AI regulations. Effectively regulating AI will first require a regulatory framework created and upheld by a responsible, respected entity and produced with input from industry, academia and the federal government.

Addressing biases through the federal government and academia 

This framework must address the potential biases of the technology and clearly articulate the rights of individuals and communities. The Blueprint for an AI Bill of Rights developed by the Office of Science and Technology Policy (OSTP) is a good starting point. However, it doesn't tie back to the original Bill of Rights or to the Privacy Act of 1974, which articulates the rights individuals have to protect their personal data. Going forward, it will be important to explicitly note why an AI-specific version is needed. The government can contribute to the framework by building a stronger foundation for an AI Bill of Rights, one that addresses both implicit and explicit AI biases.

This regulatory framework should be motivated by potential risks to these rights. Regulations will need to be evaluated and updated regularly, as they can have unintended and unexpected consequences. The European Union's General Data Protection Regulation (GDPR), for example, was designed to safeguard personal data but produced unintentionally high compliance costs that disproportionately impacted smaller businesses.

Academia’s commitment to scholarship, debate, and collaboration can also enable the formation of interdisciplinary teams to tackle AI system challenges. Fairness, for example, is a social construct; ensuring that a computational system is fair will require collaboration between social scientists and computer scientists. The emergence of generative AI systems like ChatGPT raises new questions about creation and learning, necessitating engagement from an even broader range of disciplines.

Why a regulatory framework alone won’t work 

Regulating AI shouldn't just be the job of the federal government. The legislative process is lengthy and highly politicized, a poor fit for a technology evolving as quickly as AI. Collaboration with industry, academia and professional societies is key to successfully deploying and enforcing AI regulation.

In Washington, D.C., previous attempts at AI regulation have been limited in scope and have ignited a debate about the federal government's role. For example, the Algorithmic Accountability Act of 2022, which aimed to promote transparency and accountability in AI systems, was introduced in Congress but never passed into law. While it included government oversight, it also encouraged industry self-regulation by giving companies flexibility in designing their own methods for conducting impact assessments.

Additionally, Sen. Chuck Schumer, D-N.Y., recently introduced the SAFE Innovation Framework for AI policy, which seeks to develop comprehensive legislation to regulate and advance AI development, and it too raises questions about the federal government's role in AI regulation.

Third-party self-regulation is a key component 

Existing models of self-regulation in other industries could complement this legislative framework. The financial industry, for example, has implemented self-regulatory processes through organizations like the National Futures Association, which certifies that products developed by its licensed members meet industry standards.

Self-regulation in AI could include third-party certification of AI products by professional societies like the Association for Computing Machinery or the Institute of Electrical and Electronics Engineers. These societies draw members from both academia and industry and can collaborate with government entities like NIST. They are also nimble enough to keep up with the rapid rate of change, helping to depolarize and depoliticize AI regulation.

Additionally, establishing and reviewing regulations could be done through blue-ribbon panels organized by the National Academies, with participants from government, industry and academia, especially the social sciences and humanities.

Across the globe, the race is on to regulate AI, with the European Union already taking steps by releasing its regulatory framework. In the United States, elected officials in places like New York City have passed laws governing how companies can use AI in hiring and promotion.

When it comes to AI, we must move quickly to protect fundamental rights. Leveraging expertise from academia and industry, and taking a risk-based approach with self-regulating entities, will be crucial. Now is the time to organize, evaluate and regulate AI.

Dr. Arthur Maccabe is the executive director of the Institute for Computation and Data-Enabled Insight (ICDI) at the University of Arizona. Previously, he was the computer science and mathematics division director at Oak Ridge National Laboratory (ORNL), where he was responsible for fundamental research enabling, and enabled by, the nation's leadership-class petascale computing capabilities, and he co-authored the U.S. Department of Energy's roadmap for intelligent computing. Before that, he spent 26 years teaching computer science and serving as chief information officer at the University of New Mexico, and he was instrumental in developing high-performance computing capabilities at Sandia National Laboratory.
