National AI Advisory Committee (NAIAC) Archives | FedScoop https://fedscoop.com/tag/national-ai-advisory-committee-naiac/ FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, and to exchange best practices and identify how to achieve common goals.

How often do law enforcement agencies use high-risk AI? Presidential advisers want answers https://fedscoop.com/summary-reports-facial-recognition-high-risk-ai/ Tue, 30 Apr 2024 16:50:53 +0000 https://fedscoop.com/?p=77809 A national AI advisory group will recommend this week that law enforcement agencies be required to create and publish annual summary usage reports for facial recognition and other AI tools of that kind.

A visit with the Miami Police Department by a group of advisers to the president on artificial intelligence may ultimately inform how federal law enforcement agencies are required to report their use of facial recognition and other AI tools of that kind.

During a trip to South Florida earlier this year, Law Enforcement Subcommittee members on the National AI Advisory Committee asked MPD leaders how many times they used facial recognition software in a given year. The answer they got was “around 40.” 

“That just really changes the impression, right? It’s not like everyone’s being tracked everywhere,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “On the other hand, we can imagine that there could be a technology that seems relatively low-risk, but based on how often it’s used … the public understanding of it should change.” 

Based in part on that Miami fact-finding mission, Bambauer’s subcommittee on Thursday will recommend to the full NAIAC body that federal law enforcement agencies be required to create and publish yearly summary usage reports for safety- or rights-impacting AI. Those reports would be included in each agency’s AI use case inventory, in accordance with Office of Management and Budget guidance finalized in March.

Bambauer said NAIAC’s Law Enforcement Subcommittee, which also advises the White House’s National AI Initiative Office, came to the realization that simply listing certain types of AI tools in agency use case inventories “doesn’t tell us much about the scope” or the “quality of its use in a real-world way.” 

“If we knew an agency, for example, was using facial recognition, some observers would speculate that it’s a fundamental shift into a sort of surveillance state, where our movements will be tracked everywhere we go,” Bambauer said. “And others said, ‘Well, no, it’s not to be used that often, only when the circumstances are consistent … with the use limitations.’” 

The draft recommendation calls on federal law enforcement agencies to include in their annual usage reports a description of the technology and the number of times it has been used that year, as well as the purpose of the tool and how many people used it. The report would also include total annual costs for the tool and detail when it was used on behalf of other agencies. 

The subcommittee had previously tabled discussions of the public summary reporting requirement for the use of high-risk AI, but after some refinement brought it back into conversation during an April 5 public meeting of the group. 

Anthony Bak, head of AI at Palantir, said during that meeting that the goal of the recommendation was to make “the production” of those summary statistics a “very low lift for the agencies that are using AI.” Internal IT systems that track AI use cases within law enforcement agencies “should be able to produce these statistics very easily,” he added.

Beyond the recommendation’s topline takeaway on reporting the frequency of AI usage, Bak said the proposed rule would also provide law enforcement agencies with a “gut check for AI use case policy adherence.”

If an agency says they’re using an AI tool “only for certain kinds of crimes” and then they’re reporting for a “much broader category of crimes, you can check that very quickly and easily with these kinds of summary statistics,” Bak said.

Benji Hutchinson, a subcommittee member and chief revenue officer of Rank One Computing, said that from a commercial and technical perspective, it wouldn’t be an “overly complex task” to produce these summary reports. The challenges would come in coordination and standardization.

“Being able to make sure that we have a standard approach to how the systems are built and implemented is always the tough thing,” Hutchinson said. “Because there’s just so many layers to state and local and federal government and how they share their data. And there’s all sorts of different MOUs in place and challenges associated with that.”

The subcommittee seemingly aimed to address the standardization issue by noting in its draft that summary statistics “should include counts by type of case or investigation” according to definitions spelled out in the Uniform Crime Reporting Program’s National Incident-Based Reporting System. Data submitted to NIBRS — which includes victim details, known offenders, relationships between offenders and victims, arrestees, and property involved in crimes — would be paired with information on the source of the image and the person conducting the search.

The Law Enforcement Subcommittee plans to deliver two other recommendations to NAIAC members Thursday: The first is the promotion of a checklist for law enforcement agencies “to test the performance of an AI tool before it is fully adopted and integrated into normal use,” per a draft document, and the second encourages the federal government to invest in the development of statewide repositories of body-worn camera footage that can be accessed and analyzed by academic researchers.

Those recommendations serve as a continuation of the work that the Law Enforcement Subcommittee has prioritized this year. During February’s NAIAC meeting, Bambauer delivered recommendations to amend Federal CIO Council guidance on sensitive use case and common commercial product exclusions from agency inventories. Annual summary usage reports for safety- and rights-impacting AI align with an overarching goal to create more comprehensive use case inventories. 

“We want to sort of prompt a public accounting of whether the use actually seems to be in line with expectations,” Bambauer said.

AI advisory committee wants law enforcement agencies to rethink use case inventory exclusions https://fedscoop.com/ai-advisory-law-enforcement-use-case-recommendations/ Wed, 28 Feb 2024 18:12:01 +0000 https://fedscoop.com/?p=76246 The National AI Advisory Committee’s Law Enforcement Subcommittee voted unanimously to edit CIO Council recommendations on sensitive use case and common commercial product exclusions, moves intended to broaden law enforcement agency inventories.

There’s little debate that facial recognition and automated license plate readers are forms of artificial intelligence used by police. So the omission of those technologies from the Department of Justice’s AI use case inventory late last year came as a surprise to a group of law enforcement experts charged with advising the president and the National AI Initiative Office on such matters.

“It just seemed to us that the law enforcement inventories were quite thin,” Farhang Heydari, a Law Enforcement Subcommittee member on the National AI Advisory Committee, said in an interview with FedScoop.

Though the DOJ and other federal law enforcement agencies in recent weeks made additions to their use case inventories — most notably with the FBI’s disclosure of Amazon’s image and video analysis software Rekognition — the NAIAC Law Enforcement Subcommittee wanted to get to the bottom of the initial exclusions. With that in mind, subcommittee members last week voted unanimously in favor of edits to two recommendations governing excluded AI use cases in Federal CIO Council guidance.

The goal in delivering updated recommendations, committee members said, is to clarify the interpretations of those exemptions, ensuring more comprehensive inventories from federal law enforcement agencies.

“I think it’s important for all sorts of agencies whose work affects the rights and safety of the public,” said Heydari, a Vanderbilt University law professor who researches policing technologies and AI’s impact on the criminal justice system. “The use case inventories play a central role in the administration’s trustworthy AI practices — the foundation of trustworthy AI is being transparent about what you’re using and how you’re using it. And these inventories are supposed to guide that.” 

Office of Management and Budget guidance issued last November called for additional information from agencies on safety- or rights-impacting uses — an addendum especially relevant to law enforcement agencies like the DOJ. 

That guidance intersected neatly with the NAIAC subcommittee’s first AI use case recommendation, which permitted agencies to “exclude sensitive AI use cases,” defined by the Federal CIO Council as those “that cannot be released practically or consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law-enforcement, national security, and other protected interests.”

Subcommittee members said during last week’s meeting that they’d like the CIO Council to go back to the drawing board and make a narrower recommendation, with more specificity around what it means for a use case to be sensitive. Every law enforcement use of AI “should begin with a strong presumption in favor of public disclosure,” the subcommittee said, with exceptions limited to information “that either would substantially undermine ongoing investigations or would put officers or members of the public at risk.”

“If a law enforcement agency wants to use this exception, they have to basically get clearance from the chief AI officer in their unit,” Jane Bambauer, NAIAC’s Law Enforcement Subcommittee chair and a University of Florida law professor, said in an interview with FedScoop. “And they have to document the reason that the technology is so sensitive that even its use at all would compromise something very important.”

It’s no surprise that law enforcement agencies use technologies like facial or gait recognition, Heydari added, making the initial omissions all the more puzzling. 

“We don’t need to know all the details, if it were to jeopardize some kind of ongoing investigation or security measures,” Heydari said. “But it’s kind of hard to believe that just mentioning that fact, which, you know, most people would probably guess on their own, is really sensitive.”

While gray areas may still exist when agencies assess sensitive AI use cases, the second AI use case exclusion targeted by the Law Enforcement Subcommittee appears more cut-and-dried. The CIO Council’s exemption for agency usage of “AI embedded within common commercial products, such as word processors or map navigation systems” resulted in technologies such as automated license plate readers and voice spoofing often being left on the cutting-room floor.

Bambauer said very basic AI uses, such as autocomplete or some Microsoft Edge features, shouldn’t be included in inventories because they aren’t rights-impacting technologies. But common commercial AI products might not have been listed because they’re not “bespoke or customized programs.”

“If you’re just going out into the open market and buying something that [appears to be exempt] because nothing is particularly new about it, we understand that logic,” Bambauer said. “But it’s not actually consistent with the goal of inventory, which is to document not just what’s available, but to document what is actually a use. So we recommended a limitation of the exceptions so that the end result is that inventory is more comprehensive.”

Added Heydari: “The focus should be on the use, impacting people’s rights and safety. And if it is, potentially, then we don’t care if it’s a common commercial product — you should be listing it on your inventory.” 

A third recommendation from the subcommittee, which was unrelated to the CIO Council exclusions, calls on law enforcement agencies to adopt an AI use policy that would set limits on when the technology can be used and by whom, as well as who outside the agency could access related data. The recommendation also includes several oversight mechanisms governing an agency’s use of AI.

After the subcommittee agrees on its final edits, the three recommendations will be posted publicly and sent to the White House and the National AI Initiative Office for consideration. Recommendations from NAIAC — a collection of AI experts from the private sector, academia and nonprofits — have no direct authority, but Law Enforcement Subcommittee members are hopeful that their work goes a long way toward improving transparency with AI and policing.

“If you’re not transparent, you’re going to engender mistrust,” Heydari said. “And I don’t think anybody would argue that mistrust between law enforcement and communities hasn’t been a problem, right? And so this seems like a simple place to start building trust.”

Institute for Defense Analyses expected to support National AI Advisory Committee https://fedscoop.com/nist-sole-sources-ida-support-naiac/ Fri, 22 Jul 2022 17:59:28 +0000 https://fedscoop.com/?p=56270 The nonprofit administrator of three federally funded R&D centers will draft and finalize NAIAC's overdue first report if no company comes forward.

The National Institute of Standards and Technology intends to have the Institute for Defense Analyses provide technical analysis to the National Artificial Intelligence Advisory Committee, citing its unique experience handling confidential data without conflicts of interest.

IDA would identify concerns surrounding U.S. AI competitiveness, the National AI Initiative, AI science, the National AI Research and Development Strategic Plan, and international standards development, should NIST determine no company is capable.

The National AI Initiative Act of 2020 tasked NIST with providing administrative support to NAIAC, and the agency is turning to IDA, a nonprofit that administers three federally funded R&D centers (FFRDCs), to conduct that work.

“IDA has the unique combined skills, knowledge and experience to effectively and efficiently provide the services needed to support NAIAC’s mission,” reads NIST’s sole source notice issued Thursday. “The FFRDC can provide a bridge to combine research and academic data with professional knowledge, experience, practice and understanding of policy in a way that others across the private industry would be unable to satisfy in the timeline stipulated by Congress.”

NAIAC had a year from the act’s passage to provide its first report with recommendations to the president and Congress, but it wasn’t established until September 2021, and its membership wasn’t filled until April.

IDA would conduct research at the request of NAIAC and its working groups, prepare fact sheets and memos ahead of meetings, outline discussions, and draft and finalize the report after seeking public comment between Sept. 15 and Sept. 14, 2023.

The nonprofit would operate under the Science and Technology Policy Institute FFRDC contract, an indefinite delivery, indefinite quantity vehicle.

Companies have until Aug. 4 to respond to NIST’s notice with reasons why the agency should instead conduct a competitive procurement.

National AI Advisory Committee establishes 5 working groups https://fedscoop.com/naiac-establishes-working-groups/ Wed, 04 May 2022 20:30:19 +0000 https://fedscoop.com/?p=51493 Trustworthy AI, R&D, the workforce, U.S. competitiveness, and international cooperation are the five initial focus areas.

The National Artificial Intelligence Advisory Committee established five working groups to focus its efforts during its inaugural meeting Wednesday.

Leadership in Trustworthy AI, Leadership in Research and Development, Supporting the U.S. Workforce and Providing Opportunity, U.S. Leadership in Competitiveness, and International Cooperation are the initial groups.

The Department of Commerce set up NAIAC in September to advise the president and federal agencies in accordance with the National AI Initiative Act of 2020, and 27 members were appointed in April.

“Leadership at commerce was very thoughtful to ensure that we have a broad cross section of geography, perspectives, backgrounds, experience so that we can model what we’re intending to do and show that multi-stakeholder approach to the development and deployment of AI,” said NAIAC Chair Miriam Vogel, who’s also CEO of EqualAI, during the meeting. “To make sure that we are able to do our work as effectively as possible, we are going to focus our efforts into, initially, these five different working groups.”

Members were assigned to two each based on their stated interests and expertise.

Victoria Espinel, CEO of BSA, will lead the Leadership in Trustworthy AI working group.

Ayanna Howard, dean of the Ohio State University College of Engineering, and Ashley Llorens, vice president of Microsoft Research, will co-lead the Leadership in Research and Development working group.

Trooper Sanders, CEO of Benefits Data Trust, will lead the Supporting the U.S. Workforce and Providing Opportunity working group.

Yll Bajraktari, CEO of the Special Competitive Studies Project, will lead the U.S. Leadership in Competitiveness working group, created to counter China’s strategy to become the global leader in AI by 2030. The working group will examine how federal agencies are organized for the competition and ensure the National AI Initiative Office has the personnel and other resources it needs to coordinate them.

“We will look at how we can ensure better coordination among federal agencies so we stay ahead in all the elements of the AI competition,” Bajraktari said.

Zoë Baird, CEO of the Markle Foundation, will lead the International Cooperation working group that will help shape the commercialization environment for AI technologies with an emphasis on standards, citizen protections and economic inclusion.

Commerce Secretary Gina Raimondo, who spoke at the first meeting, stressed the incorporation of two Biden administration initiatives, in particular, into NAIAC’s work: the US-EU Trade and Technology Council and the Indo-Pacific Economic Dialogue, both with technology standards-setting components.

Raimondo wants to integrate NAIAC’s policy suggestions into both initiatives to ensure any international technology standards established align with U.S. values, given increased Chinese influence in the space.

“I am so worried about China overtaking tech standard-setting bodies,” Raimondo said. “So we have to stand up with the Europeans, with our like-minded allies.”

Commerce appoints 27 experts to National AI Advisory Committee https://fedscoop.com/commerce-appoints-experts-ai-committee/ Thu, 14 Apr 2022 19:41:41 +0000 https://fedscoop.com/?p=50501 The committee, established in September, will make recommendations on U.S. global competitiveness and the National AI Initiative.

The Department of Commerce appointed 27 experts Thursday to its committee tasked with advising the White House on artificial intelligence issues.

Candidates across academia, industry, nonprofits and civil society were nominated by the public to serve on the National AI Advisory Committee, which will make recommendations on U.S. global competitiveness, the state of science and the workforce in the space.

DOC established the committee in September in accordance with the National AI Initiative Act of 2020, but these mark the first appointments to the body, which is also expected to advise the president and National AI Initiative Office on the management and funding of the initiative itself.

“Responsible AI development is instrumental to our strategic competition with China,” said Don Graves, deputy secretary of Commerce, in the announcement. “At the same time, we must remain steadfast in mitigating the risks associated with this emerging technology and others while ensuring that all Americans can benefit.”

The committee will influence U.S. AI policy for decades, Graves added.

Members include representatives from Google; BSA: The Software Alliance; Salesforce; Stanford University; Carnegie Mellon University; Microsoft; IBM; Credo AI; Amazon Web Services; NVIDIA; and the SAS Institute, among others. They will serve three-year terms, with up to two consecutive terms at the discretion of Commerce Secretary Gina Raimondo.

The committee will also establish a subcommittee on the use of AI in law enforcement to advise on bias, data security, adoptability, and legal standards around privacy and civil rights. NAIAC’s first meeting is on May 4 and will be publicly webcast with administrative support provided by the National Institute of Standards and Technology.

“AI is already transforming the world as we know it including science, medicine, transportation, communications, and access to goods and services,” said Alondra Nelson, head of the Office of Science and Technology Policy and deputy assistant to the president, in a statement. “The expertise of NAIAC will be critical in helping to ensure the U.S. leads the world in the ethical development and adoption of AI, provides inclusive employment and education opportunities for the American public, and protects civil rights and civil liberties in our digital age.”

2021 in review: Oversight questions loom over federal AI efforts https://fedscoop.com/questions-around-federal-ai-oversight/ Tue, 28 Dec 2021 20:37:01 +0000 https://fedscoop.com/?p=46009 Federal oversight of artificial intelligence systems continues to lag behind government development and use, experts say.

The Biden administration established several artificial intelligence bodies in 2021 likely to impact how agencies use the technology moving forward, but oversight mechanisms are lacking, experts say.

Bills mandating greater accountability around AI haven’t gained traction because the U.S. lacks comprehensive privacy legislation, like the European Union’s General Data Protection Regulation, which would serve as a foundation for regulating algorithmic systems, according to an Open Technology Institute brief published in November.

Still, the White House launched the National AI Research Resource (NAIRR) Task Force and the National AI Advisory Committee, both authorized by the National AI Initiative Act of 2020, in hopes of strengthening the U.S.’s competitive position globally — a push that may prove a losing battle absent oversight.

“Right now most advocates and experts in the space are really looking to the EU as the place that’s laying the groundwork for these kinds of issues,” Spandana Singh, policy analyst at OTI, told FedScoop. “And the U.S. is kind of lagging behind because it hasn’t been able to identify a more consolidated approach.”

Instead, lawmakers propose myriad bills addressing aspects of privacy, transparency, impact assessments, intermediary liability, or some combination — a fragmented approach repeated year after year. The EU, by contrast, has the Digital Services Act, requiring transparency around algorithmic content curation, and the AI Act, providing a risk-based framework for determining whether a particular AI system is “high risk.”

OTI holds that foundational privacy legislation needs strong protections, data practices safeguarding civil rights, requirements that governments enforce privacy rights, and redress for violations. The White House Office of Science and Technology Policy is in the early stages of crafting an AI Bill of Rights, but that’s just one component.

The bulk of legislative efforts to hold platforms accountable for their algorithms has been around repealing Section 230 of the Communications Decency Act, which grants intermediary liability protections to social media companies, website operators and other web hosts of user-generated content. Companies like Facebook can’t be held liable for illegal content, allowing for a free and open internet but also misuse, in the case of discriminatory ads, Singh said.

Repeal of Section 230 could lead to overbroad censorship. OTI hasn’t taken an official position on the matter, but it does advocate for including algorithm audits and impact assessments in the federal procurement process to root out harm throughout the AI life cycle.

Currently there’s no federal guidance on how such evaluations should be conducted and by whom, but officials are beginning to call for more.

“Artificial intelligence will amplify disparities between people and the way they receive services, unless these core issues of ethics and bias and transparency and accessibility are addressed early and often,” said Anil Chaudhry, director of federal AI implementations at the General Services Administration, during an AFFIRM webinar earlier this month.

The Department of Justice’s Civil Rights Division has started identifying AI enforcement opportunities across employment, housing, credit, public benefits, education, disability, voting, and criminal justice. The division is also involved in multiple agencies’ efforts to create ethical AI frameworks and guidelines.

Legislation will take longer.

“We are studying whether policy or legislative solutions may also offer effective approaches for addressing AI’s potential for discrimination,” said Kristen Clarke, assistant attorney general, in a speech to the National Telecommunications and Information Administration on Dec. 14. “We are also reviewing whether guidance on algorithmic fairness and the use of AI may be necessary and effective.”

The work continues

The NAIRR Task Force continues to develop recommendations to Congress around the feasibility, implementation and sustainability of a national AI research resource, irrespective of the uncertainty surrounding federal oversight.

Members discussed NAIRR governance models at their October meeting and the data resources, testbeds, testing resources and user tools that could comprise it during their Dec. 13 session.

Some research groups contend investing in a shared computing and data infrastructure would subsidize Big Tech suppliers without implementing ethics and accountability controls, but tech giants Google and IBM pushed back that a federated, multi-cloud model would democratize researchers’ access to their AI capabilities.

“A big part of this is to provide, really democratize and make sure you have broad access to this resource for all researchers,” Manish Parashar, director of the Office of Advanced Cyberinfrastructure at the National Science Foundation and task force co-chair, told FedScoop. “And especially researchers that are part of underrepresented groups, as well as small businesses funded by the Small Business Innovation Research and Small Business Technology Transfer programs.”

The task force invited a panel of outside experts to its Dec. 13 meeting to share perspectives on how the NAIRR can account for privacy and civil rights to ensure trustworthy AI research and development. Speakers recommended building inclusive datasets and conducting impact assessments, as well as choosing a governance approach that treats equity as a core design principle.

Members are considering models like the ones NSF uses for its Extreme Science and Engineering Discovery Environment (XSEDE), Open Science Grid, Partnerships for Advanced Computational Infrastructure, COVID-19 High-Performance Computing Consortium, and federally funded research and development centers (FFRDCs).

“At this point it’s possible that maybe the existing models don’t meet the requirements,” Parashar said. “And we may want to think of some new or hybrid model that needs to be developed here.”

During the most recent meeting, Task Force Co-Chair Lynne Parker suggested the governance body grant NAIRR access to researchers approved to receive federal funds and process proposals from those who aren’t, as well as from private-sector users.

The task force is leaning toward recommending contracting with a company to design and maintain the NAIRR portal, with the caveat that the resource should curate training materials for researchers of different skill levels.

A range of testbeds exist that could be included in the NAIRR, each with its own pros and cons in combating issues like bias.

For example, open-book modeling allows researchers to see if their algorithms perform better against a public test dataset, but absent large amounts of data, there’s a danger an algorithm will only perform well in that one situation.

Meanwhile closed-book modeling lets researchers design an algorithm with a training dataset before receiving the test dataset, but there’s no guarantee algorithms will perform significantly better.

Both models are also focused on machine learning predictive performance, when artificial intelligence also includes self-improving features, said Andrew Moore, director of Google Cloud AI and task force member.

Simulation testbeds allow for the end-to-end monitoring of AI system performance, sometimes across thousands of runs, but just one costs millions of dollars to create. The task force is leaning toward not recommending them.

Ultimately NAIRR can prevent agencies from duplicating each other’s AI testbed efforts.

“We do believe that we should provide access to testbeds,” Moore said. “Testbeds have unambiguously accelerated research in the past.”

Commerce establishes National AI Advisory Committee https://fedscoop.com/commerce-national-ai-advisory-committee/ Wed, 08 Sep 2021 17:12:38 +0000 https://fedscoop.com/?p=43574 The new advisory panel will issue recommendations on a range of issues including U.S. AI competitiveness.

The Department of Commerce has set up a committee to advise the president and other federal agencies on artificial intelligence issues, Secretary Gina Raimondo announced Wednesday.

The department seeks to recruit top-level talent to serve on the new panel, called the National AI Advisory Committee. DOC also seeks members for a new AI and Law Enforcement subcommittee.

DOC and the National AI Initiative Office formed the committee, in accordance with the National AI Initiative Act of 2020. It will issue recommendations on U.S. AI competitiveness, workforce equity, funding, research and development, international cooperation, and legal issues.

“We have seen major advances in the design, development and use of AI, especially in the past several years,” said Eric Lander, director of the White House Office of Science and Technology Policy, in a statement. “We must be sure that these advances are matched by similar progress in ensuring that AI is trustworthy and that it ensures fairness and protections for civil rights.”

NAIAC will consist of members from academia, industry, nonprofits and federal laboratories.

Meanwhile the National Institute of Standards and Technology continues to develop guidance on trustworthy, explainable AI and will offer administrative support to the committee.

Nominations for NAIAC and its subcommittee will occur on a rolling basis, with new members to be considered as vacancies arise.
