R Street Institute Archives | FedScoop
https://fedscoop.com/tag/r-street-institute/

Experts warn of ‘contradictions’ in Biden administration’s top AI policy documents
https://fedscoop.com/experts-warn-of-contradictions-in-biden-administrations-top-ai-policy-documents/
Wed, 23 Aug 2023 22:51:12 +0000

AI policy specialists say a lack of guidance from the White House on how to square divergent rights-based and risk-based approaches to AI is proving a challenge for companies working to create new products and safeguards.

The Biden administration’s cornerstone artificial intelligence policy documents, released in the past year, are inherently contradictory and provide confusing guidance for tech companies working to develop innovative products and the necessary safeguards around them, leading AI experts have warned.

Speaking with FedScoop, five AI policy experts said adhering to both the White House’s Blueprint for an AI ‘Bill of Rights’ and the AI Risk Management Framework (RMF), published by the National Institute of Standards and Technology, presents an obstacle for companies working to develop responsible AI products.

However, the White House and civil rights groups have pushed back on claims that the two voluntary AI safety frameworks send conflicting messages and have highlighted that they are a productive “starting point” in the absence of congressional action on AI. 

The two policy documents form the foundation of the Biden administration’s approach to regulating artificial intelligence. But for many months, there has been an active debate among AI experts regarding how helpful — or in some cases hindering — the Biden administration’s dual approach to AI policymaking has been.

The White House’s Blueprint for an AI ‘Bill of Rights’ was published last October. It takes a rights-based approach to AI, focusing on broad fundamental human rights as a starting point for the regulation of the technology. That was followed by the risk-based AI RMF in January, which set out to determine the scale and scope of risks tied to concrete use cases and recognized threats, with the aim of instilling trustworthiness in the technology.

Speaking with FedScoop, Daniel Castro, a technology policy scholar and vice president at the Information Technology and Innovation Foundation (ITIF), noted that there are “big, major philosophical differences in the approach taken by the two Biden AI policy documents,” which are creating “different [and] at times adverse” outcomes for the industry.

“A lot of companies that want to move forward with AI guidelines and frameworks want to be doing the right thing but they really need more clarity. They will not invest in AI safety if it’s confusing or going to be a wasted effort or if instead of the NIST AI framework they’re pushed towards the AI blueprint,” Castro said.

Castro’s thoughts were echoed by Adam Thierer of the libertarian nonprofit R Street Institute who said that despite a sincere attempt to emphasize democratic values within AI tools, there are “serious issues” with the Biden administration’s handling of AI policy driven by tensions between the two key AI frameworks.

“The Biden administration is trying to see how far it can get away with using their bully pulpit and jawboning tactics to get companies and agencies to follow their AI policies, particularly with the blueprint,” Thierer, senior fellow on the Technology and Innovation team at R Street, told FedScoop.

Two industry sources who spoke with FedScoop but wished to remain anonymous said they felt pushed toward the White House’s AI blueprint over the NIST AI framework in certain instances during AI policymaking meetings with the White House’s Office of Science and Technology Policy (OSTP).

Rep. Frank Lucas, R-Okla., chair of the House Science, Space and Technology Committee, and House Oversight Chairman Rep. James Comer, R-Ky., have been highly critical of the White House blueprint as it compares to the NIST AI Risk Management Framework, expressing concern earlier this year that the blueprint sends “conflicting messages about U.S. federal AI policy.”

In a letter obtained exclusively by FedScoop, OSTP Director Arati Prabhakar responded to those concerns, arguing that “these documents are not contradictory” and highlighting how closely the White House and NIST are working together on future regulation of the technology.

At the same time, some industry AI experts say the two documents’ definitions of AI clash with one another.

Nicole Foster, who leads global AI and machine learning policy at Amazon Web Services, said chief among the concerns with the documents are diverging definitions of the technology itself. She told FedScoop earlier this year that “there are some inconsistencies between the two documents for sure. I think just at a basic level they don’t even define things like AI in the same way.”

Foster’s thoughts were echoed by Raj Iyer, global head of public sector at cloud software provider ServiceNow and former CIO of the U.S. Army, who believes the two frameworks are a good starting point to get industry engaged in AI policymaking but that they lack clarity.

“I feel like the two frameworks are complementary. But there’s clearly some ambiguity and vagueness in terms of definition,” said Iyer.

“So what does the White House mean by automated systems? Is it autonomous systems? Is it automated decision-making? What is it? I think it’s very clear that they did that to kind of steer away from wanting to have a direct conversation on AI,” Iyer added.

Hodan Omaar, an AI and quantum research scholar working with Castro at ITIF, said the two documents appear to members of the tech industry as if they are on different tracks. According to Omaar, the divergence creates a risk that organizations will simply defer to either the “Bill of Rights” or the NIST RMF and ignore the other.

“There are two things the White House should be doing. First, it should better elucidate the ways the Blueprint should be used in conjunction with the RMF. And second, it should better engage with stakeholders to gather input on how the Blueprint can be improved and better implemented by organizations,” Omaar told FedScoop.

Beyond concerns about the documents’ compatibility, experts have also questioned the process the White House followed to gather industry feedback while creating them.

One industry association AI official, who spoke with FedScoop anonymously in order to talk freely, said that listening sessions held by the Office of Science and Technology Policy were not productive.

“The Bill of Rights and the development of that, we have quite a bit of concern because businesses were not properly consulted throughout that process,” the association official said. 

The official added: “OSTP’s listening sessions were just not productive or helpful. We tried to actually provide input in ways in which businesses could help them through this process. Sadly, that’s just not what they wanted.”

The AI experts’ comments come as the Biden administration works to establish a regulatory framework that mitigates potential threats posed by the technology while supporting American AI innovation. Last month, the White House secured voluntary commitments from seven leading AI companies about how AI is used, and it is expected to issue a new executive order on AI safety in the coming weeks.

One of the contributors to the White House’s AI Blueprint sympathizes with concerns from industry leaders and AI experts about the confusion and complexity of the administration’s approach to AI policymaking. But the ambiguity is also an opportunity, he said, for companies seeking voluntary AI policymaking guidance to put more effort into asking themselves hard questions.

“So I understand the concerns very much. And I feel the frustration. And I understand people just want clarity. But clarity will only come once you understand the implications, the broader values, discussion and the issues in the context of your own AI creations,” said Suresh Venkatasubramanian, a Brown University professor and former top official within the White House’s OSTP, where he helped co-author its Blueprint for an ‘AI Bill of Rights.’ 

“The goal is not to say: Do every single thing in these frameworks. It’s like, understand the issues, understand the values at play here. Understand the questions you need to be asking from the RMF and the Blueprint, and then make your own decisions,” said Venkatasubramanian.

On top of that, the White House Blueprint co-author wants those who criticize the documents’ perceived contradictions to be more specific in their complaints.

“Tell me a question in the NIST RMF that contradicts a broader goal in the White House blueprint — find one for me, or two or three. I’m not saying this because I think they don’t exist. I’m saying this because if you could come up with these examples, then we could think through what can we do about it?” he said.

Venkatasubramanian added that he feels the White House AI blueprint in particular has faced resistance from industry because “for the first time someone in a position of power came out and said: What about the people?” when it comes to tech innovation and regulations. 

Civil rights groups like the Electronic Privacy Information Center have also joined the greater discussion about AI regulations, pushing back on the notion that industry groups should play any significant role in the policymaking of a rights-based document created by the White House.

“I’m sorry that industry is upset that a policy document is not reflective of their incentives, which is just to make money and take people’s data and make whatever decisions they want to make more contracts. It’s a policy document, they don’t get to write it,” said Ben Winters, the senior counsel at EPIC, where he leads their work on AI and human rights.

Groups like EPIC and a number of others have called upon the Biden administration to take more aggressive steps to protect the public from the potential harms of AI.

“I actually don’t think that the Biden administration has taken a super aggressive role when trying to implement these two frameworks and policies that the administration has set forth. When it comes to using the frameworks for any use of AI within the government or federal contractors or recipients of federal funds, they’re not doing enough in terms of using their bully pulpit and applying pressure. I really don’t think they’re doing too much yet,” said Winters.

Meanwhile, the White House has maintained that the two AI documents were created for different purposes but designed to be used side-by-side as initial voluntary guidance, noting that both OSTP and NIST were involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the administration has moved with urgency to advance responsible innovation that manages the risks posed by AI and seizes its promise — including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward and build on the administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

NIST did not respond to requests for comment.

Library of Congress is spending $1.5M on a public Congressional Research Service reports website. Is it worth it?
https://fedscoop.com/library-of-congress-public-crs-reports-site-project-criticism/
Thu, 12 Jul 2018 13:22:43 +0000

The library's implementation plan, and critiques of it from some in civil society, are raising a quintessential tension in civic tech.

When President Donald Trump signed the Consolidated Appropriations Act of 2018 into law, he put a legislative mandate behind a decades-old transparency initiative. Buried in the bill’s 2,232 pages is a section that directs the Library of Congress to build and maintain a new website — a public-facing home for the taxpayer-funded reports written by the Congressional Research Service.

In response, the library has crafted a plan for development, a schedule for deployment and an estimated price tag for the build. Fans of the CRS’s work, however, are wondering whether it’s all worth it.

The library’s plan

The Congressional Research Service has been called “Congress’ think tank” — it’s a public policy research group within the Library of Congress that churns out thousands of nonpartisan reports a year on issues as varied as U.S. policy in Kuwait, next-generation 911 technologies and violent crime in American cities, to name just a few. Since 1954, however, Congress has kept the reports out of the hands of most regular citizens, initially by arguing that the cost of reproducing copies would be too high.

On Sept. 18, if all goes according to plan, that will change. The CRS’s non-confidential “R” series reports — which currently circulate to the public only if someone in a congressional office shares them — will be officially available to more than just lawmakers and aides for the first time.

According to the library’s website implementation plan, which was obtained by FedScoop, the new site will publish the reports in PDF format with a digital signature to ensure data integrity and will keep a version history as reports are updated. At launch, the library estimates that 500 of the approximately 2,700 active R-series reports will be available to read and download, and staff will continue to add new reports to the site “as soon as is practicable.” The whole thing will cost the library’s Office of the CIO about $1.5 million — most of that in labor costs.
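The integrity piece of that plan is easy to picture from a reader’s perspective. Below is a minimal sketch of the kind of check a downloader could run, assuming the site published a SHA-256 digest alongside each PDF; the plan mentions digital signatures but does not spell out the exact mechanism, and the report filename and digest in the example are hypothetical.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a downloaded report PDF, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_report(pdf_path: Path, published_digest: str) -> bool:
    """Return True if the local copy matches the digest the publisher lists."""
    return sha256_of(pdf_path) == published_digest.strip().lower()


if __name__ == "__main__":
    # Hypothetical report filename and digest, for illustration only.
    report = Path("R45178.pdf")
    if report.exists():
        print(verify_report(report, "0" * 64))
```

A checksum comparison like this is weaker than validating a digital signature itself, but it is enough for a reader to detect a corrupted or altered download.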

Much of the work hinges on a modification that is being made to the CRS’s backend publishing system, known as the Authoring and Publishing (A&P) Tool. Library developers are building in an option that will allow A&P to send reports to both the internal congressional site, CRS.gov, and the new public-facing one.
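Conceptually, that modification amounts to adding a second publishing target behind a per-report switch. The fragment below is a rough, hypothetical analogue of that fan-out; the A&P Tool’s real interfaces are not public, so the directory names and the publish function here are invented purely for illustration.

```python
import shutil
from pathlib import Path

# Hypothetical stand-ins for the internal (CRS.gov) and public publishing targets.
INTERNAL_OUT = Path("out/internal")
PUBLIC_OUT = Path("out/public")


def publish(report_pdf: Path, release_publicly: bool) -> None:
    """Copy a finished report to the internal target, and optionally to the public one."""
    INTERNAL_OUT.mkdir(parents=True, exist_ok=True)
    shutil.copy2(report_pdf, INTERNAL_OUT / report_pdf.name)
    if release_publicly:  # only non-confidential R-series reports would take this path
        PUBLIC_OUT.mkdir(parents=True, exist_ok=True)
        shutil.copy2(report_pdf, PUBLIC_OUT / report_pdf.name)
```

In the real system, the public path would presumably be gated on a report’s series and confidentiality status rather than a caller-supplied flag.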

The public-facing website will have the “same look and feel” as CRS.gov, John Rutledge, the library’s customer engagement director, told FedScoop. This not only will save time and money for the library, Rutledge argued, it also means that the new site will be intuitive for the congressional staffers who might need to use it in a pinch.

So what’s the problem?

‘Unreasonably expensive’

Daniel Schuman is the policy director at Demand Progress, a transparency organization that runs a site called EveryCRSReport.com, currently home to nearly 14,500 of the agency’s reports. Schuman is concerned that the library’s planned website does not meet the requirements of the law, will lack functionality and is too expensive.

The EveryCRSReport.com site, Schuman told FedScoop, cost less than $20,000 to build and maintain and has much of the same functionality the LOC is looking for (like redaction of author contact information). What’s more, the website’s code is open source, free to use and available on GitHub.

In light of such open source resources, Schuman said, the $1.5 million figure, even considering that government tech expenses are always higher than similar private-sector work, seems “unreasonably expensive.”

But Schuman’s concerns don’t end at cost. In a formal written response to the library’s implementation plan, a copy of which was obtained by FedScoop, Schuman and two other observers lay out additional critiques. Joshua Tauberer of GovTrack.us, a transparency initiative tracking legislative activities on the Hill, and Kevin Kosar from the R Street Institute, a free-market public policy think tank, joined him in the letter.

The library should commit to publishing more than just the R-series — insights, infographics, testimony and more should also be included on the site, the group wrote. Additionally, the library should publish reports in HTML as well as PDF format, and look into automatic processing to speed up the work of transitioning reports from CRS.gov to the public site at Congress.gov/crsreports.

Above all, Schuman argues, the LOC could be doing a better job of leveraging outside expertise. After all, a number of civil society groups, including his own, are already doing this work. “We’d be more than happy to chat with [the library],” Schuman told FedScoop. “We’ve tried.”

“My goal is not to stick it to them,” he said. He truly wants the project to be a success, he argued. “My real concern is that they’re not going to accomplish this by the end of this Congress… and then it will lose steam.”

‘It has become almost redundant’

Steven Aftergood, who directs the Federation of American Scientists’ Project on Government Secrecy, doesn’t think the cost of the website is such a red flag. “It’s not a shockingly high cost, as far as I can tell,” he told FedScoop.

What worries him is the potential redundancy — most non-confidential CRS reports are already in the public domain if you know where to find them. Groups like the American Library Association have advocated for around 20 years to give the public easy access, and players like EveryCRSReport.com have stepped in to fill the void in the interim.

So while the library’s website will be new, it won’t exactly be novel.

“I feel a little puzzled that it took such a long time to accomplish this move,” Aftergood said. “You could even say that it is taking place at a time when it has become almost redundant.”

“The net increase in ‘transparency’ resulting from the new legislation is less than it would have been years ago,” Aftergood wrote in a blog post about the news. This isn’t to say that Aftergood thinks the forthcoming LOC site is a waste — it has the potential to grow and become more valuable, he said, especially if it expands beyond the R-series.

Both Aftergood and Schuman acknowledge that there is a value in finding CRS reports at a government domain. It’s an “implicit acknowledgment” by Congress that CRS reports should be public, and that’s new, Aftergood said.

“Congress is the authoritative source for this information,” Schuman said. “If Congress or the Library of Congress is the information publisher, then you know that what is being published is accurate, complete, authentic, and unaltered. Any secondary source (like us) means that you cannot have that level of assurance.”

At the end of the day, Schuman and Aftergood’s critiques boil down to a desire for the site to be as good as it possibly can be: more transparent, more user-friendly, less expensive. It’s a quintessential tension in civic tech — this isn’t the first time the dynamic has played out and it won’t be the last.

“I have no criticism of it,” Aftergood said. “I’m just sorry it took so long.”

Congress needs to revive in-house tech office, think tank says
https://fedscoop.com/congress-needs-revive-house-tech-office-think-tank-says/
Tue, 30 Jan 2018 17:15:53 +0000

Since the Office of Technology Assessment was cut in 1995, policymakers have struggled to understand and create laws around new technology, a think tank argues.

For Congress to keep up with the pace of technological change, it needs to bring back the shuttered Office of Technology Assessment, the R Street Institute, a Washington, D.C.-based think tank, argues in a new report.

Recreating OTA is crucial because Congress direly needs the in-house expertise and in-depth research functions that the office formerly provided, argue Kevin Kosar, the institute’s vice president of policy, and Zachary Graves, its director of technology and innovation policy, in a new study.

Times have changed drastically since OTA was first created in 1972 as an expert advisory agency that served as a think tank within Congress, providing technology assessments to assist in the crafting of legal frameworks for new technology. Now, a majority of the population has a smartphone or access to the internet, so it’s more important than ever for Congress to bolster its technology policy knowledge by reviving OTA, Graves and Kosar write.

Since the agency was cut in 1995, when the new Republican majority that came into power after the 1994 elections dismantled it, policymakers have struggled to understand and create laws around new technology, the authors say.

“The loss of this capability is becoming rapidly evident as we find the First Branch less prepared than ever to shape a regulatory environment that has hitherto allowed America to lead the world in technological innovation,” Kosar said in a statement.

And even though Congress has a variety of offices that serve in a similar advisory capacity to what OTA once did — like the Congressional Budget Office, the Congressional Research Service and the Government Accountability Office — those offices have also diminished in their size and tech savvy, the report says.

Graves and Kosar believe it wouldn’t cost much to bring back the office. In 1995, OTA had a budget of $22 million, which even then was only a tiny fraction of the federal budget. Rep. Mark Takano, D-Calif., is cited in the report as saying $2.5 million could get OTA started again in the 21st century.

But despite the low estimated cost of bringing the agency back, Republicans have pushed back on the idea, characterizing OTA as a liberal entity with a Democratic agenda, according to Graves and Kosar. They said, however, that this concern would be easy to address, since the speaker of the House would choose the board members, with both parties evenly represented.

“Let me be blunt here: Failing to augment Congress’ technological expertise ensures that the preferences of executive branch agencies and private interests hold the greatest sway in technology policy decisions, to the detriment of the public interest,” Graves said. “To address this, Congress needs to bring back its nerds. And fast.”
