DOJ Archives | FedScoop https://fedscoop.com/tag/doj/

Justice Department discloses FBI project with Amazon Rekognition tool https://fedscoop.com/doj-fbi-amazon-rekognition-technology-ai-use-case/ Thu, 25 Jan 2024 23:36:25 +0000 The disclosure comes after Amazon said in 2020 that it would institute a moratorium on police use of Rekognition.

The Department of Justice has disclosed that the FBI is in the “initiation” phase of using Amazon Rekognition, an image and video analysis software that has sparked controversy for its facial recognition capabilities, according to an update to the agency’s AI inventory.

In response to questions from FedScoop, neither Amazon nor the DOJ clarified whether the FBI has access to or is using facial recognition technology, specifically, through this work. But the disclosure is notable, given that Amazon had previously announced a moratorium on police use of Rekognition.

An AI inventory released on the DOJ website discloses that the FBI has a project named “Amazon Rekognition – AWS – Project Tyr.” The description does not mention the term “facial recognition” but states that the agency is working on customizing the tool to “review and identify items containing nudity, weapons, explosives, and other identifying information.” 

“Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from lawfully acquired images and videos,” states a summary of the use case that echoes the Amazon website’s description of the product. In regard to developer information, the disclosure says the system was commercial and off-the-shelf, and that it was purchased pre-built from a third party.
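
The inventory does not say how the FBI’s customization is implemented, but the capabilities it describes map onto Rekognition’s publicly documented image-analysis APIs. The sketch below is purely illustrative — it assumes the boto3 SDK and placeholder S3 object names, and is not drawn from the DOJ disclosure:

```python
# Minimal sketch of Rekognition's documented content-analysis calls (boto3).
# Bucket and object names are placeholders; this is not the FBI's actual workflow.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
image = {"S3Object": {"Bucket": "example-evidence-bucket", "Name": "exhibit-001.jpg"}}

# DetectModerationLabels flags categories such as nudity, violence and weapons.
moderation = rekognition.detect_moderation_labels(Image=image, MinConfidence=80)
for label in moderation["ModerationLabels"]:
    print(f"{label['ParentName']}/{label['Name']}: {label['Confidence']:.1f}%")

# DetectLabels returns general object labels (e.g., "Weapon") with confidence scores.
objects = rekognition.detect_labels(Image=image, MaxLabels=25, MinConfidence=80)
for label in objects["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```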

Other aspects of the project have not yet been finalized, according to the inventory. The disclosure says that in collaboration with Amazon Web Services, the agency will determine where the training data originates, whether the source code is made publicly available, and what specific AI techniques were used. The DOJ states that the agency is not able to conduct ongoing testing of the code but can perform audits. The justice agency also claims the use case is consistent with Executive Order 13960, a Trump-era order on artificial intelligence. 

“To ensure the Department remains alert to the opportunities and the attendant risks posed by artificial intelligence (AI) and other emerging technologies, the Deputy Attorney General recently established the Emerging Technologies Board to coordinate and govern AI and other emerging technology issues across the Department,” DOJ spokesperson Wyn Hornbuckle said in response to a series of questions from FedScoop about the use case.

He added: “The board will advance the use of AI and other emerging technologies in a manner that is lawful and respectful of our nation’s values, performance-driven, reliable and effective, safe and resilient, and that will promote information sharing and best practices, monitor taskings and progress on the department’s AI strategy, support interagency coordination, and provide regular updates to leadership.” 

The DOJ did not address several aspects of the work with Amazon, including questions about whether the FBI had put any limits on the use of its technology, the purpose of nudity detection, or the extent to which the law enforcement agency could access facial recognition through the work discussed in the disclosure. Through the DOJ, the FBI declined to comment. 

Amazon was given 24 hours to comment on a series of questions sent from FedScoop but did not respond by the time of publication. A day later, Amazon spokesperson Duncan Neasham emailed FedScoop the following statement:

“We imposed a moratorium on police departments’ use of Amazon Rekognition’s face comparison feature in connection with criminal investigations in June 2020, and to suggest we have relaxed this moratorium is false. Rekognition is an image and video analysis service that has many non-facial analysis and comparison features. Nothing in the Department of Justice’s disclosure indicates the FBI is violating the moratorium in any way.”

The tool was not disclosed in an earlier version of the DOJ’s AI inventory. While it’s not clear when the inventory was updated, a consolidated list of federal AI uses posted to AI.gov in September didn’t include the disclosure. The source date of the DOJ page appears to be incorrect and tags the page to October 2013, though the executive order requiring inventories wasn’t signed by President Donald Trump until late 2020. 

A page on Amazon’s website featuring the Rekognition technology highlights the tool’s applications in “face liveness,” “face compare and search,” and “face detection and analysis,” as well as applications such as “content moderation,” “custom labels,” and “celebrity recognition.” 

Beyond the application examples listed in the inventory, the DOJ did not explain the extent to which the FBI could or would use facial recognition as part of this work. Amazon previously told other media outlets that its moratorium on providing facial recognition to police had been extended indefinitely, though it’s not clear how Amazon interprets that moratorium for federal law enforcement. Notably, Amazon’s website has guidance for public safety uses.

But others have raised concerns about the technology. In 2019, a group of researchers called on Amazon to stop selling Rekognition to law enforcement following the release of a study by AI experts Inioluwa Deborah Raji and Joy Buolamwini that found that an August 2018 version of the technology had “much higher error rates while classifying the gender of darker skinned women than lighter skinned men,” according to the letter. 

Amazon had previously pushed back on those findings and has defended its technology. The National Institute of Standards and Technology confirmed that Amazon has not voluntarily submitted its algorithms for study by the agency. 

“Often times companies like Amazon provide AI services that analyze faces in a number of ways offering features like labeling the gender or providing identification services,” Buolamwini wrote in an early 2019 blog post. “All of these systems regardless of what you call them need to be continuously checked for harmful bias.”

The company has argued in a corporate blog defending its technology that the “mere existence of false positives doesn’t mean facial recognition is flawed. Rather, it emphasizes the need to follow best practices, such as setting a reasonable similarity threshold that correlates with the given use case.” 
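
The “similarity threshold” Amazon refers to is an explicit parameter of Rekognition’s CompareFaces API: face pairs scoring below it are simply not returned. The snippet below is an illustrative sketch, assuming the boto3 SDK and placeholder images; the 99 percent value reflects Amazon’s published guidance for law enforcement scenarios, not any configuration described in the DOJ inventory:

```python
# Illustrative use of Rekognition's CompareFaces with an explicit similarity threshold.
# Image locations are placeholders; the 99% figure follows Amazon's public guidance.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "gallery.jpg"}},
    SimilarityThreshold=99.0,  # only matches at or above this score are returned
)

for match in response["FaceMatches"]:
    print(f"Match similarity: {match['Similarity']:.2f}%")
if not response["FaceMatches"]:
    print("No matches at or above the configured threshold.")
```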

The DOJ’s disclosure is also notable because, in the wake of George Floyd’s murder in 2020 — and following an extensive and pre-existing movement against the technology — Amazon said it would implement a one-year pause on providing Rekognition to police. In 2021, the company extended that moratorium indefinitely, according to multiple reports. Originally, Amazon said the moratorium was meant to give Congress time to pass regulation of the technology. 

“It would be a potential civil rights nightmare if the Department of Justice was indeed using Amazon’s facial recognition technology ‘Rekognition,’” Matt Cagle, a senior staff attorney at the American Civil Liberties Union of Northern California, said in a written statement to FedScoop, pointing to the racial bias issues with facial recognition. “After immense public pressure, Amazon committed to not providing a face recognition product to law enforcement, and so any provision of Rekognition to DOJ would raise serious questions about whether Amazon has broken that promise and engaged in deception.”

A 2018 test of Rekognition’s facial recognition capabilities by the ACLU incorrectly matched 28 members of Congress with mugshots. Those members were “disproportionately people of color,” according to the ACLU. 

The DOJ inventory update noting the use of the Amazon tool was “informative, but in some ways surprising,” said Caitlin Seeley George, the director of campaigns and operations at the digital rights group Fight for the Future, because “we haven’t seen specific examples of FBI using Amazon Rekognition in recent years and because Amazon has said and has continued to say that they will not sell their facial recognition technology to law enforcement.” 

“This is the problem with trusting a company like Amazon — or honestly any company — that’s selling this technology,” she added. “Not only could they change their mind at any point, but they can decide the barriers of what their word means and if and how they’re willing to make adjustments to what they have said that they would or wouldn’t do with their product and who they will or won’t sell it to.”

Ben Winters, senior counsel for the Electronic Privacy Information Center, said that “it feels like a weird time to be adopting this big, sensitive type system,” noting that once the technology is there, it’s “more entrenched.” He pointed to the recent executive order on AI and draft guidance for rights-impacting AI that’s due to be finalized by the Office of Management and Budget. 

A NextGov story from 2019 reported that the FBI was piloting Rekognition facial matching software for the purpose of mining through video surveillance footage. According to that story, the pilot started in 2018, though the DOJ did not address a FedScoop question about what happened to it or if it’s the same project discussed in the updated AI inventory.

A record available on the FBI’s Vault, the agency’s electronic Freedom of Information Act library, appears to show that the agency took issue with some reporting on that pilot at the time, but much of the document is redacted. 

Proactive approach from White House, NIST needed for facial recognition technology, report says https://fedscoop.com/facial-recognition-technology-report-dhs-fbi-nist-white-house/ Wed, 17 Jan 2024 16:00:00 +0000 https://fedscoop.com/?p=75604 A National Academies of Sciences, Engineering, and Medicine report sponsored by the FBI and DHS recommends executive action regarding facial recognition technology and making NIST the “logical home” for regulatory activities and standards.

Federal laws and regulations haven’t kept pace with advancements in facial recognition technology, a fact that merits executive action and more responsibility for agencies including the National Institute of Standards and Technology, a new report sponsored by the Department of Homeland Security and Federal Bureau of Investigation recommends.

The National Academies of Sciences, Engineering, and Medicine report, released Wednesday, found that the nation is lacking in “authoritative guidance, regulations, or laws to adequately address issues related to facial recognition,” a technology that has only grown in use in recent years with the rapid adoption of artificial intelligence models that fuel the tech. 

The report’s authors said it’s incumbent upon the U.S. government and lawmakers to be more proactive on legal and regulatory questions. 

“It is crucial that governments make tackling these issues a priority,” Jennifer Mnookin, University of Wisconsin-Madison chancellor and co-chair of the committee that wrote the report, said in a statement. “Failing or choosing not to adopt policies and regulations on the development and use of facial recognition technology would effectively cede decisionmaking and rulemaking on these important questions of great public concern entirely to the private sector and the marketplace.”

The report’s authors — who conducted the study independently of their DHS and FBI sponsors but were guided by questions from the agencies and NASEM staff and board members — noted the race and equity shortcomings inherent in facial recognition technologies, which are disproportionately reliant on data from white people. 

With that in mind, the authors urge the president to issue an executive order that develops guidelines for federal agencies on “the appropriate use of facial recognition technology” that takes into account “both equity concerns and the protection of privacy and civil liberties.” 

Meanwhile, the report said that Congress should consider a handful of legislative efforts on facial recognition, including storage limits on facial images and templates; mandated training and certification for system operators and decision-makers, such as those working in law enforcement; passage of a federal privacy law surrounding facial recognition technology or the adoption of federal privacy legislation targeting commercial practices that undermine privacy; and tackling specific concerns regarding the technology, such as surveillance and the potential for harassment and blackmail.

“The number of uses will continue to expand as the technology becomes more widespread and inexpensive,” Edward Felten, a committee co-chair and founding director of the Center for Information Technology Policy at Princeton University, said in a statement. “For example, it is likely only a matter of time before stores routinely scan customers’ faces upon entry to personalize shopping experiences and marketing, and perhaps more troubling, private individuals could potentially use it to target others.”

From a federal government perspective, the report’s authors recommended that NIST take on a greater role, calling on the agency to “sustain a vigorous program of facial recognition technology testing and evaluation to drive continued improvements in accuracy and reduction in demographic biases.” NIST’s Face Recognition Technology Evaluation verification process was cited as “a valuable tool,” making the agency the “logical home” for facial recognition regulatory activities within the government.

The authors also recommended that the federal government develop a risk management framework for organizations that takes into account the “performance, equity, privacy, civil liberties, and effective governance” implications of facial recognition technology. NIST’s Cybersecurity Framework and AI Risk Management Framework were singled out as positive examples of this approach, making the agency a natural fit for developing something similar for facial recognition technology.

DHS and the Department of Justice, meanwhile, are charged by the report’s authors with developing “a multi-disciplinary and multi-stakeholder working group on facial recognition technology to develop and periodically review standards for reasonable and equitable use, as well as other needed guidelines and requirements for the responsible use” of the technology by federal, state and local law enforcement authorities.

“As governments and other institutions take affirmative steps through both law and policy to ensure the responsible use of [facial recognition technology], they will need to take into account the views of government oversight bodies, civil society organizations, and affected communities to develop appropriate safeguards,” the report stated.

Senate lawmakers identify AI as potential solution for combating robocalls, scams https://fedscoop.com/ai-potential-solution-combating-robocalls/ Tue, 24 Oct 2023 22:13:19 +0000 https://fedscoop.com/?p=73778 Commerce subcommittee hearing follows a Federal Communications Commission report that identified 20 robocall risk mitigation documents that did not comply with the agency’s guidelines for telephone communication providers.

Senate lawmakers are considering how generative artificial intelligence may be used to protect consumers from illegal robocalls and robotexts that also use AI. 

Members of the Senate Commerce, Science and Transportation Subcommittee on Communications, Media and Broadband agreed that the Federal Communications Commission and the Department of Justice are not doing enough to protect consumers from robocall scams. And witnesses during the Tuesday hearing pointed to the DOJ specifically for failing to pursue cases that violate the Telephone Consumer Protection Act.

Much of the questioning from Sens. Ben Ray Luján, D-N.M., and J.D. Vance, R-Ohio, centered on the use of large language models to aid the FCC’s robocall mitigation database, which houses telephone providers’ caller ID verification certifications and telecoms’ plans to address illegal robocalls. Last week, the FCC found 20 submissions that were not compliant with its rules; one company had submitted a blank piece of paper as its plan.

“Part of the reason these scammers are so effective in tricking consumers and evading enforcement is that the technology is constantly evolving,” Luján said. “Automated robocalls and robotexts are using chatbots and generative artificial intelligence to impersonate a real, live person, lulling the recipient into a false sense of security by mimicking voices and mannerisms.”

Luján and Vance agreed on the potential to use AI to assist in protecting consumers against robocall scams and address noncompliance with the FCC. Luján cited a Monday filing from the FCC, in which Chairwoman Jessica Rosenworcel unveiled an inquiry into how AI might fit into agency responsibilities for the TCPA in order to prevent robocalls and robotexts. 

The senators, meanwhile, asked witnesses to provide suggestions on how AI can be used to prevent robocalls.

“One of the suggestions I had earlier was to use a large language model to go through robocall mitigation database filings and toss out all the ones that are junk,” Mike Rudolph, chief technology officer at YouMail, said in response to Luján’s question. “So LLMs can be trained pretty quickly to synthesize that data and understand the intent.”
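
Rudolph did not describe a specific implementation, but the kind of triage he sketched could look something like the following — a hypothetical example assuming the OpenAI Python client, with the model name, prompt and filing text chosen purely for illustration:

```python
# Hypothetical LLM triage of robocall mitigation database filings.
# Model name and prompt are placeholders; this is not an FCC or YouMail system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_filing(filing_text: str) -> str:
    """Ask the model whether a filing looks like a substantive mitigation plan."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You review robocall mitigation plan filings. "
                        "Reply with exactly one word: SUBSTANTIVE or JUNK."},
            {"role": "user", "content": filing_text[:8000]},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(triage_filing("We certify that we have implemented STIR/SHAKEN on our IP network..."))
print(triage_filing("(blank page submitted)"))
```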

The AI discussed by lawmakers and witnesses that would be used to protect against robocalls and texts is different from the generative AI that bad actors often use to scam citizens. Sen. Amy Klobuchar, D-Minn., shared a story about a family that avoided a robocall scam featuring an impersonation of their Marine son, who was deployed during that time. 

Rudolph said that voice service providers are continuing to use technology to identify these specific scams, assuring lawmakers that VSPs “take protecting their customers very seriously.” 

In addition to the exploration of AI to combat robocalls, Megan Brown, a U.S. Chamber of Commerce representative and partner at Wiley Rein, recommended that Congress incentivize the DOJ to pursue fines and punishments for reported violations of the TCPA.

Congress should urge the DOJ “to make enforcement a priority by acting aggressively on referrals it gets from the FCC and represent its own cases directly,” Brown said. “As a former DOJ official, I think it’s a missed opportunity for them.”

NextGen to pay $31M in False Claims Act settlement over health record allegations https://fedscoop.com/nextgen-false-claims-act-settlement/ Fri, 14 Jul 2023 22:20:32 +0000 https://fedscoop.com/?p=70538 The Justice Department alleged NextGen Healthcare used an "auxiliary product" to obtain certification and gave incentive credits to users who recommended the system.

The electronic health record vendor NextGen Healthcare Inc. agreed to a multimillion-dollar settlement to resolve allegations that it violated the False Claims Act by misrepresenting its product.

The $31 million agreement follows allegations that the company misrepresented what some versions of its electronic health record (EHR) software could do and provided “unlawful remuneration” to users as an inducement to recommend the product, the Department of Justice said in a Friday statement.

The DOJ alleged that NextGen “improperly obtained certification for its EHR product” under the 2014 edition of the certification program for health information technology operated by the Office of the National Coordinator, according to a DOJ complaint filed with the settlement. It then used that certification “to obtain incentive payments.”

NextGen, DOJ alleged, relied during certification on “an auxiliary product” designed to run the test scripts it needed to perform for approval. As a result, the EHR released to users lacked functionalities, such as “the ability to record vital sign data, translate data into required medical vocabularies, and create complete clinical summaries,” the statement said.

The government also alleged that NextGen violated the Anti-Kickback Statute by giving credits to users whose recommendation resulted in a sale of the EHR system. Those credits were “often worth as much as $10,000,” DOJ said.

The settlement includes resolution of whistleblower claims brought by two health care professionals — Toby Markowitz and Elizabeth Ringold — under the False Claims Act. The whistleblowers in the case will receive roughly $5.6 million, the DOJ said.

In a written statement regarding the settlement, a NextGen spokesperson said: “The Company denies that any of its conduct violated the law, and the settlement agreement does not include any admissions of wrongdoing. This agreement relates to claims from more than a decade ago.”

The spokesperson added that the settlement doesn’t change NextGen’s products or policies for compliance. They added: “To avoid the distraction and expense of litigation, we believe it is in the best interest of the Company to put this historical matter behind us and keep our attention focused on innovating solutions that enable better healthcare outcomes for all.”

Years later, the Marshals Service is still looking for help with seized crypto https://fedscoop.com/marshals-service-still-looking-for-help-with-seized-crypto/ Mon, 26 Jun 2023 21:27:10 +0000 https://fedscoop.com/?p=69717 Two agreements for managing seized cryptocurrency assets appear to have fallen through.

Amid a surging number of criminal convictions involving cryptocurrency, the U.S. Marshals Service has been tasked with managing and disposing of bitcoin and other digital assets. As with other seized property, the law enforcement agency takes custody of crypto through the Department of Justice’s Asset Forfeiture Program — and even periodically auctions it off. 

But, at least from a software perspective, keeping track of crypto is a lot harder than selling a Chagall. For that reason, the law enforcement agency has spent the past few years trying to hire a private tech company to help. But despite settling on contracts with crypto companies, at least two agreements appear to have fallen through. Today, the Marshals Service is still maintaining seized crypto on its own. 

“As the seizure and forfeiture of cryptocurrency has become commonplace, the USMS has sought to create a contract with private industry, just as it does with nearly all other asset types,” a spokesperson for the DOJ’s Asset Forfeiture Division told FedScoop. “Currently there is no private company that manages USMS’s cryptocurrency portfolio.” 

The search for a contractor started several years ago, when the U.S. Marshals Service requested information from companies about the prospect of managing the agency’s cryptocurrency. In April 2021, BitGo, a crypto security company based in California, won a $4.5 million contract.

But BitGo then lost the agreement a few months later, after the Small Business Administration flagged the company as too big to be eligible for the contract. (Back in May, a company called Galaxy Digital had announced it planned to spend $1.2 billion to acquire BitGo, though the deal later fell apart.) In July, the Marshals Service hired another company, Anchorage Digital, which is based in San Francisco and also offers cryptocurrency custody services. 

Now, though, the Anchorage Digital contract also appears to have collapsed. As with the BitGo contract, the Federal Procurement Data System shows that a Marshals Service contract with Anchor Labs was “terminate[d] for convenience.” Anchorage Digital is a subsidiary of Anchor Labs, according to its website. The company appears to have taken down a Medium post touting the agreement.

“Both awards were subsequently stayed pending the outcome of protests filed with the U.S. Small Business Administration (SBA), challenging the companies’ business size,” the USMS spokesperson told FedScoop. “Ultimately, SBA determined that both companies were other than small business.”

The company did not respond to a request for comment, though it’s worth noting that the Office of the Comptroller of the Currency issued a consent order in 2022 against the company, which holds an OCC banking charter. The Small Business Administration did not provide a comment by the time of publication.

“Not all cryptocurrency seized for forfeiture by the federal government is transferred to the USMS for custody and liquidation,” added the DOJ spokesperson. “The USMS utilizes the best practices and services of private industry to most effectively and securely manage and liquidate all assets in its custody.” 

The USMS has struggled with handling crypto, as a DOJ Office of Inspector General report highlighted last summer. At the time of the report’s publication, the Marshals Service was using multiple spreadsheets to manage its crypto, primarily because digital assets like bitcoin aren’t easily tracked in a DOJ property management program called the Consolidated Asset Tracking System (CATS).

These documents, according to the inspector general, don’t have “inventory management controls” and “documented operating procedures.” Policies for handling, storing, and valuing crypto are also “inadequate or absent, and in some instances provide conflicting guidance.” 

“The USMS’s supplemental spreadsheets do not have the capability to track edits made to the cryptocurrency entries in the USMS’s inventory records,” warned the inspector general. “As a result, these inventory records could be edited or deleted without a record of such a change being made and without the knowledge of individuals responsible for maintaining the spreadsheets.”
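
By way of illustration only — and not a description of any DOJ or USMS system — the kind of edit-tracking control the inspector general describes is often implemented as an append-only, hash-chained log, where altering or deleting any past entry breaks verification:

```python
# Hypothetical sketch of an append-only, hash-chained inventory log — the kind of
# edit-tracking control the inspector general said the spreadsheets lack.
import hashlib
import json
from datetime import datetime, timezone

class AssetLedger:
    def __init__(self):
        self.entries = []

    def record(self, asset_id: str, action: str, details: dict, user: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "asset_id": asset_id,
            "action": action,          # e.g., "seized", "valued", "transferred"
            "details": details,
            "user": user,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = AssetLedger()
ledger.record("BTC-0001", "seized", {"amount_btc": 3.2}, user="analyst1")
print(ledger.verify())  # True while the log is intact
```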

In some circumstances, the Marshals Service was “not fully complying” with rules for tracking crypto in CATS, the report added.

The inspector general also said that the Marshals Service needs to develop more fleshed-out crypto policies before beginning work with a private company, cautioning that “without properly documented policies and procedures, the USMS lacks an adequate foundation for building performance requirements for a cryptocurrency services contract.”

Justice Department exploring generative AI to overhaul IT service desk https://fedscoop.com/justice-department-exploring-generative-ai-to-overhaul-it-service-desk/ Thu, 15 Jun 2023 15:29:10 +0000 https://fedscoop.com/?p=69510 In an interview, CIO Melinda Rogers paints a portrait of how generative AI tools could make the DOJ IT service desk program less cumbersome and frustrating.

The Justice Department’s chief information officer said one of her first priorities in experimenting with generative artificial intelligence will likely be to use it to overhaul the department’s IT customer service desk to make it smoother, faster and more customer-friendly. 

Melinda Rogers, who oversees the Justice Department’s $3.1 billion IT portfolio and leads the agency’s tech and cybersecurity programs, told FedScoop recently that she hopes to make significant improvements to DOJ’s IT service desk program through new and recompeted contracts that will deploy cutting edge AI technologies. 

“I think something as basic as our service desk, that’s an area where we can have lots of opportunities for improvement [using AI]. And our IT service desk is just one of those areas where it’s hard to get it smooth and clean and so if I would start anywhere with AI, I would probably start there to help improve our user experience,” Rogers told FedScoop during a wide-ranging interview at the Justice Department headquarters earlier this month.  

“So we’re recompeting our IT service desk contract … but I want to be very intentional on how we go about deploying our service desk so that I can have the opportunity to bring in some artificial intelligence and make it a better customer experience,” Rogers said. 

Rogers said that the Justice Department could look to certain companies in the private sector that have excelled at IT customer service as examples of how to successfully overhaul its own program.

“For example, American Express, they have a pretty well-honed, good customer IT experience where you can chat with their reps easily online and it’s fast and super responsive. So that’s one area where we need to be more like American Express,” said Rogers.

“I could do a great job on the backend IT infrastructure stuff or our analytics or whatnot, but if I can’t get the basic customer-facing customer service desk stuff right, then I don’t think I can build the trust with people,” she added.

Rogers, who has been CIO at the Justice Department since 2020 and was CISO within the agency for eight years prior to that, started her career with Bank of America and Equifax in the private sector. She has a bachelor’s degree in economics from George Mason University and an MBA focused on marketing and finance from Emory University.

As Justice recompetes its IT service desk contract, Rogers wants the resulting contract to be more intentional and broken apart into smaller pieces rather than “a lot of different services all sort of swept into one master vehicle, which is how we’ve typically done it.”

Leidos is the incumbent holding the current contract to support Justice’s service desk work.

She added that some sub-departments or components within the Justice Department have had success in deploying AI or other cutting-edge technologies when choosing to work with a smaller contractor.

“We’ve often had success going with smaller firms in the Beltway because maybe they have a little bit more attention to detail and a little more skin in the game, they push for that good customer experience,” said Rogers.

“Sometimes with larger firms, it’s more of a body shop, right? They just want buttcheeks in seats. And for me, it’s not just cheeks in seats. You need to know who the VIPs are when they call our phones and the system has to have gold stars next to that person so they don’t have to say can you spell ‘Garland’ for me,” Rogers said referring to Attorney General Merrick Garland. “You can’t have that.”

Rogers pointed out a frustrating personal experience she had with the need to constantly change her network password within the DOJ a few years ago, which she said was tiresome, didn’t work and pushed her to make customer experience a top priority as CIO.

“I’ve had some not-pleasant user experiences internally. And I work in IT and I thought it was, you know, cumbersome, right? It was not elegant. So my desire is to take our service desk to a place of elegance,” Rogers said.

Justice isn’t the only federal agency making customer experience — whether that’s internal or external customers — a top priority. The White House issued an executive order in late 2021 directing agencies that provide high-impact public services to make CX a top priority.

Just this week, the Navy announced a new initiative by which it — similar to what Justice is planning to do — will use AI to power a chatbot to support its IT help desk.

US-UK serious crime data access agreement comes into force https://fedscoop.com/us-uk-serious-crime-data-access-agreement-comes-into-force/ Tue, 04 Oct 2022 00:38:55 +0000 https://fedscoop.com/?p=61239 The pact allows law enforcement agencies to request data directly from IT service providers operating across the Atlantic.

An agreement between the U.S. and U.K. governments to allow law enforcement agencies access to data held by IT service providers across the Atlantic has come into force.

The pact allows law enforcement agencies in one country to request data directly from service providers in the other country, without violating restrictions on cross-border disclosures.

It relates only to data being sought in order to counter serious crime, and the agreement is authorized by the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which was enacted by Congress in 2018.

According to the Department of Justice, it is the first such agreement of its kind and will allow each country’s investigators to “gain better access to vital data to combat serious crime in a way that is consistent with privacy and civil liberties standards.”

Under terms of the pact, orders submitted by U.S. authorities must not target people located in the U.K. and must relate to a serious crime. Similarly, orders from U.K. authorities must not target people located in the U.S.

Both governments have selected designated authorities responsible for implementing the agreement for each country. In the U.S., the designated authority is the DOJ’s Office of International Affairs (OIA), and in the U.K., it is the Investigatory Powers Unit of the U.K. Home Office.

The DOJ’s Office of International Affairs has created a CLOUD team to review and certify orders that comply with the agreement on behalf of federal, state, local and territorial authorities in the U.S. It will transmit certified orders directly to U.K. service providers and arrange for the return of responsive data to the requesting authorities.

DOJ added that the pact is intended to enhance the ability of both countries to prosecute serious crimes such as terrorism and child exploitation.

OPM hack class action plaintiffs win initial approval for $63M payout https://fedscoop.com/opm-hack-class-action-plaintiffs-win-initial-approval-for-63m-payout/ Wed, 08 Jun 2022 17:01:12 +0000 https://fedscoop.com/?p=53424 The settlement should bring to an end the long-running lawsuit brought against the agency in response to the 2014 and 2015 breaches.

A D.C. federal judge Tuesday gave preliminary approval for a $63 million settlement to go ahead in a class action brought by victims of the 2014 and 2015 Office of Personnel Management data breaches.

In a court order, U.S. district judge Amy Berman Jackson said the figure in the agreement was “fair, reasonable, and adequate, and in the best interest of named plaintiffs and class members.”

The $63 million payout remains subject to a fairness hearing scheduled for Oct. 14.

If it receives final approval, the settlement will bring to an end a long-running class action brought by U.S. citizens and permanent residents whose personal information was compromised as a result of cyberattacks at OPM and through the breach of electronic information systems operated by contractor Peraton in 2013 and 2014.

The class action is open to citizens who had to spend money remedying issues directly related to the breach, such as paying for credit record monitoring services, and claims may be submitted until Dec. 22.

In 2015, OPM announced it was hit with a series of intrusions understood to be linked to two Chinese government-sponsored groups, which resulted in the compromise of personal information of around 22 million individuals.

A subsequent report by the House Committee on Oversight and Reform found that the earliest known data breach at the agency came in November 2013 but was not detected for years until a private cybersecurity firm was brought in to run forensics.

Before that, malware was found to be lurking on the organization’s data infrastructure dating back to 2012, according to the Department of Homeland Security’s U.S. Computer Emergency Readiness Team.

“The long-standing failure of OPM’s leadership to implement basic cyber hygiene, such as maintaining current authorities to operate and employing strong multi-factor authentication, despite years of warnings from the inspector general, represents a failure of culture and leadership, not technology,” the report stated at the time.

Following the breach, OPM contracted with credit monitoring company ID Experts to provide monitoring services to victims of the breach. According to federal government spending data, the agency has so far spent $248 million on the contract, which has an award ceiling of $416 million. 

In an online statement, Daniel Girard, lead counsel for the plaintiffs, said: “The settlement ends a seven-year legal effort to win compensation from the government.”

“The settlement will compensate victims who suffered a financial loss as a result of the hack, providing for minimum payments of $700, even for those with minor expenses,” he added. “The court’s order sets a deadline of December 22, 2022 for class members to submit a claim.”

An OPM spokesperson declined to comment and referred FedScoop to the Department of Justice.

Oracle completes $28.3B acquisition of Cerner  https://fedscoop.com/oracle-completes-28-3b-acquisition-of-cerner/ Wed, 08 Jun 2022 13:26:31 +0000 https://fedscoop.com/?p=53293 The deal closes after Oracle’s offer to buy Cerner for $95 per share cleared scrutiny from global antitrust authorities.

Cloud giant Oracle has completed its $28.3 billion acquisition of electronic health records company Cerner.

The deal concludes after the technology company’s tender offer to purchase all issued and outstanding Cerner shares for $95 per share expired after midnight, eastern time, on June 6.

It brings to an end scrutiny of the transaction from antitrust authorities including the FTC and the European Commission.

Cerner currently has contracts with the Coast Guard, Centers for Disease Control and Prevention, Department of Health and Human Services, Centers for Medicare and Medicaid Services, Department of Defense and Department of Veterans Affairs.

Oracle is hoping that the acquisition will allow it to use Cerner’s trove of health information to bring cloud-based data analytics and AI technologies to bear on the sector. It may also shift some business away from AWS, which Cerner named as a preferred cloud partner in 2019.

However, Oracle will have to respond to continued scrutiny from lawmakers, who have repeatedly raised concerns over Cerner’s role at the center of the VA’s troubled electronic health records (EHR) modernization program.

Late last month, Senate lawmakers unanimously passed legislation that would require the Department of Veterans Affairs to report the costs of its EHR modernization program more regularly and in greater detail.

The proposed bill has already passed the House of Representatives and will now be sent to President Biden to be signed into law.

VA has recently continued with the rollout of its EHR modernization program, which relies on Cerner’s Millennium platform, despite opposition from lawmakers.

Earlier this month, the department forged ahead with a go-live of the platform at the Central Ohio Healthcare System in Columbus, Ohio, following implementation of the system at sites in Spokane and Walla Walla, Washington.

In March, the department’s Office of Inspector General published a trio of reports that identified major concerns about care coordination, ticketing and medication management associated with the EHR program launch.

Oracle Chairman and Chief Technology Officer Larry Ellison will discuss the acquisition at an online event on June 9.

Antitrust review waiting period ends in Oracle-Cerner deal https://fedscoop.com/antitrust-review-waiting-period-ends-in-oracle-cerner-deal/ Thu, 24 Feb 2022 20:24:47 +0000 https://fedscoop.com/?p=47980 Oracle previously extended the transaction waiting period until Feb. 22.

The initial antitrust waiting period for Oracle’s $29.8 billion bid to acquire the main technology provider behind the Department of Veterans Affairs’ electronic health record modernization program expired Monday.

Oracle’s all-cash offer to acquire Cerner for $95 per share is scheduled to expire at midnight at the end of the day on March 16. In a press release, Oracle said the parties “anticipate extending the tender offer to allow additional time for the satisfaction of the remaining conditions to the tender offer.”

The two companies are seeking clearance for the transaction to proceed. The deal would be Oracle’s largest-ever acquisition, and earlier this month the waiting period was extended until midnight on Feb. 22.

Under the Hart-Scott-Rodino Act, the Federal Trade Commission and the Department of Justice’s antitrust division typically have 30 days to conduct a preliminary review of the deal.

At the time of the transaction announcement, both companies said the transaction would improve the availability of technologies such as cloud, artificial intelligence and machine learning to federal agencies, and that the goal would be to focus on delivering zero unplanned downtime for Cerner systems running on Oracle’s Gen2 cloud.

Oracle announced in December that it had signed an agreement to acquire Cerner and that following completion of the prospective deal, the medical records company would become a standalone business unit within Oracle.

Cerner’s Millennium platform makes up the backbone of the VA’s EHR modernization program, which has attracted scrutiny from lawmakers in response to escalating costs and concerns over the new medical records system raised by frontline doctors.

The DOJ has the power either to halt a transaction entirely or to require divestitures before a deal can proceed. It may also require an extension of a deal waiting period.
