Social Media Archives | FedScoop
https://fedscoop.com/tag/social-media/

FedScoop delivers up-to-the-minute breaking government tech news and is the government IT community's platform for education and collaboration through news, events, radio and TV. FedScoop engages top leaders from the White House, federal agencies, academia and the tech industry both online and in person to discuss ways technology can improve government, to exchange best practices and to identify how to achieve common goals.

Database of verified government social media accounts loses its teeth
https://fedscoop.com/database-of-verified-government-social-media-accounts-loses-its-teeth/
April 11, 2024

As concerns about AI-fueled misinformation ahead of the 2024 election grow, agencies are no longer required to register their social media accounts.

Back in 2016, the General Services Administration announced the launch of the U.S. Digital Registry, a database for tracking official government social media accounts, mobile websites, and apps. Part of the goal was to better track government social media efforts — and update an earlier federal social media registry meant to help people verify that government accounts were authentic.

But today, agencies are no longer mandated to update the tool with their information, a spokesperson for GSA told FedScoop. The elimination of that requirement comes as election season draws closer and the threat of online impersonations of official government accounts grows.  Meanwhile, the Cybersecurity and Infrastructure Security Agency has said that it’s no longer communicating or coordinating with social media companies over potential disinformation campaigns, as CyberScoop reported last month. 

The tool came amid a series of Obama-era government digital transformation efforts. Back in November 2016, the White House circulated a memo that required agencies to “register their public-facing digital services such as social media, collaboration accounts, mobile apps and mobile websites” on the site within 60 days. The point, the memo said, was to “help confirm the validity of official U.S. Government digital platforms.” 

But a new memo, from September 2023, did not renew that requirement, GSA said. The agency told FedScoop that it is in the process of updating a page that still notes the requirement. Currently, there are 463 user accounts on the registry, according to GSA, though it’s not clear how actively it is used by the public. 

A spokesperson for the Office of Management and Budget directed FedScoop to GSA.

Still, many agencies don’t appear to be keeping their entries current. NASA, for instance, is extremely prolific on social media but does not register its accounts in the tool because the requirement was rescinded. The Federal Bureau of Investigation, which maintains social media accounts for its many field offices, has only five registered accounts in the tool.

The tool has had other problems as well: Back in 2017, a George Washington University researcher flagged vulnerabilities, including listings for suspended accounts that, at one point, were tweeting in Russian, as well as deleted accounts with usernames that could be taken over.

The government continues to face issues verifying the authenticity of its accounts. Last year, FedScoop reported on how, after the U.S. Office of Personnel Management deleted a mobile app meant to help recruit people to federal jobs, a similar-sounding fake took its place. After FedScoop asked about the app, Google removed it from the Google Play Store.

FBI, DHS lack information-sharing strategies for domestic extremist threats online, GAO says
https://fedscoop.com/fbi-dhs-domestic-extremist-violent-threats-social-media-gaming/
Feb. 29, 2024

A new watchdog report finds that an absence of “strategy or goals” from the agencies in how they engage with social media and gaming companies on violent threats calls into question the effectiveness of their communications with those platforms.

The FBI and Department of Homeland Security’s information-sharing efforts with social media and gaming companies on domestic extremist threats lack an overarching strategy, a Government Accountability Office report found, raising questions about how effectively the agencies’ communications address threats of violence online.

In response to the proliferation in recent years of content on social media and gaming platforms that promotes domestic violent extremism, the FBI and DHS have taken steps to increase the flow of information with those platforms. But “without a strategy or goals, the agencies may not be fully aware of how effective their communications are with companies, or how effectively their information-sharing mechanisms serve the agencies’ overall missions,” the GAO said.

For its report, the GAO requested interviews with 10 social media and gaming companies whose platforms appeared most frequently alongside domestic violent extremism terms in searches of news articles and reports. Discord, Reddit and Roblox agreed to participate, as did a social media company and a game publisher, both of which asked to remain anonymous.

The platforms reported using a variety of measures to identify content that promotes domestic violent extremism, including machine learning tools to flag posts for review or automatic removal, reporting by users and trusted flaggers, reviews by human trust and safety teams, and design elements that discourage users from committing violations.
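The report describes these measures at a policy level. As a deliberately simplified sketch of the flag-then-review flow, the Python below routes posts matching a keyword watchlist into a human review queue; the watchlist terms and data shapes are placeholder assumptions, and real platforms lean on machine learning classifiers and trusted-flagger reports rather than bare keyword matching.

```python
from dataclasses import dataclass, field

# Placeholder watchlist terms; illustrative only, not a real moderation list.
WATCHLIST = {"attack", "destroy"}

@dataclass
class ReviewQueue:
    """Holds (post_id, reason) pairs awaiting a human trust-and-safety review."""
    items: list = field(default_factory=list)

    def enqueue(self, post_id: str, reason: str) -> None:
        self.items.append((post_id, reason))

def flag_post(post_id: str, text: str, queue: ReviewQueue) -> bool:
    """Route a post to human review if it contains any watchlist term."""
    hits = sorted(term for term in WATCHLIST if term in text.lower())
    if hits:
        queue.enqueue(post_id, "matched terms: " + ", ".join(hits))
    return bool(hits)

queue = ReviewQueue()
flag_post("post-123", "They plan to destroy the generator", queue)
print(queue.items)  # [('post-123', 'matched terms: destroy')]
```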

Once those companies have identified a violent threat, there are reporting mechanisms in place with both DHS and the FBI. “However, neither agency has a cohesive strategy that encompasses these mechanisms, nor overarching goals for its information-sharing efforts with companies about online content that promotes domestic violent extremism,” the GAO noted.

The agencies are engaged in multiple other efforts to stem the tide of domestic extremist threat content. The FBI, for example, is a participant in the Global Internet Forum to Counter Terrorism, and in the United Nations’ Tech Against Terrorism initiative. The agency also employs a program manager dedicated to communications with social media companies, conducts yearly meetings with private sector partners and operates the National Threat Operations Center, a centralized entity that processes tips.

DHS, meanwhile, has participated in a variety of non-governmental organizations aimed at bolstering information-sharing, in addition to providing briefings to social media and gaming companies through the agency’s Office of Intelligence and Analysis. 

There are also joint FBI-DHS efforts in progress, including the issuing of products tied to the online threat landscape, and a partnership in which the FBI delivers briefings, conducts webinars and distributes informational materials on various threats to Domestic Security Alliance Council member companies. 

Though the FBI and DHS are clearly engaged in myriad efforts to stem domestic extremist violent threats made on social media and gaming platforms, the GAO noted that implementing strategies and setting specific goals should be considered “a best practice” across agencies.

With that in mind, the GAO recommended that the FBI director and the I&A undersecretary both develop a strategy and goals for information-sharing on domestic violent extremism with social media and gaming companies. DHS said it expects to complete the strategy by June.

AI watermarking could be exploited by bad actors to spread misinformation. But experts say the tech still must be adopted quickly
https://fedscoop.com/ai-watermarking-misinformation-election-bad-actors-congress/
Jan. 3, 2024

As Washington putters on AI watermarking legislation, TikTok and Adobe are leading the way with transparency standards.

By and large, government and private-sector technologists agree that the use of digital watermarking to verify AI-generated content should be a key component for tackling deepfakes and other forms of malicious misinformation and disinformation. 

But there is no clear consensus regarding what a digital watermark is, or what common standards and policies around it should be, leading many AI experts and policymakers to fear that the technology could fall short of its potential and even empower bad actors.

Industry groups and a handful of tech giants — most notably TikTok and Adobe — have been singled out by experts as leading the charge on AI watermarking and embracing a transparent approach to the technology. They’ll need all the help they can get during what promises to be an especially chaotic year in digital spaces. 

With over 2 billion people expected to vote in elections around the world in 2024, AI creators, scholars and politicians said in interviews with FedScoop that standards on the watermarking of AI-generated content must be tackled in the coming months — or else the proliferation of sophisticated, viral deepfakes and fake audio or video of politicians will continue unabated.

“This idea of authenticity, of having authentic trustworthy content, is at the heart of AI watermarking,” said Ramayya Krishnan, dean of Carnegie Mellon University’s information systems and public policy school and a member of President Joe Biden’s National Artificial Intelligence Advisory Committee. 

“Having a technological way of labeling how content was made and having an AI detection tool to go with that would help, and there’s a lot of interest in that, but it’s not a silver bullet,” he added. “There’s all sorts of enforcement issues.” 

Digital watermarking “a triage tool for harm reduction”

There are three main types of watermarks created by major tech companies and AI creators to reduce misinformation and build trust with users: visible watermarks added to images, videos or text by companies like Google, OpenAI or Getty to verify the authenticity of content; invisible watermarks that can only be detected through special algorithms or software; and cryptographic metadata, which details when a piece of content was created and how it has been edited or modified before someone consumes it.
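Of the three approaches, cryptographic metadata is the least visible to readers, so a concrete sketch may help. The toy Python below binds creation details to a content hash and signs both, letting a consumer detect any later edit. It is a minimal illustration under stated assumptions: the shared HMAC key and field names are invented here, and real provenance standards such as C2PA rely on public-key certificate chains rather than a shared secret.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: shared secret for this sketch

def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Record who made the content and with what tool, then sign it with the hash."""
    metadata = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_provenance(content: bytes, metadata: dict) -> bool:
    """True only if both the metadata and the content are unmodified."""
    claimed = {k: v for k, v in metadata.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, metadata["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
meta = attach_provenance(image, creator="newsroom", tool="generative-model-x")
print(verify_provenance(image, meta))        # True
print(verify_provenance(b"tampered", meta))  # False: the edit is detected
```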

Using watermarking to try to reduce AI-generated misinformation and disinformation can be helpful when the average consumer is viewing a piece of content, but it can also backfire: Bad actors can manipulate a watermark and create even more misinformation, AI experts focused on watermarking told FedScoop.

It’s like Olympic athletes — now that I know that you’re looking for this drug, I’ll just take another drug.

Senior Senate independent staffer, on how bad actors can manipulate watermarks

“Watermarking technology has to be taken with a grain of salt because it is not so hard for someone with knowledge of watermarks and AI to break it and remove the watermark or manufacture one,” said Siwei Lyu, a University at Buffalo computer science professor who studies deepfakes and digital forgeries.

Lyu added that digital watermarking is “not foolproof” and invisible watermarks are often more effective, though not without their flaws. 

“I think watermarks mostly play on people’s unawareness of their existence. So if they know they can, they will find a way to break it.”

A senior Senate independent staffer deeply involved in drafting legislation related to AI watermarking said the concern of bad actors using well-intentioned watermarks for manipulative purposes is “1,000% valid. It’s like Olympic athletes — now that I know that you’re looking for this drug, I’ll just take another drug. It’s like we need to try the best we can to keep pace with the bad actors.”

When it comes to AI watermarking, the Senate is currently in an “education and defining the problem” phase, the senior staffer said. Once the main problems with the technology are better defined, the staffer said they’ll begin to explore whether there is a legislative fix or an appropriations fix.

Senate Majority Leader Chuck Schumer said in September that ahead of the 2024 election, tackling issues around AI-generated content that is fake or deceptive and can lead to widespread misinformation and disinformation was an exceedingly time-sensitive problem.

“There’s the issue of actually having deepfakes, where people really believe … that a candidate is saying something when they’re totally a creation of AI,” the New York Democrat said after his first closed-door AI Insight Forum.

“We talked about watermarking … that one has a quicker timetable maybe than some of the others, and it’s very important to do,” he added.

Another AI expert said that watermarking can be manipulated by bad actors in a small but highly consequential number of scenarios. Sam Gregory, executive director at the nonprofit WITNESS, which helps people use technology to promote human rights, said it’s best to think of AI watermarking as “almost a triage tool for harm reduction.” 

“You’re making available a greater range of signals on where content has come from that works for 95% of people’s communication,” he said. “But then you’ve got like 5% or 10% of situations where someone doesn’t use the watermark, to conceal their identity or strip out information, or perhaps they’re a bad actor.

“It’s not a 100% solution,” Gregory added.

TikTok, Adobe leading the way on watermarking

Among major social media platforms, Chinese-owned TikTok has taken an early lead on watermarking, requiring users to be highly transparent when AI tools and effects are used within their content, three AI scholars told FedScoop. Furthermore, the company has created a culture of encouraging users to be comfortable with sharing the role that AI plays in altering their videos or photos in fun ways.

“TikTok shows you the audio track that was used, it shows you the stitch that was made, it shows you the AI effects used,” Gregory said. And as “the most commonly used platform by young people,” TikTok makes it “easy and comfortable to be transparent about how a piece of content was made with presence of AI in the mix.” 

TikTok recently announced new labels for disclosing AI-generated content. In a statement, the social media platform said that its policy “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content. Creators can now do this through the new label (or other types of disclosures, like a sticker or caption).”

We realized that we can’t keep getting out in front to determine if something is false, so we decided to flip it and say, ‘Let’s have everybody expect to say this is true.’

Jeffrey Young, Adobe senior solutions consultant manager, on the company’s approach to content authenticity

Major AI developers, including Adobe and Microsoft, also support some forms of labeling AI in their products. Both tech giants are members of the Coalition for Content Provenance and Authenticity (C2PA), which addresses the prevalence of misinformation online through the development of technical standards for certifying the source and history of online content.

Jeffrey Young, a senior solutions consultant manager at Adobe, said the company has “had a big drive for the content authenticity initiative” due in large part to its awareness that bad actors use Photoshop to manipulate images “for nefarious reasons.” 

“We realized that we can’t keep getting out in front to determine if something is false, so we decided to flip it and say, ‘Let’s have everybody expect to say this is true,’” Young said. “So we’re working with camera manufacturers, working with websites on their end product, that they’re able to rollover that image and say, this was generated by [the Department of Homeland Security], they’ve signed it, and this is confirmed, and it hasn’t been manipulated since this publication.”

Most major tech companies are in favor of labeling AI content through watermarking and are working to create transparent watermarks, but the tech industry recognizes that it’s a simplistic solution, and other actions must be taken as well to comprehensively reduce AI-generated misinformation online. 

Paul Lekas, senior vice president for global public policy & government affairs at the Software & Information Industry Association, said the trade group — which represents Amazon, Apple and Google, among others — is “very supportive” of watermark labeling and provenance authentication but acknowledges that those measures do “not solve all the issues that are out there.”

“Ideally we’d have a system where everything would be clear and transparent, but we don’t have that yet,” Lekas said. “I think another thing that we are very supportive of is nontechnical, which is literacy — media literacy, digital literacy for people — because we can’t just rely on technology alone to solve all of our problems.”

In Washington, some momentum on AI watermarking

The White House, certain federal agencies and multiple prominent members of Congress have made watermarking and the reduction of AI-generated misinformation a high priority, putting forward a patchwork of proposed solutions to regulate AI and create policy safeguards around deepfakes and other manipulative content.

Through Biden’s October AI executive order, the Commerce Department’s National Institute of Standards and Technology has been charged with creating authentication and watermarking standards for generative AI systems — following up on discussions in the Senate about similar kinds of verification technologies.

Alondra Nelson, the former White House Office of Science and Technology Policy chief, said in an interview with FedScoop that there is enough familiarity with watermarking that it is no longer “a completely foreign kind of technological intervention or risk mitigation tactic.”

“I think that we have enough early days experience with watermarking that people have to use,” she said. “You’ve got to use it in different kinds of sectors for different kinds of concerns, like child sexual abuse and these sorts of things.” 

Congress has also introduced several pieces of legislation related to AI misinformation and watermarking, such as a bill from Rep. Yvette Clarke, D-N.Y., to regulate deepfakes by requiring content creators to digitally watermark certain content and make it a crime to fail to identify malicious deepfakes that are related to criminal conduct, incite violence or interfere with elections.

In September, Sens. Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new bipartisan legislation focused on banning the use of deceptive AI-generated content in elections. In October, Sens. Brian Schatz, D-Hawaii, and John Kennedy, R-La., introduced the bipartisan AI Labeling Act of 2023, which would require clear labeling and disclosure on AI-generated content and chatbots so consumers are aware when they’re interacting with any product powered by AI.

Meanwhile, the Federal Election Commission has been asked to establish a new rule requiring political campaigns and groups to disclose when their ads include AI-generated content.

With no AI legislation in Congress close to becoming law or garnering significant bipartisan consensus, the White House has pushed to get tech giants to sign voluntary commitments governing AI, which require steps such as watermarking AI-generated content. Adobe, IBM, Nvidia and others are on board. The private commitments backed by the Biden administration are seen as a stopgap.

From Nelson’s point of view, NIST’s work on the creation of AI watermarking standards will “be taking it to another level.” 

“One hopes that CIOs and CTOs will take it up,” she said. “That remains to be seen.”

Appeals Court pauses ban on federal agency contact with social media companies
https://fedscoop.com/court-pauses-ban-on-agency-contact-with-social-media-companies/
July 14, 2023

The DOJ asked the appeals court for a 10-day stay, at minimum, while it considers bringing the case to the Supreme Court.

The Fifth Circuit Court of Appeals on Friday temporarily halted a lower court’s ruling that restricts Biden administration and federal government communications with social media companies in relation to controversial content online.

The Justice Department on Monday requested a stay of federal judge Terry Doughty’s ruling last week that U.S. Digital Service administrator Mina Hsiang, Cybersecurity and Infrastructure Security Agency chief Jen Easterly and a number of major federal agencies be restricted from interacting with social media firms for the purpose of discouraging or removing First Amendment-protected speech.

Doughty, a Trump-appointed judge of the U.S. District Court for the Western District of Louisiana, had denied the Justice Department’s prior request for a stay, arguing that his preliminary injunction wasn’t as broad as it appeared and only prohibited contacting social media companies for the purposes of suppressing free speech. The Fifth Circuit ruling temporarily pauses Doughty’s order.

Doughty’s original judgment marked a victory for Republicans, who have accused federal government and White House officials of censorship, while Democrats pushed back on the ruling, arguing that social media platforms have failed to address rampant misinformation.

In its filing to the Fifth Circuit, the Justice Department warned that Doughty’s initial ruling could restrict a broad and essential swath of communications between the government and social media platforms, such as preventing the president from asking platforms to act responsibly regarding misinformation about a natural disaster circulating online.

The DOJ also said that Doughty’s ruling has the potential to stop communications between the government and social media platforms regarding national issues like the fentanyl crisis or the security of federal elections, warning that the ruling could create legal ambiguity leading to “disastrous delays” in responding to misinformation online.

The Justice Department filing indicated that the DOJ would also consider bringing the case to the Supreme Court, and it therefore asked the appeals court for a stay of at least 10 days so the high court could consider an application for a stay.

The Justice Department did not immediately respond to a request for comment.

DOJ petitions 5th Circuit to pause ban on federal agency contact with social media companies
https://fedscoop.com/doj-petitions-for-doughty-ruling-pause/
July 10, 2023

The DOJ has requested a 10-day pause, at minimum, so it can consider asking the Supreme Court to stay the ruling.

The Department of Justice on Monday asked the U.S. Court of Appeals for the Fifth Circuit to pause a controversial legal ruling that puts aggressive restrictions on federal government communications with social media companies, arguing that the preliminary injunction could impede law enforcement efforts to protect national security interests.

The DOJ’s request for a stay responds to federal judge Terry Doughty’s ruling last week that U.S. Digital Service administrator Mina Hsiang, Cybersecurity and Infrastructure Security Agency chief Jen Easterly and a number of major federal agencies be restricted from interacting with social media firms for the purpose of discouraging or removing First Amendment-protected speech.

The judgment marked a victory for Republicans, who have accused federal government and White House officials of censorship, while Democrats pushed back on the ruling, arguing that social media platforms have failed to address rampant misinformation.

“May federal officials respond to a false story on influential social-media accounts with a public statement, or a statement to the platforms hosting the accounts, refuting the story? May they urge the public to trust neither the story nor the platforms that disseminate it?” the Justice Department asked in its request for Doughty’s ruling to be paused for 10 days.

“May they answer unsolicited questions from platforms about whether the story is false if the platforms’ policies call for the removal of falsehoods? No plausible interpretation of the First Amendment would prevent the government from taking such actions, but the injunction could be read to do so.”

Just hours earlier on Monday, Doughty, a Trump-appointed judge of the U.S. District Court for the Western District of Louisiana, denied the Justice Department’s request for a stay, arguing that his preliminary injunction ruling isn’t as broad as it appears and only prohibits contacting social media companies for the purposes of suppressing free speech.

The Justice Department filing indicated that the DOJ would consider seeking emergency action by the Supreme Court if the 5th Circuit rejected its request, and it therefore asked for a stay of at least 10 days so the high court could consider an application for a stay.

In its filing to the fifth circuit, the Justice Department warned that Doughty’s initial ruling could potentially restrict a broad and essential part of communications between the government and social media platforms, like preventing the president from asking platforms to act responsibly regarding misinformation about a natural disaster circulating online.

The DOJ also said that Doughty’s ruling has the potential to stop communications between the government and social media platforms regarding national issues like the fentanyl crisis or the security of federal elections, warning that the ruling could create legal ambiguity leading to “disastrous delays” in responding to misinformation online.

The Justice Department didn’t immediately respond to a request for comment.

New DOD social media policy highlights threat of imposter accounts
https://fedscoop.com/new-dod-social-media-policy-highlights-threat-of-imposter-accounts/
Aug. 16, 2022

The new policy comes amid growing concerns about the threat posed by online disinformation and misinformation.

The Pentagon’s new departmentwide social media policy spells out the need for public affairs officers and other personnel to combat adversaries’ efforts to impersonate DOD officials or hijack their accounts.

The wide-ranging policy document, released Monday, comes amid growing concerns about the threat posed by online disinformation and misinformation.

“Users, malign actors, and adversaries on social media platforms may attempt to impersonate DoD employees and Service members to disrupt online activity, distract audiences from official accounts, discredit DoD information, or manipulate audiences through disinformation campaigns. PA offices managing an [official DOD social media account] must address fake or imposter accounts,” the guidance states.

Steps that public affairs chiefs and social media managers are directed to take include:

• Reporting fake or imposter accounts through the social media platform’s reporting system.
• Establishing local procedures to identify, review and report fake or imposter accounts.
• Notifying operations security officials, cyber operations, counterintelligence elements and the Military Department Counterintelligence Organization of fake or imposter accounts, in accordance with DoDD 5240.06.

“PA chiefs and social media managers must record the reporting of fake or imposter accounts,” the policy states. “PA chiefs or social media managers may need to provide additional information as evidence that the identified account is fake or impersonating a DoD official.”

According to the Pentagon, telltale signs of an imposter account include, but are not limited to:

• The account is not registered as an official DOD account.
• It has very few photos, recently uploaded and reflecting the same date range.
• It has very few followers and comments.
• It sends friend requests to individual users on the platform.
• The account name and photos do not match.
• There are obvious grammatical or spelling errors.
• Key information is missing.
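As a rough illustration of how such signs could be triaged, the sketch below scores an account against that checklist. The field names, equal weights and threshold are hypothetical; the DOD policy itself directs human review and reporting, not automated scoring.

```python
def imposter_score(account: dict) -> int:
    """Count how many of the Pentagon's telltale signs an account exhibits.

    Field names, weights and cutoffs below are illustrative assumptions.
    """
    signals = [
        not account.get("registered_official", False),  # not registered as official
        account.get("photo_count", 0) < 5,              # very few, recent photos
        account.get("follower_count", 0) < 50,          # very few followers/comments
        account.get("sends_friend_requests", False),    # unsolicited friend requests
        account.get("name_photo_mismatch", False),      # name and photos don't match
        account.get("spelling_errors", False),          # obvious grammar/spelling errors
        account.get("missing_key_info", False),         # key profile fields missing
    ]
    return sum(signals)

suspect = {"photo_count": 3, "follower_count": 12, "sends_friend_requests": True}
if imposter_score(suspect) >= 3:  # illustrative threshold
    print("flag for human review and report via the platform's reporting system")
```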

Many official DOD social media accounts include markings indicating that they have been verified, but some don’t. Notably, the new policy does not mandate that all accounts be verified going forward.

“While PA chiefs and social media managers should attempt to have an [official DOD account] recognized as a verified account by the social media platform for all account types, they are not required to do so,” the guidance states.

Another threat highlighted in the new policy is cyber vandalism, a tactic that adversaries could use to hijack an official social media account for nefarious purposes.

The warning comes as the military services, other DOD organizations and high-ranking Pentagon officials, such as the secretary of defense, increasingly use social media platforms like Twitter to provide information to the public and get their messages out to a global audience. But their accounts can be hacked.

“Cyber-vandalism occurs when an outside party, regardless of identity or motive, takes control of an agency communication channel and misdirects it. Incidents may contain information misleading to the public or threatening to an agent of the United States,” according to a U.S. government cyber-vandalism response toolkit posted on Digital.gov.

For example, in 2015, people claiming to be affiliated with ISIS hacked into the social media accounts of U.S. Central Command and posted threatening messages and propaganda videos.

Responding to cyber-vandalism events involving official social media accounts is now the responsibility of multiple officials, according to the new DOD policy, including, but not limited to, public affairs officials, social media account managers, legal advisors, and IT security personnel.

“These key personnel form the response team that must establish incident response procedures, consistent with DoDIs 8500.01 and 8170.01. The response team must exercise and rehearse various scenarios to quickly assess, recover, and respond to an incident. The response team manages the process to ensure all elements of the incident are reported and addressed,” the guidance states.

The response team should deliver an after-action report and conduct an assessment to review, update, or draft procedural tasks, regulations, or policy, it added.

Postal inspectors’ digital intelligence team sometimes acted outside of legal authorities, report says
https://fedscoop.com/postal-inspectors-digital-intelligence-team-sometimes-acted-outside-of-legal-authorities-report-says/
March 30, 2022

USPS’s internet analytics team occasionally used open-source intelligence tools beyond postal inspectors’ law enforcement authorities, according to a watchdog.

An internet intelligence and analytics support team for postal inspectors overstepped its legal authority in some cases, according to the inspector general for the U.S. Postal Service.

The Analytics Team, known until April 2021 as the Internet Covert Operations Program (iCOP), occasionally used open-source intelligence tools beyond the Postal Inspection Service’s legal authorities, and its record-keeping about some of that activity was inadequate, according to the March 25 report by the Office of the Inspector General for the USPS.

As part of their work assisting postal inspectors, the analysts conducted “proactive searches” for publicly available information online that could help root out postal crimes, the report says, but in some cases they used keywords that did not have a “postal nexus” — that is, “an identified connection to the mail, postal crimes, or the security of Postal Service facilities or personnel.”

Postal inspectors told the IG’s office that the keywords — such as “attack” or “destroy” — were meant to provide broad searches that could then be narrowed to a postal nexus. The IG report says the Postal Service’s Office of Counsel should have been more involved in vetting those search terms. Yahoo News first reported on the existence of iCOP in April 2021.
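As a sketch of that narrowing step, the snippet below counts a post as in-scope only when a broad term co-occurs with a postal-nexus term. The nexus terms and data shapes are illustrative assumptions, not drawn from the report.

```python
BROAD_TERMS = {"attack", "destroy"}  # the broad keywords cited in the report
POSTAL_NEXUS_TERMS = {"mail", "postal", "post office", "usps"}  # placeholder nexus terms

def has_postal_nexus(text: str) -> bool:
    """In-scope only if a broad term AND a postal-nexus term both appear."""
    lowered = text.lower()
    return (any(term in lowered for term in BROAD_TERMS)
            and any(term in lowered for term in POSTAL_NEXUS_TERMS))

posts = [
    "planning to attack the rally",               # broad hit, no postal nexus
    "threat to destroy the post office on Main",  # broad hit with a postal nexus
]
print([p for p in posts if has_postal_nexus(p)])
```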

The IG office said it looked at a sampling of cases in early 2021 to reach its conclusions about the keywords. For other areas, it reviewed information available from October 2018 through June 2021. The report says it reviewed 434 instances where postal inspectors asked for analytical support from the team. Most of those — 72 percent — had a postal nexus.

The IG’s office also said postal inspectors should do more to document the process for requests made of the Analytics Team.

Leaders of the Postal Inspection Service said they “strongly disagree” with the specifics of the report, pointing to examples in federal case law that support its use of the Analytics Team and broadly authorize the kinds of activities cited by the IG’s office.

The IG’s office, in turn, noted that postal inspectors have agreed to many of the report’s recommendations for how the inspector-in-charge for analytics and the Inspection Service’s chief counsel can clarify the process for usage of open-source intelligence and bolster the record-keeping for those tasks.

“Therefore, the OIG considers management’s comments generally responsive to the recommendations in the report,” the IG’s office said.

The report lists several contracts that postal inspectors have with providers of open-source intelligence tools, though it redacts the names of specific companies. The contracted capabilities include:

• Cryptocurrency blockchain analysis.
• Tools for gathering information about internet protocol (IP) addresses.
• Facial recognition tools.
• Monitoring social media for certain keywords.
• Searching social media for information about individuals.

As the IG’s report notes, the Analytics Team is part of the Postal Inspection Service’s Analytics and Cybercrime Program, which “provides investigative, forensic, and analytical support to field divisions and headquarters.”

Postal inspectors are sometimes involved in high-profile cybercrime cases, such as takedowns of dark web markets where customers pay in cryptocurrency for illicit goods that are then shipped through the mail.

DHS sued over social media surveillance of visa holders
https://fedscoop.com/dhs-social-media-surveillance/
Jan. 19, 2021

Naturalization attempts may have been denied based on posts from applicants’ social media connections.

The Department of Homeland Security is being sued for failing to release information on how it monitors social media and uses that data to deny immigration and naturalization attempts, the Center for Democracy & Technology announced Tuesday.

CDT filed the suit last week after Customs and Border Protection and U.S. Citizenship and Immigration Services both ignored its Freedom of Information Act requests for more than a year.

The requests followed reports that border officials denied entry to visa holders based on social media posts from their connections, and the suit follows one from the American Civil Liberties Union in December over DHS, CBP, and Immigration and Customs Enforcement‘s secret purchase of phone location data to track people.

“The public deserves to know how the government scrutinizes social media data when deciding who can enter or stay in the country,” said Avery Gardiner, general counsel for CDT, in the announcement. “Government surveillance has necessary limits, particularly constitutional ones.”

DHS did not respond to a request for comment by the time of publication.

CDT intends to pursue its case against the Trump administration, despite the transition to President-elect Joe Biden’s administration occurring Wednesday.

The nonpartisan nonprofit remains open to negotiating a “reasonable” schedule for fulfilling its FOIA requests, Mana Azarmi, policy counsel at CDT, said in a statement.

“Government monitoring of social media opens the door to discriminatory pretextual denials of benefits and may have the effect of chilling Americans’ speech,” said David Gossett, an attorney with Davis Wright Tremaine representing CDT. “We are holding the government to its FOIA obligations in order to better understand these constitutionally dubious practices.”

DARPA is looking for underground tunnels and Twitter has questions
https://fedscoop.com/darpa-underground-tunnels-rfi-twitter/
Aug. 29, 2019

The agency went viral on Wednesday. Here’s what those underground tunnels are all about.

The Defense Advanced Research Projects Agency (DARPA) went viral Wednesday.

The agency posted a tweet with a link to a request for information seeking — and this is where things start to get interesting — “underground urban tunnels and facilities that may be available to support research and experimentation associated with ongoing and future research initiatives.”

“The ideal space would be a human-made underground environment spanning several city blocks w/ complex layout & multiple stories, including atriums, tunnels & stairwells,” a second tweet read. “Spaces that are currently closed off from pedestrians or can be temporarily used for testing are of interest.”

Denizens of Twitter had thoughts, questions and, naturally, conspiracy theories. The tweet, as of this article’s publication, had received more than 700 retweets and 1,300 likes, which, judging by the engagement on the agency’s other recent tweets, is a lot for DARPA.

DARPA’s very good social media manager encouraged the interaction, too.

“We are definitely not looking for new places to keep all the Demogorgons,” one Twitter user tweeted at the agency, referring to the creatures from the Stranger Things series. “Please. Demogorgons are such a Department of Energy thing,” the agency’s account responded.

So what’s the deal with DARPA and the underground tunnels?

The RFI elaborates that the agency is interested in testing “state-of-the-art in innovative technologies” that can “map, navigate, and search unknown complex subterranean environments to locate objects of interest.” These technologies are relevant for “global security and disaster-related search and rescue missions,” the RFI further states.

Turns out, this is part of the agency’s ongoing Subterranean or “SubT” Challenge, which launched in September 2018 and is intended to wrap up in August 2021. Earlier this month, 11 teams competed in the “tunnel circuit” component of the challenge. Up next is the “urban circuit,” scheduled for February 2020.

“As teams prepare for the SubT Challenge Urban Circuit, the program recognizes it can be difficult for them to find locations suitable to test their systems and sensors,” a DARPA spokesperson told FedScoop in an email. “DARPA issued this RFI in part to help identify potential representative environments where teams may be able to test in advance of the upcoming event.”

If you’ve got a university-owned or commercially managed urban underground tunnel, responses to the RFI are due Aug. 30.

No Demogorgons. Yet.

Census Bureau hopes Siri and Alexa can help it reach hard-to-count populations
https://fedscoop.com/digital-assistants-census-2020-siri-alexa-google/
Aug. 20, 2019

The agency is teaming up with digital assistant companies to build custom 2020-related integrations.

Have you asked Alexa or Siri to recite the weather this week? The Census Bureau hopes you’ll ask your digital assistant of choice about the upcoming 2020 census, too.

As it gears up for the congressionally mandated count, Census is betting that the voice-activated technology may be a way to reach younger and more mobile people. These individuals are part of a hard-to-count population, but they’re also among the “millions” using digital assistants.

And so the bureau said it is teaming up with “market leaders” to program personalized 2020-related responses into Apple‘s Siri, Amazon’s Alexa, Google’s Google Home and the like. Census will provide the content, and the companies will provide the programming expertise.

The voice assistants will be able to answer high-level questions about the 2020 count, like how to respond or what security protocols are in place for census data. The goal is to ensure that users get a full Census Bureau-approved answer instead of just a Wikipedia search result. The bureau wants to provide information about its job openings, too, as hiring ramps up for 2020.
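The bureau hasn’t described the technical plumbing, but conceptually each integration maps a spoken question to a bureau-approved answer. The minimal webhook sketch below shows that shape; the intent names, answers and endpoint are hypothetical, and each vendor’s skill or action framework defines its own request and response schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical bureau-supplied content, keyed by hypothetical intent names.
APPROVED_ANSWERS = {
    "HowToRespondIntent": "You can respond to the 2020 Census online, by phone or by mail.",
    "DataSecurityIntent": "Census responses are confidential and protected by federal law.",
}

class CensusIntentHandler(BaseHTTPRequestHandler):
    """Answers POSTed intents with approved text, never a generic search result."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        answer = APPROVED_ANSWERS.get(
            request.get("intent", ""),
            "Please visit 2020census.gov for more information.",  # safe fallback
        )
        body = json.dumps({"speech": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Try: curl -d '{"intent": "HowToRespondIntent"}' http://localhost:8080
    HTTPServer(("localhost", 8080), CensusIntentHandler).serve_forever()
```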

The integrations have yet to launch.

Digital assistant “skills” aren’t the only modern tech tools the Census Bureau will leverage in 2020. The agency also plans to market the census on social media platforms. And then, of course, there is the much-anticipated rollout of internet self-response.

“These collaborations are providing not only a better customer experience but are also giving the U.S. population the tools and knowledge they need to complete the 2020 Census successfully,” Zack Schwartz, deputy division chief of the bureau’s Center for New Media and Promotion, wrote in a blog post.
