How to verify information
Practical skills for checking what you see online before sharing it
Reverse image search
Find where an image originally appeared to detect reuse out of context or outright fakes.
TinEye
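Under the hood, reverse image search relies on perceptual fingerprints rather than exact file bytes, so recompressed or slightly edited copies still match. A toy sketch of one such fingerprint, a difference hash over a grayscale grid (pure Python; the `dhash` and `hamming` helpers here are illustrative, not any service's actual algorithm):

```python
def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right neighbor. Similar images give similar bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [40, 50, 60]]
brightened = [[15, 25, 35], [45, 55, 65]]  # same structure, shifted brightness
different = [[90, 10, 80], [5, 70, 2]]

print(hamming(dhash(original), dhash(brightened)))  # 0: gradients match
print(hamming(dhash(original), dhash(different)))   # 2: gradients differ
```

Because the hash encodes brightness gradients rather than raw values, uniformly brightening an image leaves its fingerprint unchanged, which is why such hashes survive recompression and filters.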
Check publication dates
Old stories recirculate constantly. Always verify when something was actually published.
Wayback Machine
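The Wayback Machine exposes a public availability API that returns the archived snapshot closest to a given date, which makes date checks scriptable. A minimal sketch (no network call shown; the sample response mirrors the documented JSON shape):

```python
import json
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"  # Wayback Machine availability API

def availability_url(page_url, near="20200101"):
    """Build a query for the archived snapshot closest to a YYYYMMDD date.
    Fetch the resulting URL with any HTTP client to get JSON back."""
    return API + "?" + urlencode({"url": page_url, "timestamp": near})

def closest_snapshot(response_json):
    """Pull the closest snapshot's timestamp out of the API response,
    or None if the page was never archived."""
    snap = json.loads(response_json).get("archived_snapshots", {}).get("closest")
    return snap["timestamp"] if snap else None

# Shape of a real response (abridged):
sample = '{"archived_snapshots": {"closest": {"timestamp": "20191231235959", "available": true}}}'
print(closest_snapshot(sample))  # 20191231235959
```

Comparing that timestamp against the date a story claims to be "breaking" is often enough to expose recirculated old news.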
Source triangulation
Cross-reference claims across multiple independent sources before accepting them as true.
Google Fact Check Explorer
Identifying edited screenshots
Look for font inconsistencies, odd spacing, pixel artifacts, or unusual timestamps that signal tampering.
Bellingcat how-tos
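One of these telltale artifacts, a cloned region pasted elsewhere in the image, can be found mechanically by looking for identical pixel blocks at different positions. A toy "copy-move" check on a tiny grid (real forensic tools work on full images with tolerance for compression noise):

```python
from collections import defaultdict

def duplicated_blocks(pixels, size=2):
    """Find identical size x size pixel blocks at different positions,
    a crude version of the copy-move detection used to spot cloned regions."""
    seen = defaultdict(list)
    rows, cols = len(pixels), len(pixels[0])
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            block = tuple(tuple(pixels[r + i][c:c + size]) for i in range(size))
            seen[block].append((r, c))
    return [locs for locs in seen.values() if len(locs) > 1]

# Toy 4x4 "image" where the top-left 2x2 block was pasted at bottom-right.
img = [
    [1, 2, 9, 8],
    [3, 4, 7, 6],
    [5, 0, 1, 2],
    [9, 9, 3, 4],
]
print(duplicated_blocks(img))  # [[(0, 0), (2, 2)]]
```

Exact-match block hashing only catches naive edits; production tools compare blocks in a transform domain so that re-saving as JPEG does not hide the clone.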
Tracking original sources
Follow the chain backward. Most viral claims trace to a single origin: find it.
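In code terms, tracing an origin often reduces to collecting every sighting of a claim with its timestamp and taking the earliest. A minimal sketch (the URLs are hypothetical):

```python
from datetime import datetime

def likely_origin(sightings):
    """Given (url, iso_timestamp) pairs for every place a claim appears,
    the earliest sighting is the best candidate for the original source."""
    return min(sightings, key=lambda s: datetime.fromisoformat(s[1]))

posts = [
    ("https://big-aggregator.example/repost", "2024-03-02T09:15:00"),
    ("https://smallblog.example/scoop",       "2024-03-01T22:40:00"),
    ("https://viral.example/share",           "2024-03-02T11:05:00"),
]
print(likely_origin(posts)[0])  # the small blog posted first
```

In practice timestamps can be spoofed or missing, so this is a starting heuristic: archived copies and quoted-text overlaps help confirm which sighting really came first.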
Reading beyond headlines
Headlines often contradict or exaggerate the article beneath them. Read fully before sharing.
Emotionally manipulative language
Content designed to make you angry or afraid is often engineered to bypass critical thinking.
Recognizing fake experts
Check credentials independently. Titles can be fabricated; institutional affiliations can be misrepresented.
Fact-checking organizations
Established operations with documented methodology and editorial standards
Pulitzer-winning
One of the largest U.S. political fact-checking operations. Run by the Poynter Institute with a transparent ratings system.
Annenberg
Annenberg Public Policy Center project. Strong documented methodology and rigorous sourcing standards.
Viral claims
The go-to for viral rumors, hoaxes, memes, and internet misinformation. Long-running and well-sourced.
IFCN member
Very strong for image and video verification. Reuters publicly documents its fact-checking standards.
UK nonprofit
Excellent UK-based independent nonprofit with transparent process documentation and editorial independence.
Influence ops
Tracks and documents pro-Kremlin disinformation narratives and influence operations across Europe.
Media bias & source reliability
Tools for understanding how outlets frame stories and where they sit on the political spectrum
Side-by-side
Shows left, center, and right coverage of the same story side by side. Useful for identifying framing differences.
Ownership data
Tracks media ownership, political lean, and story blind spots, showing which outlets cover a story and which ignore it.
Education
Strong educational resource for teaching and learning media literacy skills at all levels.
Poynter
Digital and media literacy initiative backed by the Poynter Institute, aimed at teens and young adults.
Misinformation & propaganda research
Organizations studying influence campaigns, bot networks, and coordinated inauthentic behavior
Top research
One of the best groups studying online influence campaigns and platform manipulation. Stanford FSI.
Bot networks
Deep research into influence networks, bot campaigns, and coordinated inauthentic behavior across platforms.
OSINT leader
The gold standard for open-source investigations and verification. Includes extensive free how-to guides.
Training
Major misinformation research and journalist training organization with extensive free resources.
Academic
University of Washington research focused on rumor propagation and online misinformation ecosystems.
Deepfake & AI-generated media detection
Tools for identifying synthetic or manipulated video, audio, and images
AI detection
AI-generated media detection platform for identifying synthetic content across video, audio, and images.
Public tool
Publicly accessible deepfake detection tools designed for journalists and everyday users.
Video auth
Focuses on authenticating video evidence and detecting manipulated media, originally for human rights work.
Verification & OSINT tools
Open-source intelligence tools for tracking, verifying, and investigating online claims
Browser plugin
Excellent browser plugin for verifying videos and images. Breaks videos into keyframes for reverse image searching.
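The keyframe idea can be sketched in a few lines: compare each frame to the previous one and keep those that change sharply, which roughly marks scene cuts. A toy version on lists of grayscale values (illustrative only, not the plugin's actual method):

```python
def keyframes(frames, threshold=50):
    """Pick frames that differ sharply from the previous one, similar in
    spirit to splitting a video into searchable stills. Each frame is a
    flat list of grayscale pixel values."""
    picked = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff > threshold:
            picked.append(i)
    return picked

video = [
    [10, 10, 10, 10],   # scene A
    [12, 11, 10, 9],    # scene A, slight noise
    [200, 200, 5, 5],   # hard cut to scene B
    [198, 201, 6, 4],   # scene B continues
]
print(keyframes(video))  # [0, 2]: one still per scene
```

Each selected still can then be fed to a reverse image search to find where the footage first appeared.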
Spread map
Visualizes how misinformation spreads online, showing networks of sharing and bot amplification in real time.
Essential
Critical for retrieving archived evidence when pages are deleted or quietly edited after publication.
Site ownership
Track domain registration and site ownership; useful for identifying anonymous or deceptive operators.
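Domain lookups of this kind ultimately rest on the WHOIS protocol (RFC 3912), which is simple enough to speak directly: open TCP port 43 and send the domain name followed by CRLF. A minimal client sketch (the Verisign server shown handles .com/.net; the live call is left commented out):

```python
import socket

WHOIS_PORT = 43  # RFC 3912: WHOIS runs over plain TCP

def whois_query(domain):
    """A WHOIS request is just the query string followed by CRLF."""
    return domain.encode("ascii") + b"\r\n"

def whois(domain, server="whois.verisign-grs.com", timeout=10):
    """Fetch the raw registration record for a .com/.net domain.
    Other TLDs are served by other registries' WHOIS hosts."""
    with socket.create_connection((server, WHOIS_PORT), timeout=timeout) as sock:
        sock.sendall(whois_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois_query("example.com"))  # b'example.com\r\n'
# whois("example.com")  # live lookup: check the Registrar and Creation Date fields
```

A domain registered days before a "long-running local newspaper" started posting is a strong red flag, though privacy-proxy registrations mean an anonymous record alone proves little.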
Comprehensive
A massive catalog of open-source intelligence investigative tools, organized by category and use case.
Image search
Reverse image search specialized for detecting where images first appeared and how they've been reused.
Academic & research sources
Peer-reviewed and institutional research on information ecosystems and public knowledge
Standards body
International Fact-Checking Network. Establishes transparency and ethics standards for fact-checkers globally.
Nonpartisan
Strong nonpartisan data and public opinion research on media consumption, trust, and information behavior.
Academic
Research on internet governance, misinformation, and digital rights from Harvard Law School.
Academic
Research on online information spread, network behavior, and how false news propagates compared to truth.
Legal, democracy & accountability
Organizations working on democratic institutions, disinformation law, and political accountability
Legal analysis
Serious legal and national-security analysis of democracy, information operations, and institutional threats.
Anti-authoritarian
Nonpartisan organization working to prevent the erosion of democratic institutions and norms.
Policy research
Research and advocacy on voting rights, democracy, legal accountability, and election security.
Known bad actors & documented operations
State-sponsored and non-state influence operations identified through government indictments, sanctions, and open-source research. All entries are based on publicly documented evidence.
This section covers only publicly documented operations, confirmed through government indictments, official sanctions, or peer-reviewed research by organizations like Graphika, the Stanford Internet Observatory, Mandiant, and EU DisinfoLab. Attribution in influence operations is often complex; confidence levels are noted where relevant.
🇷🇺 Russia
Internet Research Agency (IRA)
U.S. indicted
St. Petersburg-based "troll factory" funded by Yevgeny Prigozhin. Ran coordinated inauthentic social media campaigns targeting U.S. elections from 2014 to 2017, operating fake accounts across Facebook, Twitter, Instagram, and YouTube. Thirteen Russian nationals and three entities were indicted by a U.S. grand jury in February 2018. The IRA was formally shut down in 2023 after the Wagner Group rebellion, but successor operations have continued under new structures.
Operation Doppelganger
EU/UK sanctioned
Run by the Russian companies Struktura and the Social Design Agency (SDA), active from 2022 onward. Clones legitimate news websites, including The Guardian, Der Spiegel, Le Monde, and The Washington Post, to publish pro-Kremlin content. Has also spoofed government websites for NATO, Poland, Ukraine, France, and Germany. Sanctioned by the EU and UK in 2024; the U.S. DOJ filed affidavits against it in September 2024.
Ghostwriter (UNC1151)
Belarus/Russia
Tracked by Mandiant since at least 2017. Linked to Belarusian military intelligence with alignment to Russian interests. Compromises journalist and publisher accounts to post fabricated articles as if written by real reporters, then amplifies them on social media. Has targeted Lithuania, Latvia, Poland, and Germany with anti-NATO narratives and fake quotes from military officials. The EU formally blamed Russia for Ghostwriter in September 2021.
Operation Secondary Infektion
Russia-linked
Documented by the EU DisinfoLab and researchers at the Atlantic Council. Used over 300 websites and platforms to spread fabricated documents and Kremlin-aligned narratives across regional blogging sites, forums, and social networks in Europe. Active for years with a very high volume of content but relatively low engagement, suggesting an automated or semi-automated operation focused on quantity over quality.
Social Design Agency (SDA)
EU sanctioned
Kremlin-linked Russian PR firm behind Operation Doppelganger and other campaigns. Leaked internal documents from 2024 showed the SDA produced nearly 40,000 content units (memes, images, and comments) over four months, targeting France, Poland, Germany, and Ukraine. Sanctioned by the EU.
RT (formerly Russia Today) / Rybar
U.S. sanctioned
RT, Russia's state media outlet, was charged by the U.S. DOJ in 2024 with secretly funding U.S.-based social media influencers to spread Kremlin narratives without disclosure. Rybar, a Russian military-focused Telegram channel, was flagged by the U.S. State Department for conducting malign influence operations targeting U.S. elections; the Department offered up to $10 million for information on its operators.
🇨🇳 China
Spamouflage (Dragonbridge)
PRC-linked
Described by Meta as the largest covert influence operation it has ever disrupted, and linked to Chinese law enforcement. Operates across a vast range of social media platforms and internet forums, pushing pro-China content and attacking critics of Beijing. Named "Spamouflage" for its habit of burying propaganda in spam-like posting patterns to avoid detection. OpenAI also disrupted this network in 2024.
PRC election interference (2024)
ODNI assessed
The Office of the Director of National Intelligence assessed that Chinese actors focused influence efforts on U.S. down-ballot House and Senate races during the 2024 election cycle, amplifying conspiracy theories about specific candidates and undermining confidence in elections. Pro-PRC accounts also published falsified political documents ahead of Taiwan's 2024 elections, including fake DNA tests and fabricated military documents.
🇮🇷 Iran
Storm-2035 / IUVM
DOJ charged
Iran's "Storm-2035" campaign built disinformation websites targeting both liberal and conservative U.S. political groups simultaneously, designed to sow division rather than favor one side. The Iranian International Union of Virtual Media (IUVM) was identified by OpenAI as using AI tools to generate and translate content for influence operations. In September 2024, the U.S. DOJ charged three Iranian nationals as employees of Iran's Islamic Revolutionary Guard Corps for hacking and influence operations targeting the 2024 U.S. election.
🌐 Non-state / Commercial
Indian Chronicles (Srivastava Group)
EU DisinfoLab
Documented by EU DisinfoLab as the largest influence network it had exposed at the time of publication. A 15-year operation run by the New Delhi-based Srivastava Group involving over 260 fake local news sites across 65 countries, resurrected defunct NGOs (including identities of deceased people), and fake think tanks, all designed to discredit Pakistan at the UN Human Rights Council and European Parliament. A notable example of a non-state commercial entity running state-scale influence operations.
Commercial "influence-as-a-service"
Emerging threat
A growing category of private firms that sell coordinated inauthentic behavior to any paying client, government or private. Graphika and the Stanford Internet Observatory have documented multiple such operations. These firms operate troll farms, build fake follower networks, and run astroturfing campaigns for hire. The lines between state-sponsored and commercial operations are increasingly blurred.
BADBOX / BadBotnet: preinfected consumer devices
A supply-chain botnet operation hiding malware inside cheap Android devices before they reach consumers. Confirmed by the FBI, Google, Trend Micro, HUMAN Security, and Shadowserver.
BADBOX is not a traditional influence operation; it is a hardware-level supply chain attack that turns ordinary consumer devices into criminal infrastructure. It matters for media literacy because the same botnet used for ad fraud is also used for fake account creation, residential proxying to launder malicious traffic, and potentially amplifying inauthentic online behavior at scale.
January 2023
The T95 TV box discovery
Security researcher Daniel Milisic purchased a T95 Android TV box from Amazon for $40 and discovered it came pre-loaded with persistent backdoor malware in the firmware, before he had ever turned it on or connected it to the internet. Malwarebytes confirmed the finding: the device was phoning home to a command-and-control server and silently joining an ad fraud botnet. The malware was embedded at the factory level, below the operating system, making it nearly impossible for ordinary users to remove.
Malwarebytes analysis →
May 2023
Lemon Group / Guerrilla malware: 8.9 million devices
Trend Micro researchers, presenting at Black Hat Asia 2023, documented a criminal group called "Lemon Group" that had embedded malware known as "Guerrilla" into approximately 8.9 million Android devices (smartphones, smartwatches, TV boxes, and tablets) across 180 countries and more than 50 device brands. The malware could silently load additional payloads, intercept one-time passwords (OTPs) from SMS texts (enabling account takeover), set up reverse proxies on the infected device, and infiltrate WhatsApp sessions. Lemon Group ran it as a commercial service, selling access to infected devices, SMS interception, and ad fraud capacity to other criminal customers.
Trend Micro / Black Hat Asia report →
2023โ2024
BADBOX 1.0: first disruption
HUMAN Security's Satori team formally named and documented the operation as "BADBOX", a botnet built on backdoored Android Open Source Project (AOSP) devices shipped globally. The German government sinkholed a significant portion of the infrastructure in December 2024, partially disrupting the operation. However, the threat actors adapted quickly.
HUMAN Security Satori report →
March 2025
BADBOX 2.0: 1 million+ devices, four criminal groups
HUMAN Security's Satori team, working with Google, Trend Micro, and Shadowserver, uncovered BADBOX 2.0, a major expansion involving at least four distinct criminal groups: SalesTracker Group, MoYu Group, Lemon Group, and LongTV. The botnet had infected over 1 million devices globally, concentrated in Brazil (37.6%), the United States (18.2%), Mexico, Argentina, and Colombia, with traffic observed from 222 countries and territories. BADBOX 2.0 added a new infection vector: beyond factory-preinstalled malware, devices could now also be infected by downloading trojanized apps from unofficial app marketplaces. Google removed 24 Play Store apps found to be distributing the malware.
HUMAN Security BADBOX 2.0 report →
June 5, 2025
FBI public warning issued
The FBI's Internet Crime Complaint Center (IC3) issued Public Service Announcement I-060525-PSA, formally warning the U.S. public about BADBOX 2.0. The FBI confirmed the botnet compromises TV streaming devices, digital projectors, aftermarket vehicle infotainment systems, digital picture frames, and other IoT products, most manufactured in China. The announcement was coordinated with Google, HUMAN Security, Trend Micro, and the Shadowserver Foundation.
FBI IC3 Public Service Announcement →
July 11, 2025
Google sues 25 Chinese entities: 10 million devices confirmed
Google filed a federal lawsuit against 25 Chinese entities involved in the BADBOX 2.0 operation, revising the confirmed infection count to over 10 million devices. The lawsuit identified four specialized criminal groups within the operation: an Infrastructure Group managing command-and-control servers; a Backdoor Malware Group pre-installing malware at the factory; an Evil Twin Group creating fake versions of legitimate Google Play apps to serve hidden ads; and an Ad Games Group using fraudulent game apps to generate illegitimate ad revenue. Google updated Play Protect to automatically block BADBOX-related apps.
Google lawsuit coverage (The Hacker News) →
What BADBOX-infected devices do
Ad fraud
Opens hidden browser windows and clicks ads in the background without user knowledge, generating fraudulent revenue for the operators.
Residential proxy
Sells access to your home IP address to other criminals, who route malicious traffic through your network, making it look like ordinary residential internet use.
Fake account creation
Creates fraudulent accounts on social media, email platforms, and services, potentially contributing to bot networks and influence operations.
OTP interception
Intercepts one-time passwords sent via SMS, enabling account takeover on platforms like WhatsApp, Facebook, and banking apps.
Silent app installs
Downloads and installs additional malware or apps in the background without user consent, inflating app install counts and enabling further exploitation.
DDoS & malware delivery
Infected devices can be directed to participate in distributed denial-of-service attacks or serve as distribution nodes for additional malware.
⚠️ How to protect yourself
Only buy Android TV/streaming devices from established, recognizable brands sold through official retailers.
Avoid devices advertised as "unlocked," capable of streaming free content, or running apps from unofficial marketplaces.
Check that your Android device is Play Protect certified; uncertified AOSP devices lack Google's security protections.
Never disable Google Play Protect; legitimate apps never require this.
Monitor your home network traffic for unexplained outbound connections. Consider a router with traffic visibility.
If you suspect a device is compromised, disconnect it from your network. Reflashing the firmware may not remove factory-level backdoors.
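The traffic-monitoring advice above can be approximated without special hardware: export (device, destination) pairs from your router or a tool like netstat, then flag destinations you don't recognize. A rough sketch with a hypothetical log (the host names here are invented for illustration):

```python
def suspicious_connections(records, known_hosts):
    """Flag outbound connections to hosts you don't recognize.
    records: (device_name, destination_host) pairs from router logs;
    known_hosts: services you expect your devices to talk to."""
    return sorted({(dev, host) for dev, host in records
                   if host not in known_hosts})

log = [
    ("tv-box", "android.googleapis.com"),
    ("tv-box", "update.unknown-cdn.example"),  # hypothetical unknown host
    ("laptop", "github.com"),
]
trusted = {"android.googleapis.com", "github.com"}
print(suspicious_connections(log, trusted))
# [('tv-box', 'update.unknown-cdn.example')]
```

An allowlist approach produces false positives at first (legitimate CDNs, update servers), but a cheap device that beacons to the same unrecognized host at boot is exactly the pattern described in the BADBOX reports.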
Track this threat
Live data
The Shadowserver Foundation coordinates BADBOX 2.0 sinkholes and publishes infection data. Their dashboard tracks compromised device activity in near-real time by country. Shadowserver shares data with national CERTs and ISPs to enable takedowns.
Primary research
The definitive technical report on BADBOX 2.0 from the Satori Threat Intelligence team. Covers the four criminal groups, infection methods, device models, geographic distribution, and fraud mechanisms.
Official warning
The FBI's official public warning about BADBOX 2.0, including indicators of compromise, affected device categories, and recommended mitigations for home users.
Black Hat 2023
The original Black Hat Asia 2023 research exposing Lemon Group's 8.9 million preinfected device operation. Documents the Guerrilla malware, plugin architecture, SMS interception, and criminal business model.
Origin story
The 2023 report that started it all: a $40 Amazon TV box discovered to contain factory-preinstalled backdoor malware. Accessible technical analysis useful for explaining the threat to non-expert audiences.