Bots

RT is the Russian state-funded news network formerly known as Russia Today. But what exactly is this network, and why does it matter in our global information landscape?

The Birth of a Propaganda Powerhouse

RT didn’t emerge out of nowhere. Back in 2005, the Russian government launched β€œRussia Today” with a substantial $30 million in state funding. The official mission? To counter what the Kremlin perceived as Western media dominance and improve Russia’s global image.

What’s fascinating is how they approached this mission. Margarita Simonyan, appointed as editor-in-chief at just 25 years old, strategically recruited foreign journalists to give the network an air of international credibility. By 2009, they rebranded to the sleeker β€œRT” β€” a deliberate move to distance themselves from their obvious Russian state origins.

While RT initially focused on cultural diplomacy (showcasing Russian culture and perspectives), its mission shifted dramatically after the 2008 Russia-Georgia war. The network increasingly pivoted toward anti-Western narratives β€” a strategy that continues to this day.

How RT Spreads Disinformation

RT’s playbook is both sophisticated and concerning. The network regularly promotes conspiracy theories about everything from COVID-19 origins to U.S. election fraud. It strategically amplifies divisive issues in Western societies, particularly racial tensions in America.

The coverage of the Ukraine war offers a perfect case study in RT’s propaganda techniques. Their reporting consistently and erroneously:

  • Blames NATO for the conflict
  • Denies Russian war crimes (despite the Hague’s arrest warrant for Putin)
  • Frames the invasion as a β€œspecial operation” to β€œdenazify” Ukraine (led by a Jewish president)

What makes RT particularly effective is its tailored regional messaging. In Africa, they operate β€œAfrican Stream,” a covert platform promoting pro-Russian sentiment. In the Balkans, RT Balkan (based in Serbia) helps circumvent EU sanctions while spreading Kremlin-aligned content. Meanwhile, their Spanish-language expansion targets Latin American audiences with anti-Western narratives.

Beyond Media: Covert Operations

Perhaps most concerning is evidence suggesting RT extends far beyond conventional media operations. U.S. officials have alleged that RT funneled $10 million to pro-Trump influencers ahead of the 2024 election, leading to Department of Justice indictments of RT staff.

The network reportedly recruits social media influencers under fake accounts to obscure Russian involvement. More alarmingly, RT-associated platforms allegedly supply equipment (including drones, radios, and body armor) to Russian forces in Ukraine, with some materials sourced from China.

According to U.S. intelligence assessments, RT hosts a clandestine unit focused on global influence operations β€” blurring the line between media and intelligence work.

Money and Organization

As with any major operation, following the money tells an important story. RT’s annual funding has grown dramatically β€” from $30 million at its founding to $400 million by 2015. For the 2022-2024 period, the Russian government allocated a staggering 82 billion rubles.

The network’s organizational structure is deliberately complex. RT operates under ANO TV-Novosti (a nonprofit founded by RIA Novosti) and Rossiya Segodnya (a state media conglomerate established in 2013). Its subsidiaries include Ruptly (a video agency), Redfish, and Maffick (digital media platforms).

Staying One Step Ahead of Sanctions

Despite being banned in the EU and U.S. following Russia’s 2022 invasion of Ukraine, RT continues to expand its reach in Africa, Latin America, and Serbia. The network has proven remarkably adaptable at circumventing restrictions β€” using proxy outlets like β€œRed” in Germany and RT Balkan in Serbia to bypass sanctions.

The international response has been significant but inconsistent. The U.S. designated RT a foreign agent in 2017, the EU banned it in 2022, and Meta removed RT from its platforms in 2024. The U.S. has also launched campaigns to expose RT’s ties to Russian intelligence and limit its global operations.

Why This Matters

RT exemplifies modern hybrid warfare β€” blending traditional state media with covert influence operations and intelligence activities to advance Kremlin interests globally. Despite sanctions and increasing awareness of its true nature, RT’s adaptability and substantial funding ensure its continued reach.

For those of us concerned about information integrity and democratic resilience, understanding RT’s operations isn’t just academic β€” it’s essential for navigating our increasingly complex media landscape.

Read more

In β€œMeme Wars: The Untold Story of the Online Battles Upending Democracy in America,” researchers Joan Donovan, Emily Dreyfuss, and Brian Friedberg offer a chilling examination of how internet culture has been weaponized to undermine democratic institutions. Far from being a distant academic analysis, this book serves as an urgent warning about the very real dangers facing our democracy in the digital age.

When Internet Jokes Become Political Weapons

Remember when memes were just harmless internet jokes? Those days are long gone. β€œMeme Wars” meticulously documents how these seemingly innocent cultural artifacts have evolved into powerful weapons in a coordinated assault on American democracy β€” a form of information warfare that tears at our very ability to distinguish fantasy from reality, something that Hannah Arendt once warned of as a key tool of authoritarian regimes.

What makes this transformation particularly insidious is how easy it is to dismiss. After all, how could crudely drawn frogs and joke images possibly be a threat to democracy? Yet the authors convincingly demonstrate that this dismissive attitude is precisely what has allowed far-right operatives to wield memes so effectively.

The book reveals how figures like Alex Jones, Milo Yiannopoulos, Nick Fuentes, and Roger Stone have mastered the art of meme warfare. These digital provocateurs understand something that traditional political institutions have been slow to grasp: in today’s media environment, viral content can bypass established gatekeepers and directly shape public opinion at scale.

Meme Wars by Joan Donovan et al

The Digital Radicalization Pipeline

Perhaps the most disturbing aspect of β€œMeme Wars” is its detailed examination of what the authors call the β€œredpill right” and their techniques for radicalizing ordinary Americans. The process begins innocuously enoughβ€”a provocative meme shared by a friend, a YouTube video recommended by an algorithmβ€”but can quickly lead vulnerable individuals down increasingly extreme ideological paths.

This digital radicalization operates through sophisticated emotional manipulation. Content is carefully crafted to trigger outrage, fear, or a sense of belonging to an in-group that possesses hidden truths. Over time, these digital breadcrumbs lead users into alternative information ecosystems that gradually reshape their perception of political reality.

From Online Conspiracy to Capitol Insurrection

β€œMeme Wars” provides what may be the most comprehensive account to date of how online conspiracy theories materialized into physical violence on January 6th, 2021. The authors trace the evolution of the β€œStop the Steal” movement from fringe online forums to mainstream platforms, showing how digital organizing translated into real-world action.

The book presents the Capitol insurrection as the logical culmination of years of digital warfare. Participants like β€œElizabeth from Knoxville” exemplify this new realityβ€”simultaneously acting as insurrectionists and content creators, live-streaming their participation for online audiences even as they engaged in an attempt to overthrow democratic processes.

This fusion of digital performance and physical violence represents something genuinely new and dangerous in American politics. The insurrectionists weren’t just attacking the Capitol; they were creating content designed to inspire others to join their cause.

Inside the Digital War Rooms

What sets β€œMeme Wars” apart from other analyses of digital extremism is the unprecedented access the authors gained to the online spaces where anti-establishment actors develop their strategies. These digital war rooms function as laboratories where messaging is crafted, tested, and refined before being deployed more broadly.

The authors document how these spaces identify potential recruits, gradually expose them to increasingly extreme content, and eventually mobilize them toward political action. This sophisticated recruitment pipeline has proven remarkably effective at growing extremist movements and providing them with dedicated foot soldiers.

The Existential Threat to Democracy

At its core, β€œMeme Wars” is a book about the fundamental challenge digital manipulation poses to democratic governance. By deliberately stirring strong emotions and deepening partisan divides, meme warfare undermines the rational discourse and shared reality necessary for democratic deliberation.

The authors make a compelling case that these tactics represent an existential threat to American democracy. What’s more, the digital warfare techniques developed in American contexts are already being exported globally, representing a worldwide challenge to democratic institutions.

Confronting the Challenge

Perhaps the most important contribution of β€œMeme Wars” is its insistence that we recognize digital threats as real-world dangers. For too long, online extremism has been dismissed as merely virtualβ€”something separate from β€œreal” politics. The events of January 6th definitively shattered that illusion.

While the book doesn’t offer easy solutions, it makes clear that protecting democracy in the digital age will require new approaches from institutions, platforms, and citizens alike. We need digital literacy that goes beyond spotting fake news to understanding how emotional manipulation operates online. We need platforms that prioritize democratic values over engagement metrics. And we need institutions that can effectively counter extremist narratives without amplifying them.

A Must-Read for Democracy’s Defenders

β€œMeme Wars” is not just a political thriller, though it certainly reads like one at times. It is a rigorously researched warning about how extremist movements are reshaping American culture and politics through digital means. For anyone concerned with the preservation of democratic institutions, it should be considered essential reading.

The authors β€” including Joan Donovan, widely known and respected as a foremost scholar on disinformation β€” have performed a valuable service by illuminating the hidden mechanics of digital manipulation. Now it’s up to all of us to heed their warning and work to build democratic resilience in the digital age. The future of our democracy may depend on it.

Read more

AI accelerationism Dictionary illustration

AI accelerationism champions the rapid, unrestricted development of artificial intelligence, rejecting calls for regulation and safety measures in favor of unchecked innovation. Proponents argue that AI holds the key to solving humanity’s greatest challengesβ€”climate change, poverty, diseaseβ€”and even envision a post-human future where intelligence transcends biological limits.

With strong libertarian leanings, the movement prioritizes market-driven progress, believing that government intervention would stifle AI’s transformative potential. Tech billionaires like legendary venture capitalist Marc Andreessen have embraced these ideas, elevating what was once a fringe philosophy into a driving force in the AI industry.

However, AI accelerationism faces fierce criticism for its disregard of ethical considerations, social consequences, and potential existential risks. Detractors warn that unregulated AI development could exacerbate inequality, destabilize economies, and lead to dangerous technological outcomes without proper safeguards.

The movement stands in stark opposition to cautious, ethical AI development advocated by groups like the effective altruism community, setting up a high-stakes ideological battle over the future of artificial intelligence. Whether one sees AI accelerationism as a path to utopia or a reckless gamble, its growing influence makes it a defining force in the ongoing debate over technology’s role in shaping humanity’s future.

This accelerationism dictionary should help get anyone up to speed on this emerging and dangerous ideology. We’ll keep adding to it over time as the field continues to evolve at breakneck pace.

A dystopian AI hellscape -- one of many potential outcomes of AI accelerationism ideology

Accelerationism Dictionary

A

Accelerate or die: A common slogan in the e/acc movement expressing the belief that technological acceleration is necessary for survival.

Accelerationism: A philosophical and political movement advocating for the acceleration of technological, social, and economic progress. Can exist in left-wing, right-wing, and politically neutral forms.

AGI (Artificial General Intelligence): An artificial intelligence system capable of performing any intellectual task that a human can do.

AI supremacy: The belief or fear that artificial intelligence will surpass human intelligence and capabilities, potentially dominating society, economies, and geopolitical power structures. It is often discussed in the context of global competition for technological dominance.

Continue reading Accelerationism Dictionary
Read more

AI woman with superintelligence

AI accelerationism has become one of Silicon Valley’s most influential and controversial ideological movements. At its core, it represents a radical optimism about artificial intelligence and its potential to reshape human civilization as we know it.

What is AI Accelerationism?

At its most basic, AI accelerationism advocates for the rapid and unrestricted development of artificial intelligence. Unlike those who call for careful regulation and safety measures, accelerationists believe that faster AI development is not just beneficial but crucial for humanity’s future. They reject what they see as excessive caution, often dismissing AI safety advocates as β€œdoomers.”

The Core Beliefs

Technological Solutions to Global Problems

Accelerationists believe that unrestricted technological progress, particularly in AI, holds the key to solving humanity’s greatest challenges. From their perspective, issues like climate change, poverty, and disease are problems that advanced AI could potentially solve if we develop it quickly enough.

Post-Human Future

Perhaps most ambitiously, many e/acc proponents envision a future where the line between human and machine blurs. They embrace the possibility of human-AI integration and the emergence of new forms of consciousness and intelligence.

an AI accelerationism vision of the future

Market-Driven Innovation

The movement has strong libertarian leanings, advocating for minimal government intervention in AI development. They believe that market forces, not regulation, should guide technological progress.

Continue reading What is AI accelerationism?
Read more

Elon Musk as a clown

Longtermism is an outgrowth of Effective Altruism (EA), a social movement developed by philosophers Peter Singer and William MacAskill. It emphasizes the moral importance of trying to shape the far future, and adherents argue that the long-term consequences of our actions far outweigh their short-term effects because of the potential of vast numbers of future lives. In other words, future people will outnumber us at such a scale that, by comparison to this imaginary future universe, our current-day lives are not very important at all.

It has numerous and powerful adherents among the Silicon Valley elite including Trump bromance Elon Musk, tech billionaire Peter Thiel (who spoke at the RNC in 2016), indicted and disgraced crypto trader Sam Bankman-Fried, Twitter and Square founder Jack Dorsey (who is good friends with Elon), OpenAI’s CEO Sam Altman, Ethereum founder (and Thiel fellow) Vitalik Buterin, co-founder of Asana Dustin Moskovitz, and others.

Why longtermism resonates with tech oligarchs

The tech-industrial complex is steeped in the idea of longtermism in part because it aligns so well with so many of their values:

  • technological optimism / techno-utopianism β€” the belief that technology is the solution to all of humanity’s greatest challenges
  • risk-taking mindset β€” venture capital is famous for its high-risk, high-reward mentality
  • Greatness Thinking β€” unwavering devotion to an Ayn Randian worldview in which only two groups exist: a small group of otherworldly titans, and everyone else
  • atomized world β€” social groups and historical context don’t matter much, because one’s personal individualized contributions are what make real impact on the world

The dubious ethics of effective altruism

Although it positions itself high, high above the heady clouds of moral superiority, EA is yet another in a long line of elaborate excuses for ignoring urgent problems we actually face, in favor of β€œreallocating resources” towards some long-distant, predictively β€œbetter” class of people who do not currently exist and will not exist for thousands, millions, or even billions of years. It’s an elaborate excuse framework for β€œbillionaires behaving badly” β€” who claim to be akin to saints or even gods doing the difficult work of β€œsaving humanity,” but in reality are navel-gazing into their vanity projects and stroking each other’s raging narcissism while completely ignoring large, looming actual dangers in the here and now like climate change, systemic inequality, and geopolitical instability, to name a few.

Continue reading Effective Altruism and Longtermism: Twin ideologies driving tech billionaires
Read more

Bitcoin for President, by Midjourney

Kamala Harris should be proud of the race she ran: an almost flawless sprint through the tape, with a scant 108 days to make her pitch to American voters β€” many of whom complained that they did not know her very well as a candidate.

Disinformation continued relentlessly throughout the race β€” even doubling down when called out.

Not a Mandate

Trump’s lead keeps dropping as California and other western states finish counting their ballots after what seems like an eternity β€” mostly because California accepts ballots postmarked by Election Day, which can add up to 7 days to the final count.

He dropped below 50% and never recovered β€” meaning that more people voted against him than voted for him.

As of the final count, his margin dropped below 1.5% β€” the 4th smallest margin in any popular vote win in the past 100 years.

final vote tallies in the 2024 presidential election

Vote Predictors

  • Education
  • Media Sources
  • Urban vs. Rural

I haven’t had the energy to give this piece the attention it deserves, but I just learned about a feature of Google’s NotebookLM that can generate a podcast between 2 hosts from your uploaded assets. I tested it out with a combined corpus of some of my own thoughts and some of the resources I found insightful.

What NotebookLM came up with was uncannily compelling. It’s something I would consider useful, particularly as a tool for initiating folks less steeped in politics than I am. So I’m posting it here, in part as a signpost regarding where we’re heading β€” whether we like it or not.

What comes next

Where do we go from here?

Continue reading Post-mortem Election 2024 thoughts
Read more

gamergate illustrated by midjourney

The Gamergate controversy soon spread to 4chan, a breeding ground for anonymity and chaos. Here, the narrative morphed into a menacing campaign that took aim at Quinn and other women in the gaming industry. The escalation was not just rapidβ€”it was coordinated, a harbinger of the kind of internet and meme warfare that has since become all too familiar.

II. Targets of Harassment: The Human Cost of Online Fury

What followed was an onslaught of harassment against women at the heart of the gaming industry. Zoe Quinn wasn’t alone in this; Anita Sarkeesian and Brianna Wu also bore the brunt of this vicious campaign. This wasn’t just trolling or mean tweetsβ€”it was a barrage of rape threats, death threats, and doxing attempts, creating a reality where digital assault became a daily occurrence.

Others got caught in the crossfire, tooβ€”individuals like Jenn Frank and Mattie Brice, who dared to defend the victims or criticize Gamergate, found themselves subject to the same malevolent noise. Even Phil Fish, a game developer, saw his private data leaked in a cruel display of digital vigilantism.

III. Nature of the Harassment: When Digital Attacks Go Beyond the Screen

Gamergate painted a harrowing picture of the scope and scale of online harassment. Orchestrated attacks didn’t stop at vitriolic tweets; they extended to doxing, where victims’ personal information was broadcast publicly, and β€œswatting,” a dangerous β€œprank” that involves making false police reports to provoke a SWAT team response.

Platforms like Twitter, 4chan, and its notorious sibling 8chan were the stages upon which this drama played out. Here, an army of β€œsockpuppet” accounts created an overwhelming maelstrom, blurring the lines between dissent and digital terrorism.

Gamergate red-pilled right work to inflict pain, elect Trump

IV. Motivations and Ideology: Misogyny and Political Underpinnings

At its core, Gamergate was more than just a gamers’ revolt; it was a flashpoint in a broader cultural war, defined by misogyny and anti-feminism. This was a resistance against the shifting dynamics within the gaming worldβ€”a refusal to accept the increasing roles women were assuming.

Moreover, Gamergate was entangled with the burgeoning alt-right movement. Figures like Milo Yiannopoulos latched onto the controversy, using platforms like Breitbart News as megaphones for their ideas. Here, Gamergate served as both a symptom and a gateway, introducing many to the alt-right’s narrative of disenchantment and defiance against progressive change.

Gamergate’s Lasting Legacy and the β€œGreat Meme War”

Gamergate wasn’t just a flashpoint in the world of gaming; it was the breeding ground for a new kind of online warfare. The tactics honed during Gamergateβ€”coordinated harassment, the use of memes as cultural weapons, and the manipulation of platforms like Twitter and 4chanβ€”became the playbook for a much larger, more consequential battle: the so-called β€œGreat Meme War” that helped fuel Donald Trump’s 2016 presidential campaign.

The very same troll armies that harassed women in the gaming industry turned their attention toward mainstream politics, using the lessons learned in Gamergate to spread disinformation, amplify division, and create chaos. Memes became more than just jokes; they became political tools wielded with precision, reaching millions and shaping narratives in ways traditional media struggled to keep up with. What began as a seemingly insular controversy in the gaming world would go on to sow the seeds of a far more disruptive force, one that reshaped modern political discourse.

The influence of these tactics is still felt today, as the digital landscape continues to be a battleground where information warfare is waged daily. Gamergate was the first tremor in a cultural earthquake that has redefined how power, politics, and identity are contested in the digital age. As we move forward, understanding its origins and its impact on today’s sociopolitical environment is essential if we hope to navigateβ€”and counterβ€”the dark currents of digital extremism.

In retrospect, Gamergate wasn’t an isolated incident but a prelude, a trial run for the troll armies that would soon storm the gates of political power. Its legacy, while grim, offers critical insights into the fragility and volatility of our online spacesβ€”and the urgent need for vigilance in the face of future campaigns of digital manipulation.

Related topics

Read more

disinformation

Disinformation campaigns in the right-wing media ecosystem and elsewhere often exploit existing dividesβ€”political, social, or culturalβ€”using these cracks in the foundation of society to achieve their aims. Whether the goal is political dominance, economic advantage, or simply the unraveling of trust, disinformation thrives in the chaos it creates. And in today’s digital landscape, it spreads like wildfire, fanning the flames of discord faster than ever before.

But disinformation isn’t just about fake news or conspiracy theories. It’s a full-blown strategy, weaponized by those who understand how to pull the levers of media, technology, and emotion to get what they want. It doesn’t need to be entirely false to do damageβ€”sometimes a well-placed half-truth or a twisted fact is all it takes. The aim is to make us question what’s real and undermine our ability to discern truth from fiction. And this is where vigilance and education come in, arming us with the tools to resist these tactics. In the disinformation dictionary that follows, I’ll define disinformation itself and break down some of the key terms and tactics used to muddy the waters of truth.

Disinformation Dictionary of Psychological Warfare

The cat is well and truly out of the bag in terms of understanding how easily wide swaths of people can be misled into believing total falsehoods and even insane conspiracy theories that have no basis whatsoever in reality. In their passion for this self-righteous series of untruths, they can lose families, jobs, loved ones, respect, and may even be radicalized to commit violence on behalf of an authority figure. It starts with the dissemination of disinformation β€” a practice with a unique Orwellian lexicon all its own, collated in the below disinformation dictionary.

Disinformation is meant to confuse, throw off, distract, polarize, and otherwise create conflict within and between target populations. The spreading of falsehoods is a very old strategy β€” perhaps as old as humankind itself β€” but its mass dissemination through the media was pioneered in the 20th century by the Bolsheviks in the Soviet Union, the Nazis in Germany, Mussolini’s Fascists in Italy, and other authoritarian regimes of the early 1900s through the 1940s.

After World War II and the Allies’ defeat of Hitler, the role of disinformation lived on during the Cold War. The Soviet KGB were infamous for their spycraft and covert infiltration of information flows, while the United States experienced waves of anti-Communist paranoia and hysteria fueled by the spread of conspiracist thinking. Psychologists, social scientists, and others did their best to unpack the horrors revealed by the reign of the Nazi regime with a wellspring of research and critical thought about authoritarian personalities and totalitarianism that continues to this day.

disinformation, illustrated

The John Birch Society rides again

In some ways, we haven’t really moved on yet from the Cold War β€” in fact, some appear not to have moved on since the New Deal and are hellbent on rolling its provisions back, almost 100 years later. The dregs of the John Birch Society β€” an organization famously too cuckoo even for William F. Buckley, who excommunicated them from the conservative wing of the Republican Party β€” live on today in a reconstituted form known as the CNP, or Council for National Policy.

Founded officially in 1981 after almost a decade down in the political trenches radicalizing the right, the CNP is the shadowy organization pulling the strings of many of the puppets in today’s political play. In alliance with other powerful networks including the Koch empire, the NRA, and the Evangelical church, the CNP is the group behind the recent out-of-nowhere hysteria about Critical Race Theory in public schools (where it is not taught).

They are funneling the money of America’s billionaires into absurdist theatrical displays of performance artists who distract America with bread and circuses while the plutocrats make off with the cash in the form of tax cuts, tax breaks, tax carve outs, tax loopholes, tax policy, and other wealth-building sweetheart deals for themselves and their cronies.

A crowd of people consuming disinformation, by Midjourney

The CNP, in partnership with Charles Koch’s massive database of all American voters (and of course, his money), has managed to brainwash the Evangelical flock and various assorted MAGA groups into believing a raft of nonsense, from climate change denial to anti-masking to the Big Lie about the 2020 election and much more.

They have leveraged new political technology to recruit and radicalize new cult members quickly, now at digital scale β€” via QAnon, Fox News, the even more aggressively partisan coverage of Newsmax and OANN, and a fleet of β€œgrassroots” astroturf operations peddling their brand of seditious aspirational theocracy to ruralites like it was going out of style… on accounta it is.

US 2024 elections disinformation

With the 2024 elections now in the U.S. rearview mirror, the impact of disinformation campaigns on American politics is ever more clear. These orchestrated fakeries are becoming more sophisticated and widespread, targeting voters across social media and messaging apps, and even via AI-generated content. These efforts aim to confuse voters, suppress turnout, smear candidates, and undermine trust in the electoral system. In today’s highly polarized environment, disinformation is not just a tool of foreign interference but also a domestic weapon used to influence election outcomes. Understanding these tactics and how they operate is critical for protecting democracy and ensuring a fair election process.

Here is a guide to the main types of election interference disinformation campaigns in progress, so you can be forewarned and forearmed as much as possible:

  • Voter Suppression and Confusion
    False information is often spread about when, where, or how to vote, confusing voters about eligibility or tricking them with fake polling place closures (see: right-wing operative Jacob Wohl, convicted of telecommunications fraud for a voter suppression robocall campaign in MI, NY, PA, IL, and OH in 2020).
  • Candidate Smear Campaigns
    Bad actors fabricate scandals, use manipulated images or videos (β€œdeepfakes”), and spread false claims about candidates to damage their reputations.
  • Foreign Interference
    Nations like Russia, China, and Iran are expected to use fake social media accounts, amplify domestic conspiracy theories, and send targeted messages to influence U.S. elections.
  • Undermining Election Integrity
    Disinformation campaigns spread false claims of widespread voter fraud, misrepresent election security, and attempt to delegitimize results with premature victory declarations or β€œrigged” election claims.

Platforms and Methods

  • Social Media and Messaging Apps
    Disinformation spreads rapidly on platforms like Facebook, Twitter (X), TikTok, WhatsApp, and Telegram, where users share and amplify false narratives.
  • Fake News Websites
    Some websites pose as legitimate news sources but are created to deceive readers with false stories that push specific agendas.
  • AI-Generated Content
    The rise of AI allows for the creation of highly realistic but fake images, videos, and texts, making it harder to distinguish truth from falsehood.

Targeted Communities

  • Communities of Color
    Minority communities are often the focus of disinformation, with tactics designed to exploit shared traumas, concerns, and cultural connections. Misinformation is tailored to specific demographics, often in multiple languages.

Emerging Trends in Disinformation

  • AI-Generated Content
    AI tools are making it easier to create convincing but fake media, posing new challenges for detecting and countering disinformation.
  • Prebunking Efforts
    Governments and organizations are becoming more proactive, working to debunk false narratives before they spread.
  • Cross-Platform Coordination
    Disinformation is coordinated across different platforms, making it harder to detect and stop, as the false narratives hop from one space to another.

Countermeasures

  • Government Agencies
    Federal entities are focused on monitoring foreign interference to safeguard elections.
  • Social Media Content Moderation
    Platforms are increasingly using algorithms and human moderators to identify and remove disinformation; a minimal sketch of one such algorithmic signal follows this list.
  • Fact-Checking and Public Education
    Non-profits and independent groups work to fact-check false claims and educate voters on how to critically assess the information they encounter.
  • Media Literacy Initiatives
    Public awareness campaigns aim to teach people how to recognize and resist disinformation, helping voters make informed decisions.
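
To make the algorithmic side of that moderation slightly more concrete, here is a minimal sketch of one naive signal a platform might compute: flagging bursts of near-identical posts from many distinct accounts within a short time window. This is an illustration under assumptions, not any platform’s real pipeline; the Post type, field names, and thresholds are hypothetical.

```python
# Minimal, illustrative sketch only: one naive coordination signal a
# platform might compute alongside many stronger ones. The Post type,
# field names, and thresholds below are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially edited copies still match."""
    return " ".join(text.lower().split())

def flag_coordinated_bursts(posts, min_accounts=10, window_secs=3600):
    """Return (message, account_count) pairs where many distinct accounts
    posted near-identical text within one time window."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post.text)].append(post)
    flagged = []
    for text, group in by_text.items():
        accounts = {p.account for p in group}
        timestamps = [p.timestamp for p in group]
        if len(accounts) >= min_accounts and max(timestamps) - min(timestamps) <= window_secs:
            flagged.append((text, len(accounts)))
    return flagged
```

In practice a signal like this would be only one weak feature among many, combined with network analysis, account metadata, and human review before any enforcement action.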

Disinformation Definitions Dictionary

This disinformation definition dictionary covers (and uncovers) the terminology and techniques used by disinfo peddlers, hucksters, Zucksters, propagandists, foreign actors, FARA actors, and professional liars of all sorts β€” including confirmation bias, the bandwagon effect, and other psychological soft points they target, attack, and exploit. From trolling to active measures to β€œalternative facts,” we dig into the terminology that makes disinformation tick.

This resource will be added to over time as neologisms are coined to keep up with the shifting landscape of fakes, deep fakes, AI disinformation, and alternative timelines in our near and potentially far future.

To learn even more, be sure to check out the Disinformation Books List:

Read more

Cyberbullying is the use of digital technologies, such as social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ various tactics such as sending threatening messages, spreading rumors online, posting sensitive or derogatory information, or impersonating someone to damage their reputation β€” escalating to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.

Read more

Shit-posting is the practice of posting deliberately provocative, ironic, or low-effort content on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shit-post is defined by its lack of substantive content, its primary goal being to elicit attention and reactions β€” whether amusement, confusion, or irritation β€” from its intended audience. Closely related to trolling, shitposting is one aspect of a broader pantheon of bad faith behavior online.

Shit-poster motivations

The demographic engaging in shit-posting is diverse, cutting across various age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shit-posting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset might employ it to spread disinformation and misinformation, sow discord, and/or harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shit-posting has assumed a significant role in recent elections, especially on platforms like Twitter / X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use this tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shit-posts that are laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases β€” or exploit them.

Recognition and response

Recognizing shit-posting involves a discerning eye. Key indicators include the use of hyperbole, irony, non-sequiturs, and content that seems outlandishly out of place or context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shit-posting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post. If it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that are disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or choosing not to engage are viable strategies. The key is not to feed into the cycle of provocation and reaction that shit-posting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land -- by Midjourney

Fighting back

Shit-posting, in its many forms, is a complex phenomenon in the digital age. It straddles the line between being a form of modern-day satire and a tool for misinformation, propaganda, and/or cyberbullying. As digital communication continues to evolve, understanding the nuances of shit-posting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shit-posting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, including phenomena like shit-posting, is particularly fascinating and significant in the broader context of digital culture and political discourse.

Learn more

Read more

Science denialism spawned a playbook of disinformation widely observed in various spheres today, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced back to a pivotal meeting in December 1953, involving the heads of the four largest American tobacco companies. This meeting was a response to emerging scientific research linking smoking to lung cancer β€” a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy. This strategy was designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) as an organization to cast doubt on the established science, and prevent the public from knowing about the lethal dangers of smoking.

And it worked β€” for over 40 years. The public never formed a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up and Big Tobacco had to pay a record-breaking $200 billion settlement under the Tobacco Master Settlement Agreement (MSA) of 1998, after four decades of mercilessly lying to the American people.

smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the β€œdisinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests β€” whether profit, power, or both.

Over the next 7 decades β€” up to the present day β€” this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and perhaps most notably: the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by various industries and groups facing similar scientific challenges to their products or ideologies. For instance, the fossil fuel industry used similar tactics to cast doubt on global warming β€” leading to the phenomenon of climate change denialism. Chemical manufacturers have disputed science on the harmful effects of certain chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve their fantastic profits once science discovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit one single finding of the scientific community when you could cast doubt on the entire process of science itself β€” as a way of future-proofing any government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient β€” despite strong and often overwhelming evidence. This is evident in manufactured β€œdebates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse β€” as just one aspect of the seeming politicization of absolutely everything in modern life and culture.

Strategies for combatting science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking skills among the general public is crucial. This involves not just understanding scientific facts, but also developing an understanding of the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including the discussion of uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.

Read more

Sockpuppet accounts are used by trolls for deceptive and covert actions, allowing them to avoid culpability for abuse, aggression, death threats, doxxing, and other criminal acts against targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are β€œsockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The Strategic Use of Sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case Studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during times of crisis, elections, or β€” most notably β€” during its own current war of aggression against its neighbor nation.
  3. The United States: The 2016 US Presidential elections saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.

Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
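
As a rough illustration of what such a detection signal might look like (a sketch under assumptions, not any platform’s actual method), the snippet below scores pairs of accounts by how heavily the sets of links they share overlap, using Jaccard similarity. The example accounts and the 0.8 threshold are invented for the demonstration.

```python
# Illustrative sketch, not any platform's actual detector: score account
# pairs by overlap in the links they share. The example accounts and the
# 0.8 threshold are hypothetical; real systems fuse many behavioral signals.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suspicious_pairs(urls_by_account: dict, threshold: float = 0.8):
    """Flag account pairs whose shared-link sets overlap heavily --
    one weak hint that they may belong to the same sockpuppet network."""
    flagged = []
    for (acct_a, urls_a), (acct_b, urls_b) in combinations(urls_by_account.items(), 2):
        score = jaccard(set(urls_a), set(urls_b))
        if score >= threshold:
            flagged.append((acct_a, acct_b, round(score, 2)))
    return flagged

# Hypothetical usage: two accounts pushing identical link sets get flagged.
accounts = {
    "user_a": ["ex.com/1", "ex.com/2", "ex.com/3"],
    "user_b": ["ex.com/1", "ex.com/2", "ex.com/3"],
    "user_c": ["other.org/x"],
}
print(suspicious_pairs(accounts))  # -> [('user_a', 'user_b', 1.0)]
```

Real detection systems fuse many such weak behavioral signals, since any single one is easy for operators to evade.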

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.

Read more

ParadoxBot is an adorable chatbot who will cheerfully inform you about the Dark Arts

Introducing my new AI chatbot, ParadoxBot.

Ask it about conspiracy theories, or narcissism, or cults, or authoritarianism, or fascism, or disinformation β€” to name a few. You can also ask it about things like dark money, economics, history, and many topics at the intersection of political psychology.

It doesn’t index what’s on Foundations (yet) but it has ingested this site and you can essentially chat with the site itself via the ChatGPT-like interface below. Enjoy! And if you love it or hate it, find me on BlueSky (as @doctorparadox) or Mastodon and let me know your thoughts:

Tips for using ParadoxBot

  • Follow general good practice regarding prompt engineering.
  • If you don’t get an answer right away, try rephrasing your question. Even omitting or adding one word sometimes produces good results.
  • Try both broad and specific types of queries.
  • Dig deeper into any areas the bot turns up that sound interesting.
Read more

phobia indoctrination, illustrated

Through phobia indoctrination, a charismatic leader will lull potential followers into his thrall by putting them into a state of perpetual fear and anxiety. Such leaders know, either instinctively or through training (or both), that people can easily be induced into a prolonged state of confusion, and that many people in states of confusion act quite irrationally. Abusers, cult leaders, and other controllers use demagoguery and other tricks to hide in plain sight and continue to accrue power while passing themselves off as harmless or extremely patriotic.

These chaos agents use emotional manipulation and other tactics of emotional predators as a tool of control. They whip followers up into a fear frenzy frequently enough to instill a set of phobia-like instinctual reactions to chosen stimuli. In addition to stoking fears of the enemies at the gates, they also inculcate irrational fears of the consequences of questioning their authority β€” invoking authoritarianism. Any doubts expressed about the leadership or its doctrine are subject to terrifying negative results. Cults use this formula to wield undue influence over followers, and prevent them from questioning or leaving the group.

Phobia indoctrination is a tool of cults

As part of a larger overall program of brainwashing or mind control, cults and destructive organizations use imaginary extremes (going to hell, being possessed by demons, failing miserably at life, race war, Leftist apocalypse, etc.) to shock followers into refusing to examine any evidence whatsoever. A form of unethical hypnosis, phobia indoctrination can now be carried out on a mass scale thanks to the internet and our massive media apparatus. Be sure to be on the lookout for any cult warning signs in groups and messaging all around you.

Sociopaths and other types of emotional predators are taking ample advantage of their head start in time and distance over the slow pace of justice. The wielding of fear as a cudgel in American politics has reached a fever pitch: anti-Critical Race Theory hysteria, anti-vaxxers, anti-government types, anti-science sentiment, and Lost Cause-revival zombie MAGA footsoldiers screeching about the β€œfreedom!!!” they wish the government to provide them for persecuting their enemies are merely the tip of the climate-changing iceberg.

phobia indoctrination, illustrated

Phobia indoctrination tactics

Strategies of phobia indoctrination include Repetition and Conditioning, where fears are built through constant exposure; Misinformation and Propaganda, using false information to paint something as dangerous; Utilizing Existing Fears, exaggerating known fears or anxieties; and Social Pressure and Group Dynamics, leveraging social influences to convince others that irrational fears are common.

Other tactics include Authority and Expert Manipulation, where false credentials are used to lend legitimacy; Emotional Manipulation, appealing directly to emotions; Isolation and Control, where a person’s environment is manipulated; and Media Manipulation, using media to provoke fear.

Phobia indoctrination and cults book list:

Or, support local bookstores instead of Jeff Bezos:

Related to phobia indoctrination:

Cult Dictionary β†—

We had better get familiar with the lexicon and vocabulary of the coming era, so we can fight the creeping scourge of thought control roiling the land.

Jim Jones toasting his cult members with a cup of cyanide, by Midjourney

Disinformation Dictionary β†—

Disinformation is meant to confuse, throw off, distract, polarize, and otherwise create conflict within and between target populations.

Disinformation, by Midjourney

Cult Warning Signs: How to recognize cultish groups β†—

Recognizing cult warning signs can be vital in identifying and understanding the risk before getting involved with a group who may not have your best interests in mind.

cult warning signs, by Midjourney
Read more

The Foreign Agents Registration Act (FARA) is the law that both Mike Flynn and Jared Kushner ran afoul of during their time in the US government.

History of FARA

The Foreign Agents Registration Act, often abbreviated as FARA, is a United States law passed in 1938. The purpose of FARA is to ensure that the U.S. government and the people of the United States are informed about the source of information (propaganda) and the identity of people trying to influence U.S. public opinion, policy, and laws on behalf of foreign principals.

The Act requires every person who acts as an agent of foreign principals in a political or quasi-political capacity to make periodic public disclosure of their relationship with the foreign principal. This includes activities, receipts, and disbursements in support of those activities. Disclosure of the required information facilitates evaluation by the government and the American people of the statements and activities of such persons.

The Act is administered and enforced by the FARA Unit of the National Security Division (NSD) of the United States Department of Justice.

FARA does not restrict publishing of materials or viewpoints; rather, it requires agents representing the interests of foreign powers to disclose their relationship with the foreign government and information about related activities and finances.

Originally, FARA was passed in 1938 in response to concerns about German propaganda agents in the United States in the years leading up to World War II, but its usage has evolved over time. The Act has been amended several times, most significantly in 1966 when its scope was narrowed to focus more specifically on agents working in a political context.

Non-compliance with FARA has become a more prominent issue in recent times, with several high-profile investigations and prosecutions related to the Act. The Act received significant media attention during and after the 2016 U.S. Presidential election, when it was invoked in investigations related to foreign interference in the election β€” particularly Russian election interference.

More on FARA

Learn more about FARA from the Department of Justice.

Read more