
A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where so much of our information comes from online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content – be it news, search results, or social media posts – based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
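The core mechanic, ranking content by similarity to a user's past engagement, can be illustrated with a minimal sketch. The topics, scores, and data layout below are invented for illustration and do not represent any platform's actual ranking system:

```python
from collections import Counter

def personalized_feed(candidate_posts, click_history, k=3):
    """Rank posts by overlap with topics the user has clicked before.

    A deliberately simplified model of engagement-based ranking:
    the more a topic matches past clicks, the higher it scores,
    so unfamiliar viewpoints quietly fall off the feed.
    """
    topic_affinity = Counter(post["topic"] for post in click_history)
    ranked = sorted(
        candidate_posts,
        key=lambda post: topic_affinity[post["topic"]],
        reverse=True,
    )
    return ranked[:k]

history = [{"topic": "politics_a"}, {"topic": "politics_a"}, {"topic": "sports"}]
candidates = [
    {"id": 1, "topic": "politics_a"},
    {"id": 2, "topic": "politics_b"},  # the challenging viewpoint
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics_a"},
]

feed = personalized_feed(candidates, history)
# politics_b never surfaces: the bubble closes without any explicit censorship
```

Note that nothing in the sketch blocks the opposing viewpoint outright; it simply never scores high enough to be shown, which is exactly the quiet mechanism the term "filter bubble" describes.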

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.


An echo chamber is a metaphorical description of a situation where an individual is encased in a bubble of like-minded information, reinforcing pre-existing views without exposure to opposing perspectives. The concept has gained prominence with the rise of digital and social media, where algorithms personalize user experiences and inadvertently isolate individuals from diverse viewpoints, enabling people to remain cloistered within a closed system that may contain misinformation and disinformation.

The role of digital media and algorithms

Digital platforms and social media leverage algorithms to tailor content that aligns with users’ past behaviors and preferences. This personalization, while enhancing engagement, fosters filter bubbles: closed environments laden with homogeneous information.

Such settings are ripe for the unchecked proliferation of disinformation, as they lack the diversity of opinion necessary for critical scrutiny. The need for critical thinking is greatly diminished when we are only ever exposed to information and beliefs we already agree with.

Disinformation in echo chambers

Echo chambers serve as breeding grounds for disinformation, where false information is designed to mislead and manipulate. In these closed loops, disinformation finds little resistance and is readily accepted and amplified, bolstering existing biases and misconceptions.

We all have psychological traits that make us vulnerable to believing things that aren’t true. Whether sourced via deception, misinterpretation, conspiracy theories, propaganda, or other phenomena, false beliefs are made stickier and harder to debunk when one is surrounded by an echo chamber.

Political polarization exacerbated

Beyond the scale of lone individuals, the isolation facilitated by echo chambers contributes significantly to political polarization. As people become entrenched in their informational silos, the common ground necessary for democratic discourse dwindles. This division not only fosters extremism but also undermines the social cohesion essential for a functioning democracy.

The impact of confirmation bias

Within echo chambers, confirmation bias (the tendency to favor information that corroborates existing beliefs) becomes particularly pronounced. This cognitive bias solidifies ideological positions, making individuals resistant to changing their views, even in the face of contradictory evidence.

The effects of echo chambers also transcend digital boundaries, influencing real-world political landscapes. Political actors can exploit these dynamics to deepen divides, manipulate public opinion, and mobilize support based on misinformation, leading to a polarized and potentially radicalized electorate.

Strategies for mitigation

Combating the challenges posed by echo chambers and disinformation necessitates a comprehensive approach:

  • Media Literacy: Educating the public to critically assess information sources, understand content personalization, and identify sources of biases and disinformation.
  • Responsible Platform Design: Encouraging digital platforms to modify algorithms to promote diversity in content exposure and implement measures against disinformation.
  • Regulatory Interventions: Policymakers may need to step in to ensure digital environments foster healthy public discourse.
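The "responsible platform design" point above can be sketched as a re-ranking step: a hypothetical feed that reserves a minimum number of slots for content outside the user's dominant topic. The function name, thresholds, and data shape are all invented for illustration; real recommender systems are far more complex:

```python
def diversified_feed(ranked_posts, user_top_topic, k=5, min_diverse=2):
    """Re-rank a personalized feed so at least `min_diverse` of the top-k
    slots go to content outside the user's dominant topic.

    A toy version of the "diversity injection" idea: personalization
    still drives the order, but a floor of unfamiliar content is enforced.
    """
    familiar = [p for p in ranked_posts if p["topic"] == user_top_topic]
    diverse = [p for p in ranked_posts if p["topic"] != user_top_topic]

    picked_diverse = diverse[:min_diverse]
    remaining = k - len(picked_diverse)
    return familiar[:remaining] + picked_diverse

ranked = [
    {"id": 1, "topic": "politics_a"},
    {"id": 2, "topic": "politics_a"},
    {"id": 3, "topic": "politics_a"},
    {"id": 4, "topic": "politics_b"},
    {"id": 5, "topic": "science"},
    {"id": 6, "topic": "politics_a"},
]

feed = diversified_feed(ranked, "politics_a", k=4, min_diverse=2)
# two familiar posts plus two outside topics make the cut
```

The design trade-off is visible even in the toy: every diverse slot displaces a post the engagement model preferred, which is why platforms have little commercial incentive to do this without outside pressure.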

Echo chambers, particularly within the digital media landscape, significantly impact the spread of disinformation and political polarization. By reinforcing existing beliefs and isolating individuals from diverse perspectives, they contribute to a divided society. Addressing this issue is critical and requires efforts in education, platform design, and regulation to promote a more informed and cohesive public discourse.


The “lizard people” conspiracy theory is one of the more fantastical narratives that have found a niche within modern conspiracy culture. This theory suggests that shape-shifting reptilian aliens have infiltrated human society to gain power and control. They are often depicted as occupying high positions in government, finance, and industry, manipulating global events to serve their sinister agenda.

Origins and evolution

The roots of the reptilian conspiracy theory can be traced back to a mix of earlier science fiction, mythological tales, and conspiracy theories. However, it was British author David Icke who, in the 1990s, catapulted the idea into the mainstream of conspiracy culture. Icke’s theory combines elements of New Age philosophy, Vedic texts, and a wide array of conspiracy theories, proposing that these reptilian beings are part of a secret brotherhood that has controlled humanity for millennia — a variation on the global cabal conspiracy theory framework that shows up in a lot of places.

The Lizard People conspiracy theory, as illustrated by Midjourney

Icke’s initial ideas were presented in his book “The Biggest Secret” (1999), where he posits that these entities come from the Alpha Draconis star system, hide in underground bases, and are capable of morphing their appearance to mimic human form. His theories incorporate a broad range of historical, religious, and cultural references, reinterpreting them to fit the narrative of reptilian manipulation.

Persistence and appeal

The persistence of the lizard people conspiracy can be attributed to several factors. First, it offers a simplistic explanation for the complexities and injustices of the world. By attributing the world’s evils to a single identifiable source, it provides a narrative that is emotionally satisfying for some, despite its utter lack of evidence.

Second, the theory thrives on the human tendency to distrust authority and the status quo. In times of social and economic upheaval, conspiracy theories offer a form of counter-narrative that challenges perceived power structures.

The Lizard People are bankers too

Third, the advent of the internet and social media has provided a fertile ground for the spread of such ideas. Online platforms allow for the rapid dissemination of conspiracy theories, connecting individuals across the globe who share these beliefs, thus reinforcing their validity within these communities.

Modern culture and society

In modern culture, the lizard people conspiracy theory occupies a peculiar niche. On one hand, it is often the subject of satire and parody, seen as an example of the most outlandish fringe beliefs. Shows, memes, and popular media references sometimes use the imagery of reptilian overlords as a humorous nod to the world of conspiracy theories.

On the other hand, the theory has been absorbed into the larger tapestry of global conspiracy culture, intersecting with other narratives about global elites, alien intervention, and secret societies. This blending of theories creates a complex and ever-evolving mythology that can be adapted to fit various personal and political agendas.

Despite its presence in the digital and cultural landscape, the reptilian conspiracy is widely discredited and rejected by mainstream society and experts. It’s critiqued for its lack of credible evidence, its reliance on anti-Semitic tropes (echoing age-old myths about blood libel and other global Jewish conspiracies), and its potential to fuel mistrust and paranoia.

Current status and impact

Today, the reptilian conspiracy theory exists on the fringes of conspiracy communities. While it has been somewhat overshadowed by newer and more politically charged conspiracies, it remains a staple within the conspiracy theory ecosystem. Its endurance can be seen as a testament to the human penchant for storytelling and the need to find meaning in an often chaotic world.

The Lizard People, young dapper and woke crowd, by Midjourney

The impact of such theories is a double-edged sword. While they can foster a sense of community among believers, they can also lead to social alienation and the erosion of trust in institutions. The spread of such unfounded theories poses challenges for societies, emphasizing the need for critical thinking and media literacy in navigating the complex landscape of modern information.

The lizard people conspiracy theory is a fascinating study in the power of narrative, belief, and the human desire to make sense of the unseen forces shaping our world. While it holds little sway in academic or scientific circles, its evolution and persistence in popular culture underscore the enduring allure of the mysterious and the unexplained.


A “meme” is a term first coined by British evolutionary biologist Richard Dawkins in his 1976 book “The Selfish Gene.” Originally, it referred to an idea, behavior, or style that spreads from person to person within a culture. In the digital age, however, the term has evolved to denote a type of media – often an image with text, but sometimes a video or a hashtag – that spreads rapidly online, typically through social media platforms like Facebook, Twitter/X, Reddit, TikTok, and virtually every other extant platform.

Memes on the digital savannah

In the context of the internet, memes are a form of digital content that encapsulates a concept, joke, or sentiment in a highly relatable and easily shareable format. They often consist of a recognizable image or video, overlaid with humorous or poignant text that pertains to current events, popular culture, or universal human experiences. Memes have become a cornerstone of online communication, offering a way for individuals to express opinions, share laughs, and comment on societal norms.

Grumpy Cat meme: "There are two types of people in this world... and I hate them"

Once primarily a tool of whimsy, amusement, and even uplift, memes in recent years have increasingly been weaponized by trolls and bad actors as part of a broader shift in internet culture toward incivility and exploitation. The days of funny cats have been encroached upon by the racism and antisemitism of Pepe the Frog, beloved patron saint meme of the alt-right. The use of memes to inject cynicism or thinly veiled white supremacy into culture and politics is an unwelcome trend that throws cold water on the more innocent days of meme yore.

Memes as tools of disinformation and information warfare

While memes are still used for entertainment and social commentary, they have also become potent tools for disseminating disinformation and conducting information warfare, both domestically and abroad. This is particularly evident in political arenas where, for instance, American right-wing groups have leveraged memes to spread their ideologies, influence public opinion, and discredit opposition.

  1. Simplicity and Virality: Memes are easy to create and consume, making them highly viral. This simplicity allows for complex ideas to be condensed into easily digestible and shareable content, often bypassing critical analysis from viewers.
  2. Anonymity and Plausible Deniability: The often-anonymous nature of meme creation and sharing allows individuals or groups to spread disinformation without accountability. The humorous or satirical guise of memes also provides a shield of plausible deniability against accusations of spreading falsehoods.
  3. Emotional Appeal: Memes often evoke strong emotional responses, which can be more effective in influencing public opinion than presenting factual information. The American right-wing, among other groups, has adeptly used memes to evoke feelings of pride, anger, or fear, aligning such emotions with their political messages.
  4. Echo Chambers and Confirmation Bias: Social media algorithms tend to show users content that aligns with their existing beliefs, creating echo chambers. Memes that reinforce these beliefs are more likely to be shared within these circles, further entrenching ideologies and sometimes spreading misinformation.
  5. Manipulation of Public Discourse: Memes can be used to distract from important issues, mock political opponents, or oversimplify complex social and political problems. This can skew public discourse and divert attention from substantive policy discussions or critical events.
  6. Targeting the Undecided: Memes can be particularly effective in influencing individuals who are undecided or less politically engaged. Their simplicity and humor can be more appealing than traditional forms of political communication, making them a powerful tool for shaping opinions.

Memes in political campaigns

Memes have been used to discredit candidates or push particular narratives that favor right-wing ideologies. They have also been employed to foster distrust in mainstream media and institutions, promoting alternative, often unfounded narratives that align with right-wing agendas.

Trump QAnon meme: "The Storm is Coming" in Game of Thrones font, shared on Truth Social

While often benign and humorous, memes can also be wielded as powerful tools of disinformation and information warfare. The American right-wing, among other political groups globally, has harnessed the viral nature of memes to influence public opinion, manipulate discourse, and spread their ideologies. As digital media continues to evolve, the role of memes in political and social spheres is likely to grow, making it crucial for consumers to approach them with a critical eye.


Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ various tactics, from sending threatening messages, spreading rumors online, posting sensitive or derogatory information, and impersonating someone to damage their reputation, up to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.
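One crude signal described above, many distinct accounts posting near-identical text in a short window, can be sketched as a naive heuristic. The thresholds and data layout are invented for illustration; real detection systems use far richer features:

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3, window_seconds=3600):
    """Flag message texts posted by many distinct accounts close together.

    `posts` is a list of dicts: {"account": str, "text": str, "ts": int}.
    A text is flagged when at least `min_accounts` different accounts
    post it within `window_seconds`. Thresholds are illustrative only.
    """
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"]].append(p)

    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = [p["ts"] for p in group]
        if len(accounts) >= min_accounts and max(times) - min(times) <= window_seconds:
            flagged.append(text)
    return flagged

posts = [
    {"account": "a1", "text": "Candidate X is a criminal!", "ts": 100},
    {"account": "a2", "text": "Candidate X is a criminal!", "ts": 200},
    {"account": "a3", "text": "Candidate X is a criminal!", "ts": 350},
    {"account": "b1", "text": "Nice weather today", "ts": 400},
]

print(flag_coordinated_posts(posts))
# ['Candidate X is a criminal!']
```

Real campaigns evade exact-duplicate matching with small text variations, which is why production systems compare fuzzy similarity and account metadata rather than literal strings.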

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.


The “repetition effect” is a potent psychological phenomenon and a common propaganda device. This technique operates on the principle that repeated exposure to a specific message or idea increases the likelihood of its acceptance as truth or normalcy by an individual or the public. Its effectiveness lies in its simplicity and its exploitation of a basic human cognitive bias: the more we hear something, the more likely we are to believe it.

Repetition effect, by Midjourney

Historical context

The repetition effect has been used throughout history, but its most notorious use was by Adolf Hitler and the Nazi Party in Germany. Hitler, along with his Propaganda Minister, Joseph Goebbels, effectively employed this technique to disseminate Nazi ideology and promote antisemitism. In his autobiography “Mein Kampf,” Hitler wrote about the importance of repetition in reinforcing the message and ensuring that it reached the widest possible audience. He believed that the constant repetition of a lie would eventually be accepted as truth.

Goebbels echoed this sentiment in a remark often attributed to him: “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” The Nazi regime used this strategy in various forms, including in speeches, posters, films, and through controlled media. The relentless repetition of antisemitic propaganda, the glorification of the Aryan race, and the demonization of enemies played a crucial role in the establishment and maintenance of the Nazi regime.

Psychological basis

The effectiveness of the repetition effect is rooted in cognitive psychology. This bias is known as the “illusory truth effect,” where repeated exposure to a statement increases its perceived truthfulness. The phenomenon is tied to the ease with which familiar information is processed. When we hear something repeatedly, it becomes more fluent to process, and our brains misinterpret this fluency as a signal for truth.
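The mechanism can be caricatured in a few lines of code: a toy model, with invented parameters rather than empirically fitted ones, in which each repetition nudges perceived truth toward a ceiling regardless of whether the statement is actually true:

```python
def perceived_truth(repetitions, prior=0.3, gain=0.15, cap=0.9):
    """Toy model of the illusory truth effect.

    Each exposure moves perceived truth a fraction of the way toward a
    ceiling, mimicking the diminishing-returns boost of processing
    fluency. The numbers are illustrative, not drawn from any study.
    """
    belief = prior
    for _ in range(repetitions):
        belief += gain * (cap - belief)  # fluency rises, belief follows
    return round(belief, 3)

for n in (0, 1, 5, 20):
    print(n, perceived_truth(n))
# belief climbs toward the cap through repetition alone;
# accuracy never enters the model, which is precisely the point
```

The shape of the curve, fast early gains that flatten out, matches the intuition that the first few repetitions of a claim do the most damage.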

Modern era usage

The transition into the modern era saw the repetition effect adapting to new media and communication technologies. In the age of television and radio, political figures and advertisers used repetition to embed messages in the public consciousness. The rise of the internet and social media has further amplified the impact of this technique. In the digital age, the speed and reach of information are unprecedented, making it easier for false information to be spread and for the repetition effect to be exploited on a global scale.

The repetition effect on screens and social media, by Midjourney

Political campaigns, especially in polarized environments, often use the repetition effect to reinforce their messages. The constant repetition of slogans, talking points, and specific narratives across various platforms solidifies these messages in the public’s mind, regardless of their factual accuracy.

Ethical considerations and countermeasures

The ethical implications of using the repetition effect are significant, especially when it involves spreading disinformation or harmful ideologies. It raises concerns about the manipulation of public opinion and the undermining of democratic processes.

To counteract the repetition effect, media literacy and critical thinking are essential. Educating the public about this psychological bias and encouraging skepticism towards repeated messages can help mitigate its influence. Fact-checking and the promotion of diverse sources of information also play a critical role in combating the spread of falsehoods reinforced by repetition.

Repetition effect: A key tool of propaganda

The repetition effect is a powerful psychological tool in the arsenal of propagandists and communicators. From its historical use by Hitler and the fascists to its continued relevance in the digital era, this technique demonstrates the profound impact of repeated messaging on public perception and belief.

While it can be used for benign purposes, such as in advertising or reinforcing positive social behaviors, its potential for manipulation and spreading misinformation cannot be understated. Understanding and recognizing the repetition effect is crucial in developing a more discerning and informed approach to the information we encounter daily.


Shitposting, a term that has seeped into the mainstream of internet culture, is often characterized by the act of posting deliberately provocative, off-topic, or nonsensical content in online communities and on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shit-post is defined by its lack of substantive content, its primary goal being to elicit attention and reactions — whether amusement, confusion, or irritation — from its intended audience. Closely related to trolling, shitposting is one aspect of a broader pantheon of bad faith behavior online.

Shit-poster motivations

The demographic engaging in shit-posting is diverse, cutting across various age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shit-posting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset might employ it to spread disinformation and misinformation, sow discord, and/or harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shit-posting has assumed a significant role in recent elections, especially on platforms like Twitter/X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use this tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shit-posts that are laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases — or exploit them.

Recognition and response

Recognizing shit-posting involves a discerning eye. Key indicators include the use of hyperbole, irony, non-sequiturs, and content that seems outlandishly out of place or context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shit-posting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post. If it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that are disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or choosing not to engage are viable strategies. The key is not to feed into the cycle of provocation and reaction that shit-posting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land -- by Midjourney

Fighting back

Shit-posting, in its many forms, is a complex phenomenon in the digital age. It straddles the line between being a form of modern-day satire and a tool for misinformation, propaganda, and/or cyberbullying. As digital communication continues to evolve, understanding the nuances of shit-posting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shit-posting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, including phenomena like shit-posting, is particularly fascinating and significant in the broader context of digital culture and political discourse.


Climate Change Denial: From Big Tobacco Tactics to Today’s Global Challenge

In the complex narrative of global climate change, one pervasive thread is the phenomenon of climate change denial. This denial isn’t just a refusal to accept the scientific findings around climate change; it is a systematic effort to discredit and cast doubt on environmental realities and the need for urgent action.

Remarkably, the roots of this denial can be traced back to the strategies used by the tobacco industry in the mid-20th century to obfuscate the link between smoking and lung cancer. Fossil fuel companies later adopted the same approach, conspiring to mount a disinformation campaign against the growing scientific consensus on the manmade nature of climate change and to cast doubt on the link between the burning of fossil fuels and the destruction of the planet’s natural ecosystems. The underlying playbook had been succeeding since 1953, when tobacco companies first organized to dispute the science linking smoking to cancer.

climate change and its denial, by Midjourney

Origins in big tobacco’s playbook

The origins of climate change denial lie in a well-oiled public relations machine initially designed by the tobacco industry. When scientific studies began linking smoking to lung cancer in the 1950s, tobacco companies launched an extensive campaign to challenge these findings. Their strategy was not to disprove the science outright but to sow seeds of doubt, suggesting that the research was not conclusive and that more studies were needed. This strategy of manufacturing doubt proved effective in delaying regulatory and public action against tobacco products for more than five decades.

Adoption by climate change deniers

This playbook was later adopted by those seeking to undermine climate science. In the late 20th century, as scientific consensus grew around the human impact on global warming, industries and political groups with a vested interest in maintaining the status quo began to employ similar tactics around lying at scale. They funded research to challenge or undermine climate science, supported think tanks and lobbyists to influence public opinion and policy, and used media outlets to spread a narrative of uncertainty and skepticism.

Political consequences

The political consequences of climate change denial have been profound. In the United States and other countries, it has polarized the political debate over environmental policy, turning what is fundamentally a scientific issue into a partisan one. This politicization has hindered comprehensive national and global policies to combat climate change, as legislative efforts are often stalled by ideological conflicts.

a burning forest of climate change, by Midjourney

Denial campaigns have also influenced public opinion, creating a significant segment of the population that is skeptical of climate science years after overwhelming scientific consensus has been reached, which further complicates efforts to implement wide-ranging environmental reforms.

Current stakes and global impact

Today, the stakes of climate change denial could not be higher. As the world faces increasingly severe consequences of global warming, including extreme weather events, rising sea levels, and disruptions to ecosystems, the need for decisive action becomes more urgent. Yet climate change denial continues to impede progress. By casting doubt on scientific consensus, it hampers efforts to build the broad public support necessary for bold environmental policies that may help thwart or mitigate some of the worst disasters.

Moreover, climate change denial poses a significant risk to developing countries, which are often the most vulnerable to climate impacts but the least equipped to adapt. Denialism in wealthier nations can lead to a lack of global cooperation and support needed to address these challenges comprehensively.

Moving forward: acknowledging the science and embracing action

To effectively combat climate change, it is crucial to recognize the roots and ramifications of climate change denial. Understanding its origins in the Big Tobacco disinformation strategy helps demystify the tactics used to undermine environmental science. It’s equally important to acknowledge the role of political and economic interests in perpetuating this denial — the political network led by oil tycoon Charles Koch has pledged nearly $1 billion in a single election cycle, much of it directed to climate deniers.

A climate change desert, by Midjourney

However, there is a growing global movement acknowledging the reality of climate change and the need for urgent action. From international agreements like the Paris Accord to grassroots activism pushing for change, there is a mounting push against the tide of denial.

Climate change denial, with its roots in the Big Tobacco playbook, poses a significant obstacle to global efforts to address environmental challenges. Its political ramifications have stalled critical policy initiatives, and its ongoing impact threatens global cooperation. As we face the increasing urgency of climate change, acknowledging and countering this denial is crucial for paving the way towards a more sustainable and resilient future.

Read more

Sockpuppets are fake social media accounts used by trolls for deceptive and covert actions, avoiding culpability for abuse, aggression, death threats, doxxing, and other criminal acts against targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are “sockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The Strategic Use of Sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case Studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during times of crisis, elections, or — most notably — during Russia’s ongoing war of aggression against its neighbor nation.
  3. The United States: The 2016 US Presidential elections saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.

Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
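Platform-scale detectors are far more sophisticated, but one core idea behind coordination detection is simple: accounts that post near-identical content in concert are suspicious. Below is a minimal, hypothetical sketch of that idea in plain Python — the function names and the 0.6 threshold are illustrative assumptions, not any platform’s actual API:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two collections of items, from 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_coordinated(accounts, threshold=0.6):
    """Return pairs of accounts whose posted content overlaps more than
    `threshold` -- a crude proxy for coordinated inauthentic behavior.
    `accounts` maps an account name to a list of message fingerprints."""
    return [
        (name_a, name_b)
        for (name_a, posts_a), (name_b, posts_b) in combinations(accounts.items(), 2)
        if jaccard(posts_a, posts_b) >= threshold
    ]
```

Real systems layer posting-time correlation, follower-graph structure, and account metadata on top of content similarity before flagging anything; content overlap alone produces many false positives (e.g., ordinary users sharing the same viral post).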

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.

Read more

Deep fakes, a term derived from “deep learning” (a subset of AI) and “fake,” refer to highly realistic, AI-generated digital forgeries of real human beings. These sophisticated imitations can be videos, images, or audio clips where the person appears to say or do things they never actually did.

The core technology behind deep fakes is based on machine learning and neural network algorithms. Two competing AI systems work in tandem: one generates the fake content, while the other attempts to detect the forgeries. Over time, as the detection system identifies flaws, the generator learns from these mistakes, leading to increasingly convincing fakes.
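This adversarial dynamic can be illustrated with a deliberately tiny, stdlib-only sketch: a two-parameter "generator" tries to produce numbers that look like samples from a real distribution, while a logistic "discriminator" tries to tell real from generated. This is a toy illustration of the training loop, nothing close to a real deepfake model:

```python
import math
import random

random.seed(0)

def sigmoid(t):
    # clamp the input to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, t))))

# Real samples cluster around 4.0. The generator G(z) = a*z + b tries to
# imitate them; the discriminator D(x) = sigmoid(w*x + c) tries to score
# real samples high and generated ones low.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

for _ in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 1)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    w += lr * ((1 - s_r) * x_real - s_f * x_fake)
    c += lr * ((1 - s_r) - s_f)

    # Generator step: ascend log D(fake) -- the detector's verdict is
    # exactly the error signal the forger learns from
    s_f = sigmoid(w * x_fake + c)
    g = (1 - s_f) * w
    a += lr * g * z
    b += lr * g

# Since z has mean 0, b is the mean of generated samples; over training
# it drifts from 0 toward the real mean, pushed by the detector's feedback.
```

The key point the sketch makes concrete: every improvement in the detector is immediately converted into training signal for the forger, which is why deepfake quality keeps ratcheting upward.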

Deep fakes in politics

As the technology has become more accessible, it’s been used for various purposes, not all of them benign. In the political realm, deep fakes have a potential for significant impact. They’ve been used to create false narratives or manipulate existing footage, making it appear as though a public figure has said or done something controversial or scandalous. This can be particularly damaging in democratic societies, where public opinion heavily influences political outcomes. Conversely, in autocracies, deep fakes can be a tool for propaganda or to discredit opposition figures.

How to identify deep fakes

Identifying deep fakes can be challenging, but there are signs to look out for:

  1. Facial discrepancies: Imperfections in the face-swapping process can result in blurred or fuzzy areas, especially where the face meets the neck and hair. Look for any anomalies in facial expressions or movements that don’t seem natural.
  2. Inconsistent lighting and shadows: AI can struggle to replicate the way light interacts with physical objects. If the lighting or shadows on the face don’t match the surroundings, it could be a sign of manipulation.
  3. Audiovisual mismatches: Often, the audio does not perfectly sync with the video in a deep fake. Watch for delays or mismatches between spoken words and lip movements.
  4. Unusual blinking and breathing patterns: AI can struggle to accurately mimic natural blinking and breathing, leading to unnatural patterns.
  5. Contextual anomalies: Sometimes, the content of the video or the actions of the person can be a giveaway. If it seems out of character or contextually odd, it could be fake.
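Of the signs above, the blinking pattern (point 4) is one of the few that is easy to automate, at least crudely. Assuming an upstream face-analysis step has already produced blink timestamps (that detector is not shown here, and the "typical" range below is an illustrative assumption, not a clinical standard), a first-pass screening heuristic might look like:

```python
def blink_rate_anomaly(blink_times, duration_s, lo=8.0, hi=30.0):
    """Flag a clip whose blinks-per-minute fall outside a band of typical
    human rates. The 8-30 blinks/min band is an illustrative assumption.
    `blink_times` is a list of blink timestamps (seconds) from an
    upstream detector; `duration_s` is the clip length in seconds."""
    if duration_s <= 0:
        raise ValueError("duration_s must be positive")
    per_min = len(blink_times) * 60.0 / duration_s
    return per_min < lo or per_min > hi
```

A clip flagged this way is not proof of forgery — stress, lighting, and camera framerate all skew blink detection — but cheap heuristics like this are useful for triaging which videos deserve closer forensic analysis.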

In democratic societies, the misuse of deep fakes can erode public trust in media, manipulate electoral processes, and increase political polarization. Fake videos can quickly spread disinformation and misinformation, influencing public opinion and voting behavior. Moreover, they can be used to discredit political opponents with false accusations or fabricated scandals.

In autocracies, deep fakes can be a potent tool for state propaganda. Governments can use them to create a false image of stability, prosperity, or unity, or conversely, to produce disinformation campaigns against perceived enemies, both foreign and domestic. This can lead to the suppression of dissent and the manipulation of public perception to support the regime.

Deep fakes with Donald Trump, by Midjourney

Response to deep fakes

The response to the threat posed by deep fakes has been multifaceted. Social media platforms and news organizations are increasingly using AI-based tools to detect and flag deep fakes. There’s also a growing emphasis on digital literacy, teaching the public to critically evaluate the content they consume.

Legal frameworks are evolving to address the malicious use of deep fakes. Some countries are considering legislation that would criminalize the creation and distribution of harmful deep fakes, especially those targeting individuals or designed to interfere in elections.

While deep fakes represent a remarkable technological advancement, they also pose a significant threat to the integrity of information and democratic processes. As this technology evolves, so must our ability to detect and respond to these forgeries. It’s crucial for both individuals and institutions to stay informed and vigilant against the potential abuses of deep fakes, particularly in the political domain. As we continue to navigate the digital age, the balance between leveraging AI for innovation and safeguarding against its misuse remains a key challenge.

Read more

republican vs. democrat cage match boxing ring

Buckle up, we’re in for a wild ride. Many of the serious scholars of political history and authoritarian regimes are sounding the alarm bells that, although it is a very very good thing that we got the Trump crime family out of the Oval Office, it is still a very very bad thing for America to have so rapidly tilted towards authoritarianism. How did we get here?! How has hyper partisanship escalated to the point of an attempted coup by 126 sitting Republican House Representatives? How has political polarization gotten this bad?

These are some of the resources that have helped me continue grappling with that question, and with the rapidly shifting landscape of information warfare. How can we understand this era of polarization, this age of tribalism? This outline is a work in progress, and I’m planning to keep adding to this list as the tape keeps rolling.

Right-Wing Authoritarianism

Authoritarianism is both a personality type and a form of government — it operates at both the interpersonal and the societal level. The words authoritarian and fascist are often used interchangeably, but fascism is a more specific type of authoritarianism, and far more historically recent.

America has had flavors of authoritarianism since its founding, and when fascism came along the right-wing authoritarians ate it up — and deeply wanted the United States to be a part of it. Only after they became social pariahs did they change position to support American involvement in World War II — and some persisted even after the attack on Pearl Harbor.

With Project 2025, Trump now openly threatens fascism on America — and sadly, some are eager for it. The psychology behind both authoritarian leaders and followers is fascinating, overlooked, and misunderstood.

Scholars of authoritarianism

  • Hannah Arendt — The Origins of Totalitarianism
  • Bob Altemeyer — The Authoritarians
  • Derrida — the logic of the unconscious; performativity in the act of lying
  • ketman — the psychological concept of concealing one’s true aims, akin to doublethink in Orwell’s 1984; a central theme of Polish dissident Czesław Miłosz’s book The Captive Mind, about intellectual life under totalitarianism during the Communist post-WWII occupation
  • Erich Fromm — coined the term “malignant narcissism” to describe the psychological character of the Nazis. He also wrote extensively about the mindset of the authoritarian follower in his seminal work, Escape from Freedom.
  • Eric Hoffer — his book The True Believer explores the mind of the authoritarian follower, and the appeal of losing oneself in a totalist movement
  • Fascism — elevation of the id as the source of truth; enthusiasm for political violence
  • Tyrants and dictators
  • John Dean — 3 types of authoritarian personality:
    • social dominators
    • authoritarian followers
    • double highs — social dominators who can “switch” to become followers in certain circumstances
  • Loyalty; hero worship
    • Freud — deeply distrustful of hero worship and worried that it indulged people’s needs for vertical authority. He found the archetype of the authoritarian primal father very troubling.
  • Ayn Rand
    • The Fountainhead (1943)
    • Atlas Shrugged (1957)
    • Objectivism ideology
  • Greatness Thinking; heroic individualism
  • Nietzsche — will to power; the Übermensch
  • Richard Hofstadter — The Paranoid Style
  • George Lakoff — moral framing; strict father morality
  • Neil Postman — Amusing Ourselves to Death
  • Anti-Intellectualism
    • Can be disguised as hyper-rationalism (Communism)
  • More authoritarianism books
Continue reading Hyper Partisanship: How to understand American political polarization
Read more

phobia indoctrination, illustrated

Phobia indoctrination is one of the principal ways a charismatic leader will lull potential followers into his thrall, by putting them into a state of perpetual fear and anxiety. Such leaders know, either instinctively or through training (or both), that people can easily be induced into a prolonged state of confusion, and that many people in states of confusion act quite irrationally. Abusers, cult leaders, and other controllers use demagoguery and other tricks to hide in plain sight and continue to accrue power while passing themselves off as harmless or extremely patriotic.

These chaos agents use emotional manipulation and other tactics of emotional predators as a tool of control. They whip followers up into a fear frenzy frequently enough to instill a set of phobia-like instinctual reactions to chosen stimuli. In addition to stoking fears of the enemies at the gates, they also inculcate irrational fears of the consequences of questioning their authority — invoking authoritarianism. Any doubts expressed about the leadership or its doctrine are subject to terrifying negative results. Cults use this formula to wield undue influence over followers, and prevent them from questioning or leaving the group.

Phobia indoctrination is a tool of cults

As part of a larger overall program of brainwashing or mind control, cults and destructive organizations use imaginary extremes (going to hell, being possessed by demons, failing miserably at life, race war, Leftist apocalypse, etc.) to shock followers into refusing to examine any evidence whatsoever. A form of unethical hypnosis, phobia indoctrination can now be carried out on a mass scale thanks to the internet and our massive media apparatus. Be sure to be on the lookout for any cult warning signs in groups and messaging all around you.

Sociopaths and other emotional predators are taking ample advantage of their head start in time and distance over the slow pace of justice. The wielding of fear as a cudgel in American politics has reached a fever pitch: anti-Critical Race Theory hysteria, anti-vaxxers, anti-government and anti-science movements, and Lost Cause-revival zombie MAGA footsoldiers screeching about the “freedom!!!” they wish the government to provide them for persecuting their enemies are merely the tip of the climate-changing iceberg.

phobia indoctrination, illustrated

Phobia indoctrination tactics

Strategies of phobia indoctrination include Repetition and Conditioning, where fears are built through constant exposure; Misinformation and Propaganda, using false information to paint something as dangerous; Utilizing Existing Fears, exaggerating known fears or anxieties; and Social Pressure and Group Dynamics, leveraging social influences to convince others that irrational fears are common.

Other tactics include Authority and Expert Manipulation, where false credentials are used to lend legitimacy; Emotional Manipulation, appealing directly to emotions; Isolation and Control, where a person’s environment is manipulated; and Media Manipulation, using media to provoke fear.


Related to phobia indoctrination:

Cult Dictionary ↗

We had better get familiar with the lexicon and vocabulary of the coming era, so we can fight the creeping scourge of thought control roiling the land.

Jim Jones toasting his cult members with a cup of cyanide, by Midjourney

Disinformation Dictionary ↗

Disinformation is meant to confuse, throw off, distract, polarize, and otherwise create conflict within and between target populations.

Disinformation, by Midjourney

Cult Warning Signs: How to recognize cultish groups ↗

Recognizing cult warning signs can be vital in identifying and understanding the risk before getting involved with a group who may not have your best interests in mind.

cult warning signs, by Midjourney
Read more

Legal statute requiring persons lobbying on behalf of a foreign government or other foreign entity to register with the U.S. government.

Folks like Mike Flynn ran afoul of this law during their time in the US government.

History of FARA

The Foreign Agents Registration Act, often abbreviated as FARA, is a United States law passed in 1938. The purpose of FARA is to ensure that the U.S. government and the people of the United States are informed about the source of information (propaganda) and the identity of people trying to influence U.S. public opinion, policy, and laws on behalf of foreign principals.

The Act requires every person who acts as an agent of foreign principals in a political or quasi-political capacity to make periodic public disclosure of their relationship with the foreign principal. This includes activities, receipts, and disbursements in support of those activities. Disclosure of the required information facilitates evaluation by the government and the American people of the statements and activities of such persons.

The Act is administered and enforced by the FARA Unit of the National Security Division (NSD) of the United States Department of Justice.

FARA does not restrict publishing of materials or viewpoints; rather, it requires agents representing the interests of foreign powers to disclose their relationship with the foreign government and information about related activities and finances.

Originally, FARA was passed in 1938 in response to concerns about German propaganda agents in the United States in the years leading up to World War II, but its usage has evolved over time. The Act has been amended several times, most significantly in 1966 when its scope was narrowed to focus more specifically on agents working in a political context.

Non-compliance with FARA has become a more prominent issue in recent times, with several high-profile investigations and prosecutions related to the Act. The Act received significant media attention during and after the 2016 U.S. Presidential election, when it was invoked in investigations related to foreign interference in the election — particularly Russian election interference.

More on FARA

Learn more about FARA from the Department of Justice.

Read more

Cancel culture refers to the practice of publicly calling out or boycotting individuals, companies, or institutions for behavior that is perceived to be offensive, controversial, or problematic. The goal is to hold these entities accountable for their actions and to pressure them to change their behavior.

This can manifest in various ways, such as social media campaigns, petitions, or protests. The aim of cancel culture is often to create social consequences for the perceived wrongdoing, such as loss of employment, loss of social status, or loss of financial support.

History of cancel culture

The term cancel culture emerged out of the earlier concept of political correctness, and gained popularity in the 2010s alongside the rise of social media. Some scholars and media theorists trace the concept of cancel culture back to even earlier phenomena, such as the boycotts and blacklists of the McCarthyism era in the United States on the right, or the call-out culture of feminist and anti-racist movements on the left.

Cancel culture and political correctness are related in that they both involve social and cultural pressure to conform to certain norms of language and behavior. Political correctness refers to the avoidance of language or actions that may be considered discriminatory, offensive, or insensitive, often with the aim of promoting inclusivity and social justice. Both tend to concern themselves with highlighting language, stereotypes, and assumptions rooted in racism, sexism, and other common forms of bigotry throughout history.

Cancel culture vs. political correctness

In some ways cancel culture can be seen as an extension of political correctness, in that it goes a step further by seeking to hold individuals and entities accountable for violating norms of respect and social justice. The collective power of Facebook, Twitter (aka “X”), and other social media outlets has helped activists organize around ethical, moral, and political issues, and provided new tools for achieving accountability goals, through activities such as public shaming, boycotts, or other forms of social and economic pressure.

In my opinion, the right-wing critique of so-called cancel culture is grounded in an erroneous conflation between governmental action and collective organizing by private individuals and activist groups. Cancel culture is often mentioned in the same breath with censorship, whose definition connotes government tyranny and overreach.

Cancel culture vs. censorship

Typically, however, the government is not involved in actual instances of cancel culture — it is merely people exercising collective powers provided by private social media companies. In fact, it seems to me that right-wing policy tends to involve actual censorship — such as Florida governor and 2024 presidential hopeful Ron DeSantis’s “Don’t Say Gay” bill, or the Republican bill introduced (also in Florida) that would require political bloggers to register with the state.

I think it’s important to be discerning, in these instances, about who is exercising power and why — is it really a case of the government overreaching (censorship), or is it simply a group of people reacting appropriately to the continued presence of structural racism, sexism, and many other -isms in modern society: and stubbornly so, after decades and centuries of collective social justice work?

Read more

hate speech in a town hall

Hate speech is a way of dominating & monopolizing the conversation:

  • It removes the possibility of polite, congenial dialogue.
  • No productive discussion can happen until it is removed, because one party is only pretending to be there for dialogue and is really there only for broadcasting.

Hate speech is a weapon being used to shut down political discourse — under the guise of promoting it.

It’s a kind of false flag operation — a strategy of war disguising itself as “legitimate political discourse.”

Putin and the American right-wing are using the exact same tactics — and this is no accident. It’s not a coincidence Elonely Muskrat is carrying water for Russian dictators and oligarchs — the right-wing as an ideological movement is now global.

It’s also no accident this whole Twitter takeover drama is happening just before the mid-terms. The right-wing needs to inject some juice into the splintering base, some of whom are wavering as the actual (intentionally) obscured vision of the GOP leaks out (i.e. destroy government altogether).

Continue reading GOTV: Elonely Muskrat hate speech edition
Read more