
The adrenochrome conspiracy theory is a complex and widely debunked claim that has its roots in various strands of mythology, pseudoscience, disinformation, and misinformation. It is important to approach this topic with critical thinking, understanding that these claims are not supported by credible evidence or scientific research.

Origin and evolution of the adrenochrome theory

The origin of the adrenochrome theory can be traced back to the mid-20th century, but it gained notable prominence in the context of internet culture and conspiracy circles in the 21st century. Initially, adrenochrome was simply a scientific term referring to a chemical compound produced by the oxidation of adrenaline. However, over time, it became entangled in a web of conspiracy theories.

In fiction, the first notable reference to adrenochrome appears in Aldous Huxley’s 1954 work “The Doors of Perception,” where it’s mentioned in passing as a psychotropic substance. Its more infamous portrayal came with Hunter S. Thompson’s 1971 book “Fear and Loathing in Las Vegas,” where adrenochrome is depicted as a powerful hallucinogen. These fictional representations played a significant role in shaping the later conspiracy narratives around the substance.

The conspiracy theory, explained

The modern adrenochrome conspiracy theory posits that a global elite, often linked to high-profile figures in politics, entertainment, and finance, harvests adrenochrome from human victims, particularly children. According to the theory, this substance is used for its supposed anti-aging properties or as a psychedelic drug.

This theory often intertwines with other conspiracy theories, such as those related to satanic ritual abuse and global cabal elites. It gained significant traction on internet forums and through social media, particularly among groups inclined towards conspiratorial thinking. The adrenochrome theory is fundamentally antisemitic in its undertones, given its close resemblance to the ancient blood libel trope — used most infamously by the Nazi regime to indoctrinate ordinary Germans into hatred of Jews.

Lack of scientific evidence

From a scientific perspective, adrenochrome is a real compound, but its properties are vastly different from what the conspiracy theory claims. It does not have hallucinogenic effects, nor is there any credible evidence to suggest it possesses anti-aging capabilities. The scientific community recognizes adrenochrome as a byproduct of adrenaline oxidation with limited physiological impact on the human body.

Impact and criticism

The adrenochrome conspiracy theory has been widely criticized for its baseless claims and potential to incite violence and harassment. Experts in psychology, sociology, and information science have pointed out the dangers of such unfounded theories, especially in how they can fuel real-world hostility and targeting of individuals or groups.

Furthermore, the theory diverts attention from legitimate issues related to child welfare and exploitation, creating a sensationalist and unfounded narrative that undermines genuine efforts to address these serious problems.

Psychological and social dynamics

Psychologists have explored why people believe in such conspiracy theories. Factors like a desire for understanding in a complex world, a need for control, and a sense of belonging to a group can drive individuals towards these narratives. Social media algorithms and echo chambers further reinforce these beliefs, creating a self-sustaining cycle of misinformation.

Various legal and social actions have been taken to combat the spread of the adrenochrome conspiracy and similar misinformation. Platforms like Facebook, Twitter, and YouTube have implemented policies to reduce the spread of conspiracy theories, including adrenochrome-related content. Additionally, educational initiatives aim to improve media literacy and critical thinking skills among the public to better discern fact from fiction.

Ultimately, the adrenochrome conspiracy theory is a baseless narrative that has evolved from obscure references in literature and pseudoscience to a complex web of unfounded claims, intertwined with other conspiracy theories. It lacks any credible scientific support and has been debunked by experts across various fields.

The theory’s prevalence serves as a case study in the dynamics of misinformation and the psychological underpinnings of conspiracy belief systems. Efforts to combat its spread are crucial in maintaining a well-informed and rational public discourse.


“Source amnesia” is a psychological phenomenon that occurs when an individual can remember information but cannot recall where the information came from. In the context of media and disinformation, source amnesia plays a crucial role in how misinformation spreads and becomes entrenched in people’s beliefs. This overview will delve into the nature of source amnesia, its implications for media consumption, and strategies for addressing it.

Understanding source amnesia

Source amnesia is part of the broader category of memory errors where the content of a memory is dissociated from its source. This dissociation can lead to a situation where individuals accept information as true without remembering or critically evaluating where they learned it. The human brain tends to remember facts or narratives more readily than it does the context or source of those facts, especially if the information aligns with pre-existing beliefs or emotions. This bias can lead to the uncritical acceptance of misinformation if the original source was unreliable but the content is memorable.

Source amnesia in the media landscape

The role of source amnesia in media consumption has become increasingly significant in the digital age. The vast amount of information available online and the speed at which it spreads mean that individuals are often exposed to news, facts, and narratives from myriad sources, many of which might be dubious or outright false. Social media platforms, in particular, exacerbate this problem by presenting information in a context where source credibility is often obscured or secondary to engagement.

Disinformation campaigns deliberately exploit source amnesia. They spread misleading or false information, knowing that once the information is detached from its dubious origins, it is more likely to be believed and shared. This effect is amplified by confirmation bias, where individuals are more likely to remember and agree with information that confirms their pre-existing beliefs, regardless of the source’s credibility.

Implications of source amnesia

The implications of source amnesia in the context of media and disinformation are profound. It can lead to the widespread acceptance of false narratives, undermining public discourse and trust in legitimate information sources. Elections, public health initiatives, and social cohesion can be adversely affected when disinformation is accepted as truth due to source amnesia.

The phenomenon also poses challenges for fact-checkers and educators, as debunking misinformation requires not just presenting the facts but also overcoming the emotional resonance and simplicity of the original, misleading narratives.

Addressing source amnesia

Combating source amnesia and its implications for disinformation requires a multi-pronged approach, focusing on education, media literacy, and critical thinking. Here are some strategies:

  1. Media Literacy Education: Teaching people to critically evaluate sources and the context of the information they consume can help mitigate source amnesia. This includes understanding the bias and reliability of different media outlets, recognizing the hallmarks of credible journalism, and checking multiple sources before accepting information as true.
  2. Critical Thinking Skills: Encouraging critical thinking can help individuals question the information they encounter, making them less likely to accept it uncritically. This involves skepticism about information that aligns too neatly with pre-existing beliefs or seems designed to elicit an emotional response.
  3. Source Citing: Encouraging the practice of citing sources in media reports and social media posts can help readers trace the origin of information. This practice can aid in evaluating the credibility of the information and combat the spread of disinformation.
  4. Digital Platforms’ Responsibility: Social media platforms and search engines play a crucial role in addressing source amnesia by improving algorithms to prioritize reliable sources and by providing clear indicators of source credibility. These platforms can also implement features that encourage users to evaluate the source before sharing information.
  5. Public Awareness Campaigns: Governments and NGOs can run public awareness campaigns highlighting the importance of source evaluation. These campaigns can include guidelines for identifying credible sources and the risks of spreading unverified information.

Source amnesia is a significant challenge in the fight against disinformation, making it easy for false narratives to spread unchecked. By understanding this phenomenon and implementing strategies to address it, society can better safeguard against the corrosive effects of misinformation.

It requires a concerted effort from individuals, educators, media outlets, and digital platforms to ensure that the public remains informed and critical in their consumption of information. This collective action can foster a more informed public, resilient against the pitfalls of source amnesia and the spread of disinformation.


The backfire effect is a cognitive phenomenon that occurs when individuals are presented with information that contradicts their existing beliefs, leading them not only to reject the challenging information but also to further entrench themselves in their original beliefs.

This effect is counterintuitive, as one might expect that presenting factual information would correct misconceptions. However, due to various psychological mechanisms, the opposite can occur, complicating efforts to counter misinformation, disinformation, and the spread of conspiracy theories.

Origin and mechanism

The term “backfire effect” was popularized by researchers Brendan Nyhan and Jason Reifler, who in 2010 conducted studies demonstrating that corrections to false political information could actually deepen an individual’s commitment to their initial misconception. This effect is thought to stem from a combination of cognitive dissonance (the discomfort experienced when holding two conflicting beliefs) and identity-protective cognition (wherein individuals process information in a way that protects their sense of identity and group belonging).

Relation to media, disinformation, echo chambers, and media bubbles

In the context of media and disinformation, the backfire effect is particularly relevant. The proliferation of digital media platforms has made it easier than ever for individuals to encounter information that contradicts their beliefs — but paradoxically, it has also made it easier for them to insulate themselves in echo chambers and media bubbles: environments where their existing beliefs are constantly reinforced and rarely challenged.

Echo chambers refer to situations where individuals are exposed only to opinions and information that reinforce their existing beliefs, limiting their exposure to diverse perspectives. Media bubbles are similar, often facilitated by algorithms on social media platforms that curate content to match users’ interests and past behaviors, inadvertently reinforcing their existing beliefs and psychological biases.

Disinformation campaigns can exploit these dynamics by deliberately spreading misleading or false information, knowing that it is likely to be uncritically accepted and amplified within certain echo chambers or media bubbles. This can exacerbate the backfire effect, as attempts to correct the misinformation can lead to individuals further entrenching themselves in the false beliefs, especially if those beliefs are tied to their identity or worldview.

How the backfire effect happens

The backfire effect happens through a few key psychological processes:

  1. Cognitive Dissonance: When confronted with evidence that contradicts their beliefs, individuals experience discomfort. To alleviate this discomfort, they often reject the new information in favor of their pre-existing beliefs.
  2. Confirmation Bias: Individuals tend to favor information that confirms their existing beliefs and disregard information that contradicts them. This tendency towards bias can lead them to misinterpret or dismiss corrective information.
  3. Identity Defense: For many, beliefs are tied to their identity and social groups. Challenging these beliefs can feel like a personal attack, leading individuals to double down on their beliefs as a form of identity defense.

Prevention and mitigation

Preventing the backfire effect and its impact on public discourse and belief systems requires a multifaceted approach:

  1. Promote Media Literacy: Educating the public on how to critically evaluate sources and understand the mechanisms behind the spread of misinformation can empower individuals to think critically and assess the information they encounter.
  2. Encourage Exposure to Diverse Viewpoints: Breaking out of media bubbles and echo chambers by intentionally seeking out and engaging with a variety of perspectives can reduce the likelihood of the backfire effect by making conflicting information less threatening and more normal.
  3. Emphasize Shared Values: Framing challenging information in the context of shared values or goals can make it less threatening to an individual’s identity, reducing the defensive reaction.
  4. Use Fact-Checking and Corrections Carefully: Presenting corrections in a way that is non-confrontational and, when possible, aligns with the individual’s worldview or values can make the correction more acceptable. Visual aids and narratives that resonate with the individual’s experiences or beliefs can also be more effective than plain factual corrections.
  5. Foster Open Dialogue: Encouraging open, respectful conversations about contentious issues can help to humanize opposing viewpoints and reduce the instinctive defensive reactions to conflicting information.

The backfire effect presents a significant challenge in the fight against misinformation and disinformation, particularly in the context of digital media. Understanding the psychological underpinnings of this effect is crucial for developing strategies to promote a more informed and less polarized public discourse. By fostering critical thinking, encouraging exposure to diverse viewpoints, and promoting respectful dialogue, it may be possible to mitigate the impact of the backfire effect and create a healthier information ecosystem.


The “wallpaper effect” is a phenomenon in media, propaganda, and disinformation where individuals become influenced or even indoctrinated by being continuously exposed to a particular set of ideas, perspectives, or ideologies. This effect is akin to wallpaper in a room, which, though initially noticeable, becomes part of the unnoticed background over time.

The wallpaper effect plays a significant role in shaping public opinion and individual beliefs, often without the conscious awareness of the individuals affected.

Origins and mechanisms

The term “wallpaper effect” stems from the idea that constant exposure to a specific type of media or messaging can subconsciously influence an individual’s perception and beliefs, similar to how wallpaper in a room becomes a subtle but constant presence. This effect is potentiated by the human tendency to seek information that aligns with existing beliefs, known as confirmation bias. It leads to a situation where diverse viewpoints are overlooked, and a singular perspective dominates an individual’s information landscape.

The wallpaper effect, by DALL-E 3

Media and information bubbles

In the context of media, the wallpaper effect is exacerbated by the formation of information bubbles or echo chambers. These are environments where a person is exposed only to opinions and information that reinforce their existing beliefs.

The rise of digital media and personalized content algorithms has intensified this effect, as users often receive news and information tailored to their preferences, further entrenching their existing viewpoints. Even more insidiously, social media platforms tend to earn higher profits when they fill users’ feeds with ideological perspectives they already agree with. More profitable still is the process of tilting them towards more extreme versions of those beliefs — a practice that in other contexts we call “radicalization.”

Role in propaganda and disinformation

The wallpaper effect is a critical tool in propaganda and disinformation campaigns. By consistently presenting a specific narrative or viewpoint, these campaigns can subtly alter the perceptions and beliefs of the target audience. Over time, the repeated exposure to these biased or false narratives becomes a backdrop to the individual’s understanding of events, issues, or groups, often leading to misconceptions or unwarranted biases.

Psychological impact

The psychological impact of the wallpaper effect is profound. It can lead to a narrowing of perspective, where individuals become less open to new information or alternative viewpoints. This effect can foster polarized communities and hyperpartisan politics, where dialogue and understanding between differing viewpoints become increasingly difficult.

Case studies and examples

Historically, authoritarian regimes have used the wallpaper effect to control public opinion and suppress dissent. By monopolizing the media landscape and continuously broadcasting their propaganda, these regimes effectively shaped the public’s perception of reality.

In contemporary times, this effect is also seen in democracies, where partisan news outlets or social media algorithms create a similar, though more fragmented, landscape of information bubbles.

Counteracting the wallpaper effect

Counteracting the wallpaper effect involves a multifaceted approach. Media literacy education is crucial, as it empowers individuals to critically analyze and understand the sources and content of information they consume.

Encouraging exposure to a wide range of viewpoints and promoting critical thinking skills are also essential strategies. Additionally, reforms in digital media algorithms to promote diverse viewpoints and reduce the creation of echo chambers can help mitigate this effect.

Implications for democracy and society

The wallpaper effect has significant implications for democracy and society. It can lead to a polarized public, where consensus and compromise become challenging to achieve. The narrowing of perspective and entrenchment of beliefs can undermine democratic discourse, leading to increased societal divisions and decreased trust in media and institutions.

The wallpaper effect is a critical phenomenon that shapes public opinion and belief systems. Its influence is subtle yet profound, as constant exposure to a specific set of ideas can subconsciously mold an individual’s worldview. Understanding and addressing this effect is essential in promoting a healthy, informed, and open society. Efforts to enhance media literacy, promote diverse viewpoints, and reform digital media practices are key to mitigating the wallpaper effect and fostering a more informed and less polarized public.


A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where much of our information comes from the internet and online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content — be it news, search results, or social media posts — based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
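This feedback loop can be illustrated with a toy simulation: a hypothetical feed repeatedly shows the items closest to its estimate of a user’s leaning, and each click pulls that estimate further toward what was clicked. All the numbers, the ranking rule, and the update rule below are invented for illustration; they are not a description of any real platform’s algorithm.

```python
# Toy model of a personalization feedback loop ("filter bubble").
# Items span an opinion spectrum from -1.0 (one pole) to +1.0 (the other).
# Everything here is an illustrative assumption, not a real ranking system.

items = [i / 10 for i in range(-10, 11)]

user_pref = 0.8   # the user's true leaning (assumed for this sketch)
estimate = 0.0    # the platform's initial, neutral estimate of that leaning
feeds = []

for _ in range(50):
    # The feed shows the five items closest to the current estimate.
    feed = sorted(items, key=lambda x: abs(x - estimate))[:5]
    # The user clicks the feed item nearest their true leaning.
    click = min(feed, key=lambda x: abs(x - user_pref))
    # Each click nudges the platform's estimate toward the clicked item.
    estimate += 0.3 * (click - estimate)
    feeds.append(feed)

early = len({x for f in feeds[:10] for x in f})    # variety shown early on
late = len({x for f in feeds[-10:] for x in f})    # variety shown at the end
print(f"distinct items shown: first 10 rounds = {early}, last 10 rounds = {late}")
```

In this run the feed starts out relatively varied, then locks onto the region around the user’s leaning, and the set of distinct positions shown shrinks (from ten in the first ten rounds to five in the last ten). That narrowing, driven purely by relevance-maximizing updates, is the isolation dynamic Pariser describes.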

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.


Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ tactics ranging from sending threatening messages, spreading rumors online, posting sensitive or derogatory information, and impersonating someone to damage their reputation, to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.


Science denialism has a complex and multifaceted history, notably marked by a significant event in 1953 that set a precedent for the tactics of disinformation widely observed in various spheres today, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced back to a pivotal meeting in December 1953, involving the heads of the four largest American tobacco companies. This meeting was a response to emerging scientific research linking smoking to lung cancer — a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy. This strategy was designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) as an organization to cast doubt on the established science, and prevent the public from knowing about the lethal dangers of smoking.

And it worked — for over 40 years. The public did not reach a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up and Big Tobacco agreed to pay a record-breaking $200 billion under the Tobacco Master Settlement Agreement (MSA) of 1998, after four decades of mercilessly lying to the American people.

smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the “disinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests — whether profit, power, or both.

Over the next 7 decades — up to the present day — this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and perhaps most notably: the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by various industries and groups facing similar scientific challenges to their products or ideologies. For instance, the fossil fuel industry used similar tactics to cast doubt on global warming — leading to the phenomenon of climate change denialism. Chemical manufacturers have disputed science on the harmful effects of certain chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve their fantastic profits once science discovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit one single finding of the scientific community when you could cast doubt on the entire process of science itself — as a way of future-proofing any government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient — despite strong and often overwhelming evidence. This is evident in manufactured “debates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse — as just one aspect of the seeming politicization of absolutely everything in modern life and culture.

Strategies for combating science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking skills among the general public is crucial. This involves not just understanding scientific facts, but also developing an understanding of the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including the discussion of uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.

Read more

Sockpuppets are fake social media accounts that trolls use for deceptive and covert actions, letting them evade accountability for abuse, aggression, death threats, doxxing, and other criminal acts against their targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are “sockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The strategic use of sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during crises and elections, and most notably during Russia’s ongoing war of aggression against its neighbor.
  3. The United States: The 2016 US presidential election saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.
Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
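
One of the simplest signals such detection systems look for is coordination: many supposedly independent accounts posting near-identical text. Below is a minimal sketch of that idea in Python, scoring post pairs with Jaccard similarity over word sets. This is not any platform’s actual algorithm, and the account names and posts are invented; real systems combine many signals (timing, network structure, metadata) with machine learning.

```python
# A crude coordination detector: flag account pairs whose posts are
# near-duplicates, one possible signal of sockpuppet networks.

def jaccard(text_a, text_b):
    """Word-set overlap between two posts, from 0.0 (disjoint) to 1.0 (same words)."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(posts, threshold=0.8):
    """posts: dict mapping account name -> post text.
    Returns account pairs whose posts exceed the similarity threshold."""
    accounts = sorted(posts)
    return [
        (a, b)
        for i, a in enumerate(accounts)
        for b in accounts[i + 1:]
        if jaccard(posts[a], posts[b]) >= threshold
    ]

sample = {
    "patriot_dave_1776": "They are stealing our country vote now",
    "real_susan_usa": "They are stealing our country vote now",
    "ordinary_user": "Looking forward to the weekend hike",
}
print(flag_coordinated(sample))  # flags the two accounts posting identical text
```

Real sockpuppets evade exactly this kind of check by paraphrasing, which is why the arms race between detection and evasion continues.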

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.

Read more

Deep fakes, a term derived from “deep learning” (a subset of AI) and “fake,” refer to highly realistic, AI-generated digital forgeries of real human beings. These sophisticated imitations can be videos, images, or audio clips where the person appears to say or do things they never actually did.

The core technology behind deep fakes is based on machine learning and neural network algorithms. Two AI systems are trained in opposition: one generates the fake content, while the other attempts to detect forgeries. As the detection system identifies flaws, the generator learns from those mistakes, producing increasingly convincing fakes over time.
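
The adversarial dynamic can be illustrated with a deliberately tiny, pure-Python sketch. Real deep-fake systems (generative adversarial networks, or GANs) use deep neural networks; here the “generator” learns just a single number (the mean of the real data) and the “discriminator” keeps a running estimate of what “real” looks like. All names and parameters are illustrative.

```python
import random

def train_toy_gan(real_mean=5.0, steps=5000, lr=0.02, seed=1):
    """Toy adversarial loop: the generator improves only when it gets caught."""
    rng = random.Random(seed)
    gen_mean = 0.0   # generator parameter: mean of the fake samples it emits
    disc_est = 0.0   # discriminator parameter: its estimate of the real mean
    for _ in range(steps):
        real = rng.gauss(real_mean, 1.0)
        fake = rng.gauss(gen_mean, 1.0)
        # Discriminator learns: nudge its estimate toward the real sample.
        disc_est += lr * (real - disc_est)
        # Verdict: the fake is "caught" if it lies farther from the
        # discriminator's estimate than the real sample does.
        caught = abs(fake - disc_est) > abs(real - disc_est)
        if caught:
            # Generator learns from being caught: move toward whatever the
            # discriminator currently considers real.
            gen_mean += lr * (disc_est - gen_mean)
    return gen_mean

print(round(train_toy_gan(), 2))  # ends up near the real mean of 5.0
```

The same feedback loop, scaled up to millions of neural-network parameters and applied to images or audio, is what makes each generation of deep fakes harder to spot than the last.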

Deep fakes in politics

As the technology has become more accessible, it has been used for various purposes, not all of them benign. In the political realm, deep fakes have a potential for significant impact. They’ve been used to create false narratives or manipulate existing footage, making it appear as though a public figure has said or done something controversial or scandalous. This can be particularly damaging in democratic societies, where public opinion heavily influences political outcomes. Conversely, in autocracies, deep fakes can be a tool for propaganda or to discredit opposition figures.

How to identify deep fakes

Identifying deep fakes can be challenging, but there are signs to look out for:

  1. Facial discrepancies: Imperfections in the face-swapping process can result in blurred or fuzzy areas, especially where the face meets the neck and hair. Look for any anomalies in facial expressions or movements that don’t seem natural.
  2. Inconsistent lighting and shadows: AI can struggle to replicate the way light interacts with physical objects. If the lighting or shadows on the face don’t match the surroundings, it could be a sign of manipulation.
  3. Audiovisual mismatches: Often, the audio does not perfectly sync with the video in a deep fake. Watch for delays or mismatches between spoken words and lip movements.
  4. Unusual blinking and breathing patterns: AI can struggle to accurately mimic natural blinking and breathing, leading to unnatural patterns.
  5. Contextual anomalies: Sometimes, the content of the video or the actions of the person can be a giveaway. If it seems out of character or contextually odd, it could be fake.

In democratic societies, the misuse of deep fakes can erode public trust in media, manipulate electoral processes, and increase political polarization. Fake videos can quickly spread disinformation and misinformation, influencing public opinion and voting behavior. Moreover, they can be used to discredit political opponents with false accusations or fabricated scandals.

In autocracies, deep fakes can be a potent tool for state propaganda. Governments can use them to create a false image of stability, prosperity, or unity, or conversely, to produce disinformation campaigns against perceived enemies, both foreign and domestic. This can lead to the suppression of dissent and the manipulation of public perception to support the regime.

Deep fakes with Donald Trump, by Midjourney

Response to deep fakes

The response to the threat posed by deep fakes has been multifaceted. Social media platforms and news organizations are increasingly using AI-based tools to detect and flag deep fakes. There’s also a growing emphasis on digital literacy, teaching the public to critically evaluate the content they consume.

Legal frameworks are evolving to address the malicious use of deep fakes. Some countries are considering legislation that would criminalize the creation and distribution of harmful deep fakes, especially those targeting individuals or designed to interfere in elections.

While deep fakes represent a remarkable technological advancement, they also pose a significant threat to the integrity of information and democratic processes. As this technology evolves, so must our ability to detect and respond to these forgeries. It’s crucial for both individuals and institutions to stay informed and vigilant against the potential abuses of deep fakes, particularly in the political domain. As we continue to navigate the digital age, the balance between leveraging AI for innovation and safeguarding against its misuse remains a key challenge.

Read more


The “deep state” conspiracy theory, particularly as emphasized by supporters of former President Donald Trump, alleges the existence of a hidden, powerful network within the U.S. government working to undermine and oppose Trump’s presidency and agenda. In reality, the epithet is an elaborate way of discrediting nonpartisan civil servants brought into government for their expertise and competence, who typically remain in their posts through presidential transitions regardless of which party occupies the White House.

The deep state gathers in front of the US Capitol, by Midjourney

Origin of the deep state meme

The term “deep state” originated in Turkey in the 1990s, referring to a clandestine network of military officers and their civilian allies who, it was believed, were manipulating Turkish politics. In the American context, the term was popularized during the Trump administration as a meme, evolving to imply a shadowy coalition — echoing other popular conspiracy theories such as the antisemitic global cabal theory — within the U.S. government, including intelligence agencies, the civil service, and other parts of the bureaucracy.

Main claims

  1. Bureaucratic opposition: The theory posits that career government officials, particularly in intelligence and law enforcement agencies, are systematically working against Trump’s interests. This includes alleged sabotage of his policies and leaking information to the media.
  2. Manipulation of information: Proponents believe that these officials manipulate or withhold information to influence government policy and public opinion against Trump.
  3. Alleged connections with other theories: The deep state theory often intersects with other conspiracy theories, like those surrounding the investigation of Russian interference in the 2016 election and the impeachment proceedings against Trump. It suggests these events were orchestrated by the deep state to discredit or destabilize his presidency.

Contextual factors

  1. Political polarization: The rise of the deep state theory is partly attributed to the increasing political polarization in the U.S. It serves as a narrative to explain and rally against perceived opposition within the government.
  2. Media influence: Certain media outlets and social media platforms have played a significant role in propagating this theory. It’s often amplified by commentators who support Trump, contributing to its widespread dissemination among his base.
  3. Trump’s endorsement: Trump himself has referenced the deep state, particularly when discussing investigations into his administration or when responding to criticism from within the government.

Criticism and counterarguments to deep state “theory”

  1. Lack of concrete evidence: Critics argue that the deep state theory lacks substantial evidence. They contend that routine government processes, checks and balances, and the separation of powers are mischaracterized as clandestine operations.
  2. Undermining trust in institutions: There’s concern that such theories undermine public trust in vital governmental institutions, particularly those responsible for national security and law enforcement.
  3. Political tool: Detractors view the deep state concept as a political tool used to dismiss or discredit legitimate investigation and opposition.
Deep state conspiracy theory, as illustrated by Midjourney

Impact on governance and society

  1. Influence on supporters: For many Trump supporters, the deep state theory provides an explanatory framework for understanding his political challenges and defeats. It galvanizes his base by portraying him as an outsider battling corrupt, entrenched interests.
  2. Public trust and conspiracism: The theory contributes to a broader erosion of trust in government and institutions, fostering a climate where conspiratorial thinking becomes more mainstream.
  3. Policy implications: Belief in the deep state can impact policy discussions and decisions, as it frames certain government actions and policies as inherently suspect or malicious.

Comparative perspective

Globally, similar theories exist in various forms, often reflecting local political and historical contexts. They typically emerge in situations where there is a distrust of the political establishment and are used to explain perceived injustices or power imbalances.

The deep state conspiracy theory as espoused by Trump’s MAGA movement plays a significant role in current American political discourse, impacting public perception of government, policy debates, and the broader social and political climate. Its lack of verifiable evidence and potential to undermine democratic institutions make it a dangerous propaganda prop applied recklessly by the current GOP frontrunner for the 2024 nomination.

Books on conspiracy theories

More conspiracy theories

Read more


Old Boomers like Donald Trump and Charles Koch just copied their fascist fathers. Donnie inherited racism and eugenics from Old Fred, while Charlie was indoctrinated in the extremist delusions of the John Birch Society and the pseudoscience economics of the Austrian School acolytes.

They are men with little imagination, who seek to exalt themselves by squishing everyone else down into a mass of un-individuated peons. One of many right-wing Big Lies is that fascism is the opposite of communism — not so. Both are forms of collectivism, in which the masses must be relegated to nothingness by the immense, overwhelming pressures of society — such that a few secular gods of Greatness Thinking may shine above all the rest.

Fascists are Dittoheads

The ethos of “copying” is a signature psychological trait of fundamentalist minds devoid of creativity. Both Trump and Koch have fashioned themselves as carbon copies of Daddy, in true Strict Father Morality style. Thus they seem completely anachronistic in modern times, where children are falling farther and farther from the proverbial trees, ideologically speaking.

Continue reading Fascist fathers are pissed
Read more

Cancel culture refers to the practice of publicly calling out or boycotting individuals, companies, or institutions for behavior that is perceived to be offensive, controversial, or problematic. The goal is to hold these entities accountable for their actions and to pressure them to change their behavior.

This can manifest in various ways, such as social media campaigns, petitions, or protests. The aim of cancel culture is often to create social consequences for the perceived wrongdoing, such as loss of employment, loss of social status, or loss of financial support.

History of cancel culture

The term cancel culture emerged out of the earlier concept of political correctness, and gained popularity in the 2010s alongside the rise of social media. Some scholars and media theorists trace the concept of cancel culture back to even earlier phenomena, such as the boycotts and blacklists of the McCarthyism era in the United States on the right, or the call-out culture of feminist and anti-racist movements on the left.

Cancel culture and political correctness are related in that they both involve social and cultural pressure to conform to certain norms of language and behavior. Political correctness refers to the avoidance of language or actions that may be considered discriminatory, offensive, or insensitive, often with the aim of promoting inclusivity and social justice. Both tend to concern themselves with highlighting language, stereotypes, and assumptions rooted in racism, sexism, and other common forms of bigotry throughout history.

Cancel culture vs. political correctness

In some ways cancel culture can be seen as an extension of political correctness, in that it goes a step further by seeking to hold individuals and entities accountable for violating norms of respect and social justice. The collective power of Facebook, Twitter (aka “X”), and other social media outlets has helped activists organize around ethical, moral, and political issues, and provided new tools for achieving accountability goals, through activities such as public shaming, boycotts, or other forms of social and economic pressure.

In my opinion, the right-wing critique of so-called cancel culture rests on an erroneous conflation of governmental action with collective organizing by private individuals, many of whom are engaged in political activism. Cancel culture is often mentioned in the same breath as censorship, a term whose definition connotes government tyranny and overreach.

Cancel culture vs. censorship

Typically, however, the government is not involved in actual instances of cancel culture; it is merely people exercising collective powers provided by private social media companies. In fact, it seems to me that right-wing policy tends to involve actual censorship, such as Florida governor and 2024 presidential hopeful Ron DeSantis’s “Don’t Say Gay” bill, or the Republican bill introduced in the same state that would require political bloggers to register with the government.

I think it’s important to be discerning in these instances about who is exercising power and why. Is it really a case of government overreach (censorship), or is it simply a group of people reacting appropriately to the continued presence of structural racism, sexism, and many other -isms in modern society, which persist stubbornly after decades and centuries of collective social justice work?

Read more

Ghost skin is short for ghost skinhead. It refers to a white supremacist who hides his racist beliefs in order to blend into wider society, with the goal of remaining undetected and covertly continuing his Nazi agenda.

Over at least the past two decades, white supremacists have been deliberately infiltrating law enforcement. Police work is an attractive career to begin with for someone with controlling tendencies, an interest in power, and often an incentive to use the badge as a shield for their own criminal schemes.

Ghost skins are rebranded Nazis

White supremacy has a long and brutal history in America. This original stain not only hasn’t been washed clean, but is seeping outward once again, expanding through QAnon and conspiracist fear-mongering on Facebook, Twitter, Parler, the -chans, and elsewhere. It’s what resulted in the deadly January 6 siege on the Capitol building, where Congress was counting the electoral college votes and certifying Joe Biden’s presidency.

Part of the larger swell of fascism in this country, ghost skins have allowed neo-Nazis and a motley assortment of thugs, violent criminals, and authoritarian personalities to hide out in the fabric of society, waiting to strike when the time is right. When The Storm comes. When it is time, the President tells them, to take their country back.

Read more

Before we dive into the perils of issue policing, I have to say that it’s heartening to see so many new faces and hear many new voices who may in the past have not explicitly considered themselves “activists,” or who have felt a greater call to stand up against a political administration whose ideologies show every indication of running counter to a constitutional democratic framework.

If that describes you: THANK YOU! You are awesome. And if you’re an Old Hat at this sort of thing, this post is for you too — by way of initiating a civil dialogue with some of the fresh faces you see in your timeline or in your local community who may be exhibiting the following behavior:

Making claims that issue X, Y, or Z is “not important” or “not as important” as issue A, B, or C — which is what we should really be discussing right now.

Here’s why this behavior tends to do more harm than good:

Continue reading Activists: How (and why) to avoid issue policing (especially on Twitter)
Read more

are not merely empowered to separate us from discerning fact and fiction.
They separate us from debate; civic discourse; meaningful conflict;
From coalition-building; compromise; concession.
They separate us from each other.

Communities seem quaint
Common ground, a shifting place
Quicksand beneath one’s feet
We are all swamp things now
The eyes ogle, waiting for us to falter — for sport

Our shelf lives grow ever shorter
While billionaires transfuse the blood of the young
The youth don’t want my mid-life crisis
It bores them so
My tone grates on America’s next greats

Ideologies wage the fifth world war out on the vast placeless social media savannah 
Faux fantastical beasts feast upon felled paper tigers
One can only hope the most outsized egos
Are the biggest dinosaurs
When the meteor comes

Read more