Technology

Malware, short for “malicious software,” is any software intentionally designed to cause damage to a computer, server, client, or computer network. This cybersecurity threat encompasses a variety of software types, including viruses, worms, trojan horses, ransomware, spyware, adware, and more. Each type has a different method of infection and damage.

Who uses malware and what for

Malware is utilized by a wide range of actors, from amateur hackers to sophisticated cybercriminals, and even nation-states. The motives can vary greatly:

  • Cybercriminals often deploy malware to steal personal, financial, or business information, which can be used for financial gain through fraud or direct theft.
  • Hacktivists use malware to disrupt services or bring attention to political or social causes.
  • Nation-states and state-sponsored actors might deploy sophisticated malware for espionage and intelligence, to gain strategic advantage, sabotage, or influence geopolitical dynamics.
Malware, illustrated by DALL-E 3

Role in disinformation and geopolitical espionage

Malware plays a significant role in disinformation campaigns and geopolitical espionage. State-sponsored actors might use malware to infiltrate the networks of other nations, steal sensitive information (hacked emails perhaps?), and manipulate or disrupt critical infrastructure. In terms of disinformation, malware can be used to gain unauthorized access to media outlets or social media accounts, spreading false information to influence public opinion or destabilize political situations.

Preventing malware

Preventing malware involves multiple layers of security measures:

  • Educate Users: The first line of defense is often the users themselves. Educating them about the dangers of phishing emails, the risks of clicking suspicious links, and the importance of not downloading or opening files from unknown sources can significantly reduce the risk of malware infections.
  • Regular Software Updates: Keeping all software up to date, including operating systems and antivirus programs, can protect against known vulnerabilities that malware exploits.
  • Use Antivirus Software: A robust antivirus program can detect and remove many types of malware. Regular scans and real-time protection features are crucial.
  • Firewalls: Both hardware and software firewalls can block unauthorized access to your network, which can help prevent malware from spreading.
  • Backups: Regularly backing up important data ensures that, in the event of a malware attack, lost data can be recovered without paying ransoms or losing critical information (a minimal backup-and-verify sketch follows this list).
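
As a concrete illustration of the backup layer above, here is a minimal Python sketch that copies a directory and records SHA-256 checksums so a later restore can be verified. The paths and layout are assumptions for illustration only; a real backup strategy would also include offline or offsite copies.

```python
# Minimal backup-and-verify sketch. SOURCE and DEST are assumed example paths.
import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("important_data")      # assumed directory to protect
DEST = Path("backups/2024-01-01")    # assumed dated backup location

def sha256(path: Path) -> str:
    """Compute a SHA-256 checksum of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, dest: Path) -> None:
    """Copy every file under source to dest and write a checksum manifest."""
    dest.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for file in source.rglob("*"):
        if file.is_file():
            target = dest / file.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file, target)
            manifest[str(file.relative_to(source))] = sha256(target)
    # The manifest lets a later restore be checked file-by-file.
    (dest / "manifest.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    backup(SOURCE, DEST)
```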

Famous malware incidents in foreign affairs

Several high-profile malware incidents have had significant implications in the realm of foreign affairs:

  • Stuxnet: Discovered in 2010, Stuxnet was a highly sophisticated worm that targeted supervisory control and data acquisition (SCADA) systems and was believed to be designed to damage Iran’s nuclear program. It is widely thought to be a cyberweapon developed by the United States and Israel, though neither has confirmed involvement.
  • WannaCry: In May 2017, the WannaCry ransomware attack affected over 200,000 computers across 150 countries, with the UK’s National Health Service, Spain’s Telefónica, FedEx, and Deutsche Bahn among those impacted. The attack exploited a vulnerability in Microsoft Windows, and North Korea was widely blamed for it.
  • NotPetya: Initially thought to be ransomware, NotPetya emerged in 2017 and caused extensive damage, particularly in Ukraine. It later spread globally, affecting businesses and causing billions of dollars in damages. It is believed to have been a state-sponsored attack originating from Russia, designed as a geopolitical tool under the guise of ransomware.
  • SolarWinds: Uncovered in December 2020, the SolarWinds hack was a sophisticated supply chain attack that compromised the Orion software suite used by numerous US government agencies and thousands of private companies. It allowed the attackers, believed to be Russian state-sponsored, to spy on the internal communications of affected organizations for months.

In conclusion, malware is a versatile and dangerous tool in the hands of cybercriminals and state actors alike, used for everything from financial theft to sophisticated geopolitical maneuvers. The proliferation of malware in global affairs underscores the need for robust cybersecurity practices at all levels, from individual users to national governments. Awareness, education, and the implementation of comprehensive security measures are key to defending against the threats posed by malware.

Read more

Below is a list of the covert gang of folks trying to take down the US government — the anti-government oligarchs who think they run the place. The Koch network of megarich political operatives has anointed itself the true (shadowy) leadership of American politics for several decades.

Spearheaded by Charles Koch, the billionaire fossil fuel magnate who inherited his father Fred Koch’s oil business, the highly active and secretive Koch network — aka the “Kochtopus” — features a sprawling network of donors, think tanks, non-profits, political operatives, PR hacks, and other fellow travelers who have come to believe that democracy is incompatible with their ability to amass infinite amounts of wealth.

Despite their obvious and profligate success as some of the world’s richest people, they whine that the system of US government is very unfair to them and their ability to do whatever they want to keep making a buck — the environment, the people, and even the whole planet be damned. Part of an ever-larger wealth cult of individuals spending unprecedented amounts of cash to strip the US government of any ability to regulate business or create a social safety net for those exploited by concentrated (and to a large extent inherited) wealth, the Koch network is the largest and most formidable group within the larger project of US oligarchy.

The Kochtopus

By 2016 the Koch network of private political groups had a paid staff of 1,600 people in 35 states — a payroll larger than that of the Republican National Committee (RNC) itself. They managed a pool of funds from about 400 of the richest people in the United States, whose goal was to capture the government and run it according to their extremist views of economic and social policy. They found convenient alignment with the GOP, which has been the party of Big Business ever since it outgrew its original identity as the party of the Common Man in the 1850s and ’60s.

Are we to be just a wholly-owned subsidiary of Koch Industries? Who will help stand and fight for our independence from oligarchy?

  • Philip Anschutz — Founder of Qwest Communications. Colorado oil and entertainment magnate and billionaire dubbed the world’s “greediest executive” by Fortune Magazine in 2002.
  • American Energy Alliance — Koch-funded tax-exempt nonprofit lobbying for corporate-friendly energy policies
  • American Enterprise Institute — The American Enterprise Institute (AEI) is a public policy think tank based in Washington, D.C. Established in 1938, it is one of the oldest and most influential think tanks in the United States. AEI is primarily known for its conservative and free-market-oriented policy research and advocacy.
  • Americans for Prosperity
  • Harry and Lynde Bradley — midwestern defense contractors and Koch donors
  • Michael Catanzaro
  • Cato Institute
  • Center to Protect Patient Rights — The Koch network’s fake front group for fighting against Obama’s Affordable Care Act.
  • CGCN Group — right-wing lobbying group
  • Citizens for a Sound Economy
  • Club for Growth
  • Competitive Enterprise Institute — Right-wing think tank funded by the Kochs and other oil and gas barons
  • Continental Resources — Harold Hamm’s shale-oil company
  • Joseph Coors — Colorado beer magnate
  • Betsy and Dick DeVos — founders of the Amway MLM empire, and one of the richest families in Michigan
  • Myron Ebell — Outspoken climate change denier who previously worked at the Koch-funded Competitive Enterprise Institute and was picked to head Trump’s EPA transition team.
  • Richard Farmer — Chairman of the Cintas Corporation in Cincinnati, the nation’s largest uniform supply company. His legal problems have included an employee’s gruesome death caused by safety-law violations.
  • Freedom Partners — the Koch donor group
  • Freedom School — the all-white Colorado private school funded by Charles Koch in the 1960s
  • FreedomWorks
  • Richard Gilliam — Head of Virginia coal mining company Cumberland Resources, and Koch network donor.
  • Harold Hamm — Oklahoma fracking king and charter member of the Koch donors’ circle, Hamm became a billionaire founding the Continental Resources shale-oil company
  • Diane Hendricks — owner of a $3.6 billion building supply company, Trump inaugural committee donor, and the wealthiest woman in Wisconsin.
  • Charles Koch — CEO of Koch Industries and patriarch of the Koch empire following the deaths of his father and his brother David, and his estrangement from his other younger brother. Former member of the John Birch Society, a group so far to the right that even arch-conservative William F. Buckley excommunicated it from the mainstream conservative movement in the early 1960s.
  • The Charles Koch Foundation
  • (David Koch) — deceased twin brother of Bill Koch and younger brother of Charles; he ran a failed campaign in 1980 as the vice presidential nominee of the Libertarian Party, netting 1% of the popular vote. In 2011 he echoed spurious claims from conservative pundit Dinesh D’Souza that Obama got his “radical” political outlook from his African father.
  • The Leadership Institute
  • Michael McKenna — president of the lobbying firm MWR Strategies, whose clients include Koch Industries, picked by Trump to serve on the Department of Energy transition team
  • Rebekah Mercer — daughter of hedge fund billionaire and right-wing Koch donor Robert Mercer, she worked with Steve Bannon on several projects including Breitbart News, Cambridge Analytica, and Gab.
  • Robert Mercer — billionaire NY hedge fund manager and next largest donor after the Kochs themselves, sometimes even surpassing them
  • MWR Strategies — lobbying firm for the energy industry whose clients include Koch Industries, whose president Michael McKenna served on the Trump energy transition team
  • John M. Olin — chemical and munitions magnate and Koch donor
  • George Pearson — Former head of the Koch Foundation
  • Mike Pence — Charles Koch’s number one pick for president in 2012.
  • Mike Pompeo — former Republican Kansas Congressman who got picked first to lead the CIA, then later as Secretary of State under Trump. He was the single largest recipient of Koch money in Congress as of 2017. The Kochs had been investors and partners in Pompeo’s business ventures before he got into politics.
  • The Reason Foundation
  • Richard Mellon Scaife — heir to the Mellon banking and Gulf Oil fortunes
  • David Schnare — self-described “free-market environmentalist” on Trump’s EPA transition team
  • Marc Short — ran the Kochs’ secretive donor club, Freedom Partners, before becoming a senior advisor to vice president Mike Pence during the Trump transition
  • State Policy Network
  • The Tax Foundation
  • Tea Party

Koch Network Mind Map

This mind map shows the intersections between the Koch network and the larger network of GOP donors, reactionaries, and evil billionaires who feel entitled to control American politics via the fortunes they’ve made or acquired.

Read more

An echo chamber is a metaphorical description of a situation where an individual is encased in a bubble of like-minded information, reinforcing pre-existing views without exposure to opposing perspectives. This concept has gained prominence with the rise of digital and social media, where algorithms personalize user experiences, inadvertently isolating individuals from diverse viewpoints and enabling people to remain cloistered within a closed system that may contain misinformation and disinformation.

The role of digital media and algorithms

Digital platforms and social media leverage algorithms to tailor content that aligns with users’ past behaviors and preferences. This personalization, while enhancing engagement, fosters filter bubbles: closed environments laden with homogeneous information.

Such settings are ripe for the unchecked proliferation of disinformation, as they lack the diversity of opinion necessary for critical scrutiny. The need for critical thinking is greatly diminished when we are only ever exposed to information and beliefs we already agree with.
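
To make the dynamic concrete, here is a minimal, purely illustrative Python simulation of engagement-weighted recommendation. The topic names, weighting scheme, and click behavior are assumptions, not any real platform’s algorithm; the point is only that a feedback loop between clicks and recommendations narrows the feed over time.

```python
# Toy filter-bubble simulation: clicks on one topic make that topic
# increasingly likely to be recommended, crowding out everything else.
import random
from collections import Counter

TOPICS = ["politics_left", "politics_right", "sports", "science", "cooking"]

def recommend(prefs, n=5):
    """Sample n items, weighted by the user's accumulated clicks per topic."""
    weights = [1 + prefs[t] for t in TOPICS]  # +1 keeps unseen topics possible
    return random.choices(TOPICS, weights=weights, k=n)

prefs = Counter()
for day in range(30):
    feed = recommend(prefs)
    for item in feed:
        if item == "politics_left":   # the user reliably clicks one topic...
            prefs[item] += 1          # ...and the recommender learns to show more of it

print(recommend(prefs, n=10))  # after a month, the feed is dominated by one topic
```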

Disinformation in echo chambers

Echo chambers serve as breeding grounds for disinformation, where false information is designed to mislead and manipulate. In these closed loops, disinformation finds little resistance and is readily accepted and amplified, bolstering existing biases and misconceptions.

We all have psychological traits that make us vulnerable to believing things that aren’t true. Whether sourced via deception, misinterpretation, conspiracy theories, propaganda, or other phenomena, false beliefs are made stickier and harder to debunk when one is surrounded by an echo chamber.

Political polarization exacerbated

Beyond the scale of lone individuals, the isolation facilitated by echo chambers significantly contributes to political polarization more broadly. As people become entrenched in their informational silos, the common ground necessary for democratic discourse dwindles. This division not only fosters extremism but also undermines the social cohesion essential for a functioning democracy.

The impact of confirmation bias

Within echo chambers, confirmation bias (the tendency to favor information that corroborates existing beliefs) becomes particularly pronounced. This cognitive bias solidifies ideological positions, making individuals resistant to changing their views, even in the face of contradictory evidence.

The real-world effects of echo chambers transcend digital boundaries as well, influencing real-world political landscapes. Political actors can exploit these dynamics to deepen divides, manipulate public opinion, and mobilize support based on misinformation, leading to a polarized and potentially radicalized electorate.

Strategies for mitigation

Combating the challenges posed by echo chambers and disinformation necessitates a comprehensive approach:

  • Media Literacy: Educating the public to critically assess information sources, understand content personalization, and identify sources of biases and disinformation.
  • Responsible Platform Design: Encouraging digital platforms to modify algorithms to promote diversity in content exposure and implement measures against disinformation.
  • Regulatory Interventions: Policymakers may need to step in to ensure digital environments foster healthy public discourse.

Echo chambers, particularly within the digital media landscape, significantly impact the spread of disinformation and political polarization. By reinforcing existing beliefs and isolating individuals from diverse perspectives, they contribute to a divided society. Addressing this issue is critical and requires efforts in education, platform design, and regulation to promote a more informed and cohesive public discourse.

Read more

The chemtrails conspiracy theory emerged in the late 1990s. It posits that the long-lasting trails left by aircraft, conventionally known as contrails (short for condensation trails), are actually “chemical trails” (chemtrails). These chemtrails, according to believers, consist of chemical or biological agents deliberately sprayed at high altitudes by government or other agencies for purposes unknown to the general public. This theory gained momentum with the rise of the internet, allowing for widespread dissemination of disinformation, misinformation, and speculation.

Contrails of a Boeing 747-438 from Qantas at 11,000 m (36,000 ft), by Sergey Kustov

The roots of this theory can be traced back to a 1996 report by the United States Air Force titled “Weather as a Force Multiplier: Owning the Weather in 2025.” This document speculated on future weather modification technologies for military purposes. Conspiracy theorists misinterpreted this as evidence of ongoing weather manipulation. The theory was further fueled by a 1997 petition titled “Chemtrails – Ban High Altitude Aerial Spraying” and a 1999 broadcast by investigative journalist William Thomas, who claimed widespread spraying for unknown purposes.

Why people believe in chemtrails

  1. Distrust in Authority: A significant driver of belief in the chemtrail conspiracy is a general mistrust of governments and authoritative bodies. For some, it’s easier to believe in a malevolent secretive plot (which is often some kind of variation on the global cabal theory) than to trust official explanations.
  2. Cognitive Bias: Confirmation bias plays a crucial role. Individuals who believe in chemtrails often interpret ambiguous evidence as confirmation of their beliefs. The sight of a contrail, for instance, is perceived as direct evidence of chemtrail activity.
  3. Scientific Misunderstanding: Many chemtrail believers lack an understanding of atmospheric science. Contrails are formed when the hot humid exhaust from jet engines condenses in the cold, high-altitude air, forming ice crystals. This scientific process is often misunderstood or overlooked by proponents of the chemtrail theory.
  4. Social and Psychological Factors: Belief in conspiracies can be psychologically comforting for some, providing simple explanations for complex phenomena and a sense of control or understanding in a seemingly chaotic world. Social networks, both online via social media and offline through “meatspace” connections, also play a significant role in reinforcing these beliefs.

Chemtrails in the broader context of conspiracy thinking

The chemtrail conspiracy is part of a larger pattern of conspiratorial thinking that includes a range of other theories, from the relatively benign to the dangerously outlandish. This pattern often involves beliefs in a powerful, malevolent group controlling significant world events or possessing hidden knowledge.

  1. Relation to Other Theories: Chemtrail beliefs often intersect with other conspiracy theories. For example, some chemtrail believers also subscribe to New World Order or global depopulation theories like the white supremacist Great Replacement Theory.
  2. Impact on Public Discourse and Policy: The belief in chemtrails has occasionally influenced public discourse and policy. Local governments and councils have been petitioned to stop these perceived practices, reflecting the tangible impact of such beliefs.
  3. Challenges for Science and Education: Confronting the chemtrail conspiracy presents challenges for educators and scientists. Addressing scientific illiteracy and promoting critical thinking are key in combating the spread of such disinformation and misinformation.
  4. A Reflection of Societal Fears: The persistence of the chemtrail theory reflects broader societal fears and anxieties, particularly about government control, environmental destruction, and health concerns.
Contrails (but not chemtrails!) in the sky, by Midjourney

Chemtrails as part of a broader science denialism

The chemtrail conspiracy theory is a multifaceted phenomenon rooted in mistrust, scientific misunderstanding, and psychological factors. It is emblematic of a broader pattern of conspiracy thinking and science denialism that poses challenges to public understanding of science and rational discourse. Addressing these challenges requires a nuanced approach that includes education, transparent communication from authorities, and fostering critical thinking skills among the public.

This theory, while lacking credible scientific evidence, serves as a case study in how misinformation can spread and take root in society. It underscores the need for vigilance in how information is consumed and shared, especially in an age where digital media can amplify fringe theories with unprecedented speed and scale. Ultimately, understanding and addressing the underlying causes of belief in such theories is crucial in promoting a more informed and rational public discourse.

Read more

A “meme” is a term first coined by British evolutionary biologist Richard Dawkins in his 1976 book “The Selfish Gene.” Originally, it referred to an idea, behavior, or style that spreads from person to person within a culture. However, in the digital age, the term has evolved to specifically denote a type of media – often an image with text, but sometimes a video or a hashtag – that spreads rapidly online, typically through social media platforms like Facebook, Twitter/X, Reddit, TikTok, and generally all extant platforms.

Memes on the digital savannah

In the context of the internet, memes are a form of digital content that encapsulates a concept, joke, or sentiment in a highly relatable and easily shareable format. They often consist of a recognizable image or video, overlaid with humorous or poignant text that pertains to current events, popular culture, or universal human experiences. Memes have become a cornerstone of online communication, offering a way for individuals to express opinions, share laughs, and comment on societal norms.

Grumpy Cat meme: "There are two types of people in this world... and I hate them"

Once primarily a tool of whimsy, amusement, and even uplift, in recent years memes have become far more weaponized by trolls and bad actors as part of a broader shift in internet culture towards incivility and exploitation. The days of funny cats have been encroached upon by the racism and antisemitism of Pepe the Frog, beloved patron saint meme of the alt-right. The use of memes to inject cynicism or thinly-veiled white supremacy into culture and politics is an unwelcome trend that throws cold water on the formerly more innocent days of meme yore online.

Memes as tools of disinformation and information warfare

While memes are still used for entertainment and social commentary, they have also become potent tools for disseminating disinformation and conducting information warfare, both domestically and abroad. This is particularly evident in political arenas where, for instance, American right-wing groups have leveraged memes to spread their ideologies, influence public opinion, and discredit opposition.

  1. Simplicity and Virality: Memes are easy to create and consume, making them highly viral. This simplicity allows for complex ideas to be condensed into easily digestible and shareable content, often bypassing critical analysis from viewers.
  2. Anonymity and Plausible Deniability: The often-anonymous nature of meme creation and sharing allows individuals or groups to spread disinformation without accountability. The humorous or satirical guise of memes also provides a shield of plausible deniability against accusations of spreading falsehoods.
  3. Emotional Appeal: Memes often evoke strong emotional responses, which can be more effective in influencing public opinion than presenting factual information. The American right-wing, among other groups, has adeptly used memes to evoke feelings of pride, anger, or fear, aligning such emotions with their political messages.
  4. Echo Chambers and Confirmation Bias: Social media algorithms tend to show users content that aligns with their existing beliefs, creating echo chambers. Memes that reinforce these beliefs are more likely to be shared within these circles, further entrenching ideologies and sometimes spreading misinformation.
  5. Manipulation of Public Discourse: Memes can be used to distract from important issues, mock political opponents, or oversimplify complex social and political problems. This can skew public discourse and divert attention from substantive policy discussions or critical events.
  6. Targeting the Undecided: Memes can be particularly effective in influencing individuals who are undecided or less politically engaged. Their simplicity and humor can be more appealing than traditional forms of political communication, making them a powerful tool for shaping opinions.

Memes in political campaigns

Memes have been used to discredit candidates or push particular narratives that favor right-wing ideologies. Memes have also been employed to foster distrust in mainstream media and institutions, promoting alternative, often unfounded narratives that align with right-wing agendas.

Trump QAnon meme: "The Storm is Coming" in Game of Thrones font, shared on Truth Social

While often benign and humorous, memes can also be wielded as powerful tools of disinformation and information warfare. The American right-wing, among other political groups globally, has harnessed the viral nature of memes to influence public opinion, manipulate discourse, and spread their ideologies. As digital media continues to evolve, the role of memes in political and social spheres is likely to grow, making it crucial for consumers to approach them with a critical eye.

Read more

Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ various tactics such as sending threatening messages, spreading rumors online, posting sensitive or derogatory information, or impersonating someone to damage their reputation — escalating to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.

Read more

The “repetition effect” is a potent psychological phenomenon and a common propaganda device. This technique operates on the principle that repeated exposure to a specific message or idea increases the likelihood of its acceptance as truth or normalcy by an individual or the public. Its effectiveness lies in its simplicity and its exploitation of a basic human cognitive bias: the more we hear something, the more likely we are to believe it.

Repetition effect, by Midjourney

Historical context

The repetition effect has been used throughout history, but its most notorious use was by Adolf Hitler and the Nazi Party in Germany. Hitler, along with his Propaganda Minister, Joseph Goebbels, effectively employed this technique to disseminate Nazi ideology and promote antisemitism. In his autobiography “Mein Kampf,” Hitler wrote about the importance of repetition in reinforcing the message and ensuring that it reached the widest possible audience. He believed that the constant repetition of a lie would eventually be accepted as truth.

Goebbels echoed this sentiment, famously stating, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” The Nazi regime used this strategy in various forms, including in speeches, posters, films, and through controlled media. The relentless repetition of anti-Semitic propaganda, the glorification of the Aryan race, and the demonization of enemies played a crucial role in the establishment and maintenance of the Nazi regime.

Psychological basis

The effectiveness of the repetition effect is rooted in cognitive psychology. This bias is known as the “illusory truth effect,” where repeated exposure to a statement increases its perceived truthfulness. The phenomenon is tied to the ease with which familiar information is processed. When we hear something repeatedly, it becomes more fluent to process, and our brains misinterpret this fluency as a signal for truth.

Modern era usage

The transition into the modern era saw the repetition effect adapting to new media and communication technologies. In the age of television and radio, political figures and advertisers used repetition to embed messages in the public consciousness. The rise of the internet and social media has further amplified the impact of this technique. In the digital age, the speed and reach of information are unprecedented, making it easier for false information to be spread and for the repetition effect to be exploited on a global scale.

The repetition effect on screens and social media, by Midjourney

Political campaigns, especially in polarized environments, often use the repetition effect to reinforce their messages. The constant repetition of slogans, talking points, and specific narratives across various platforms solidifies these messages in the public’s mind, regardless of their factual accuracy.

Ethical considerations and countermeasures

The ethical implications of using the repetition effect are significant, especially when it involves spreading disinformation or harmful ideologies. It raises concerns about the manipulation of public opinion and the undermining of democratic processes.

To counteract the repetition effect, media literacy and critical thinking are essential. Educating the public about this psychological bias and encouraging skepticism towards repeated messages can help mitigate its influence. Fact-checking and the promotion of diverse sources of information also play a critical role in combating the spread of falsehoods reinforced by repetition.

Repetition effect: A key tool of propaganda

The repetition effect is a powerful psychological tool in the arsenal of propagandists and communicators. From its historical use by Hitler and the fascists to its continued relevance in the digital era, this technique demonstrates the profound impact of repeated messaging on public perception and belief.

While it can be used for benign purposes, such as in advertising or reinforcing positive social behaviors, its potential for manipulation and spreading misinformation cannot be understated. Understanding and recognizing the repetition effect is crucial in developing a more discerning and informed approach to the information we encounter daily.

Read more

Shitposting, a term that has seeped into the mainstream of internet culture, is often characterized by the act of posting deliberately provocative, off-topic, or nonsensical content in online communities and on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shit-post is defined by its lack of substantive content, its primary goal being to elicit attention and reactions — whether amusement, confusion, or irritation — from its intended audience. Closely related to trolling, shitposting is one aspect of a broader pantheon of bad faith behavior online.

Shit-poster motivations

The demographic engaging in shit-posting is diverse, cutting across various age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shit-posting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset might employ it to spread disinformation and misinformation, sow discord, and/or harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shit-posting has assumed a significant role in recent elections, especially on platforms like Twitter / X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use this tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shit-posts that are laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases — or exploit them.

Recognition and response

Recognizing shit-posting involves a discerning eye. Key indicators include the use of hyperbole, irony, non-sequiturs, and content that seems outlandishly out of place or context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shit-posting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post. If it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that are disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or choosing not to engage are viable strategies. The key is not to feed into the cycle of provocation and reaction that shit-posting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land -- by Midjourney

Fighting back

Shit-posting, in its many forms, is a complex phenomenon in the digital age. It straddles the line between being a form of modern-day satire and a tool for misinformation, propaganda, and/or cyberbullying. As digital communication continues to evolve, understanding the nuances of shit-posting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shit-posting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, including phenomena like shit-posting, is particularly fascinating and significant in the broader context of digital culture and political discourse.

Read more

Science denialism has a complex and multifaceted history, notably marked by a significant event in 1953 that set a precedent for the tactics of disinformation widely observed in various spheres today, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced back to a pivotal meeting in December 1953, involving the heads of the four largest American tobacco companies. This meeting was a response to emerging scientific research linking smoking to lung cancer — a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy. This strategy was designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) as an organization to cast doubt on the established science, and prevent the public from knowing about the lethal dangers of smoking.

And it worked — for over 40 years. The public did not form a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up and Big Tobacco agreed to pay a record-breaking settlement of more than $200 billion under the Tobacco Master Settlement Agreement (MSA) of 1998, after four decades of mercilessly lying to the American people.

smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the “disinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests — whether profit, power, or both.

Over the next 7 decades — up to the present day — this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and perhaps most notably: the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by various industries and groups facing similar scientific challenges to their products or ideologies. For instance, the fossil fuel industry used similar tactics to cast doubt on global warming — leading to the phenomenon of climate change denialism. Chemical manufacturers have disputed science on the harmful effects of certain chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve their fantastic profits once science discovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit one single finding of the scientific community when you could cast doubt on the entire process of science itself — as a way of future-proofing any government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient — despite strong and often overwhelming evidence. This is evident in manufactured “debates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse — as just one aspect of the seeming politicization of absolutely everything in modern life and culture.

Strategies for combatting science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking skills among the general public is crucial. This involves not just understanding scientific facts, but also developing an understanding of the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including the discussion of uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.

Read more

Climate Change Denial: From Big Tobacco Tactics to Today’s Global Challenge

In the complex narrative of global climate change, one pervasive thread is the phenomenon of climate change denial. This denial isn’t just a refusal to accept the scientific findings around climate change; it is a systematic effort to discredit and cast doubt on environmental realities and the need for urgent action.

Remarkably, the roots of this denial can be traced back to the strategies used by the tobacco industry in the mid-20th century to obfuscate the link between smoking and lung cancer. Fossil fuel companies later adopted the same playbook, conspiring to mount a disinformation campaign against the growing scientific consensus on the manmade nature of climate change and to cast doubt on the link between the burning of fossil fuels and the destruction of the planet’s natural ecosystems — and this strategy of manufactured doubt has succeeded, in one industry or another, for over half a century, beginning in 1953.

climate change and its denial, by Midjourney

Origins in big tobacco’s playbook

The origins of climate change denial lie in a well-oiled public relations machine initially designed by the tobacco industry. When scientific studies began linking smoking to lung cancer in the 1950s, tobacco companies launched an extensive campaign to challenge these findings. Their strategy was not to disprove the science outright but to sow seeds of doubt, suggesting that the research was not conclusive and that more studies were needed. This strategy of manufacturing doubt proved effective in delaying regulatory and public action against tobacco products for more than five decades.

Adoption by climate change deniers

This playbook was later adopted by those seeking to undermine climate science. In the late 20th century, as scientific consensus grew around the human impact on global warming, industries and political groups with a vested interest in maintaining the status quo began to employ similar tactics of lying at scale. They funded research to challenge or undermine climate science, supported think tanks and lobbyists to influence public opinion and policy, and used media outlets to spread a narrative of uncertainty and skepticism.

Political consequences

The political consequences of climate change denial have been profound. In the United States and other countries, it has polarized the political debate over environmental policy, turning what is fundamentally a scientific issue into a partisan one. This politicization has hindered comprehensive national and global policies to combat climate change, as legislative efforts are often stalled by ideological conflicts.

a burning forest of climate change, by Midjourney

Denial campaigns have also influenced public opinion, creating a significant segment of the population that is skeptical of climate science years after overwhelming scientific consensus has been reached, which further complicates efforts to implement wide-ranging environmental reforms.

Current stakes and global impact

Today, the stakes of climate change denial could not be higher. As the world faces increasingly severe consequences of global warming (extreme weather events, rising sea levels, and disruptions to ecosystems), the need for decisive action becomes more urgent. Yet, climate change denial continues to impede progress. By casting doubt on scientific consensus, it hampers efforts to build the broad public support necessary for bold environmental policies that may help thwart or mitigate some of the worst disasters.

Moreover, climate change denial poses a significant risk to developing countries, which are often the most vulnerable to climate impacts but the least equipped to adapt. Denialism in wealthier nations can lead to a lack of global cooperation and support needed to address these challenges comprehensively.

Moving forward: acknowledging the science and embracing action

To effectively combat climate change, it is crucial to recognize the roots and ramifications of climate change denial. Understanding its origins in the Big Tobacco disinformation strategy helps demystify the tactics used to undermine environmental science. It’s equally important to acknowledge the role of political and economic interests in perpetuating this denial — oil tycoon Charles Koch alone spends almost $1 billion per election cycle, much of it in support of climate deniers.

A climate change desert, by Midjourney

However, there is a growing global movement acknowledging the reality of climate change and the need for urgent action. From international agreements like the Paris Accord to grassroots activism pushing for change, there is a mounting push against the tide of denial.

Climate change denial, with its roots in the Big Tobacco playbook, poses a significant obstacle to global efforts to address environmental challenges. Its political ramifications have stalled critical policy initiatives, and its ongoing impact threatens global cooperation. As we face the increasing urgency of climate change, acknowledging and countering this denial is crucial for paving the way towards a more sustainable and resilient future.

Read more

Sockpuppets are fake social media accounts used by trolls for deceptive and covert actions, avoiding culpability for abuse, aggression, death threats, doxxing, and other criminal acts against targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are “sockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The Strategic Use of Sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case Studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during times of crisis and elections, and most notably during Russia’s ongoing war of aggression against its neighbor.
  3. The United States: The 2016 US Presidential elections saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.
Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
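
For illustration, here is a toy Python heuristic (not any platform’s actual detector) that flags one simple coordination signal mentioned above: several accounts pushing near-identical text within a short time window. All account names, posts, and thresholds are made up for the example.

```python
# Toy coordination check: flag texts posted by several accounts within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [  # assumed toy data: (account, text, timestamp)
    ("user_a", "Candidate X is a traitor", datetime(2024, 1, 1, 12, 0)),
    ("user_b", "Candidate X is a traitor", datetime(2024, 1, 1, 12, 2)),
    ("user_c", "Candidate X is a traitor", datetime(2024, 1, 1, 12, 3)),
    ("user_d", "I planted tomatoes today", datetime(2024, 1, 1, 12, 4)),
]

def flag_coordinated(posts, window=timedelta(minutes=10), min_accounts=3):
    """Group posts by normalized text; flag texts pushed by several accounts in a short window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))
    flagged = {}
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        accounts = {a for a, _ in entries}
        if len(accounts) >= min_accounts and entries[-1][1] - entries[0][1] <= window:
            flagged[text] = sorted(accounts)
    return flagged

print(flag_coordinated(posts))
# {'candidate x is a traitor': ['user_a', 'user_b', 'user_c']}
```

Real detection systems combine many weak signals like this (timing, shared images, follower graphs, account age) rather than relying on any single one.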

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.

Read more

Deep fakes, a term derived from “deep learning” (a subset of AI) and “fake,” refer to highly realistic, AI-generated digital forgeries of real human beings. These sophisticated imitations can be videos, images, or audio clips where the person appears to say or do things they never actually did.

The core technology behind deep fakes is based on machine learning and neural network algorithms. Two competing AI systems work in tandem: one generates the fake content, while the other attempts to detect the forgeries. Over time, as the detection system identifies flaws, the generator learns from these mistakes, leading to increasingly convincing fakes.
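
The adversarial arrangement described above is commonly implemented as a generative adversarial network (GAN). Below is a minimal PyTorch sketch of that training loop; the tiny fully connected networks, dimensions, and random stand-in data are illustrative assumptions, not any particular deepfake system.

```python
# Minimal GAN training-step sketch (illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

generator = nn.Sequential(          # produces fake samples from random noise
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores samples as real (1) or fake (0)
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_batch = generator(noise)

    # Discriminator step: learn to separate real from generated samples.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_batch.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Example usage with random "real" data standing in for genuine images/audio:
train_step(torch.randn(32, data_dim))
```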

Deep fakes in politics

As the technology has become more accessible, it’s been used for various purposes, not all of them benign. In the political realm, deep fakes have a potential for significant impact. They’ve been used to create false narratives or manipulate existing footage, making it appear as though a public figure has said or done something controversial or scandalous. This can be particularly damaging in democratic societies, where public opinion heavily influences political outcomes. Conversely, in autocracies, deep fakes can be a tool for propaganda or to discredit opposition figures.

How to identify deep fakes

Identifying deep fakes can be challenging, but there are signs to look out for:

  1. Facial discrepancies: Imperfections in the face-swapping process can result in blurred or fuzzy areas, especially where the face meets the neck and hair. Look for any anomalies in facial expressions or movements that don’t seem natural.
  2. Inconsistent lighting and shadows: AI can struggle to replicate the way light interacts with physical objects. If the lighting or shadows on the face don’t match the surroundings, it could be a sign of manipulation.
  3. Audiovisual mismatches: Often, the audio does not perfectly sync with the video in a deep fake. Watch for delays or mismatches between spoken words and lip movements.
  4. Unusual blinking and breathing patterns: AI can struggle to accurately mimic natural blinking and breathing, leading to unnatural patterns (one way to quantify the blinking cue is sketched after this list).
  5. Contextual anomalies: Sometimes, the content of the video or the actions of the person can be a giveaway. If it seems out of character or contextually odd, it could be fake.
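
As one illustration of the blinking cue, researchers have used the eye aspect ratio (EAR), which drops sharply when an eye closes. The Python sketch below assumes per-eye landmark coordinates have already been extracted by some face-landmark tool (not shown) and simply counts blinks from a per-frame EAR series; thresholds and landmark ordering are assumptions.

```python
# Eye aspect ratio (EAR) sketch for spotting clips with implausibly few blinks.
import math

def ear(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6."""
    d = math.dist
    p1, p2, p3, p4, p5, p6 = eye
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def blink_count(ear_series, closed_threshold=0.2):
    """Count downward crossings of the threshold across per-frame EAR values."""
    blinks, closed = 0, False
    for value in ear_series:
        if value < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif value >= closed_threshold:
            closed = False
    return blinks

# Example: a 30-second clip with zero counted blinks would be a red flag,
# since people typically blink several times in that span.
```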

In democratic societies, the misuse of deep fakes can erode public trust in media, manipulate electoral processes, and increase political polarization. Fake videos can quickly spread disinformation and misinformation, influencing public opinion and voting behavior. Moreover, they can be used to discredit political opponents with false accusations or fabricated scandals.

In autocracies, deep fakes can be a potent tool for state propaganda. Governments can use them to create a false image of stability, prosperity, or unity, or conversely, to produce disinformation campaigns against perceived enemies, both foreign and domestic. This can lead to the suppression of dissent and the manipulation of public perception to support the regime.

Deep fakes with Donald Trump, by Midjourney

Response to deep fakes

The response to the threat posed by deep fakes has been multifaceted. Social media platforms and news organizations are increasingly using AI-based tools to detect and flag deep fakes. There’s also a growing emphasis on digital literacy, teaching the public to critically evaluate the content they consume.

Legal frameworks are evolving to address the malicious use of deep fakes. Some countries are considering legislation that would criminalize the creation and distribution of harmful deep fakes, especially those targeting individuals or designed to interfere in elections.

While deep fakes represent a remarkable technological advancement, they also pose a significant threat to the integrity of information and democratic processes. As this technology evolves, so must our ability to detect and respond to these forgeries. It’s crucial for both individuals and institutions to stay informed and vigilant against the potential abuses of deep fakes, particularly in the political domain. As we continue to navigate the digital age, the balance between leveraging AI for innovation and safeguarding against its misuse remains a key challenge.

Read more

PizzaGate originated in 2016 from the hacked emails of John Podesta, Hillary Clinton’s campaign chairman, published by WikiLeaks. Internet users on platforms like 4chan and Reddit began to interpret these emails, focusing on those that mentioned pizza and other food items. They falsely claimed these were code words for a child sex trafficking ring operated by high-ranking Democratic Party members and associated with a Washington, D.C., pizzeria named Comet Ping Pong.

The theory was fueled by various coincidences and misinterpretations. For instance, references to pizza were interpreted as part of a secret code, and the pizzeria’s quirky artwork was misconstrued as sinister symbolism. Despite the lack of credible evidence, these interpretations quickly gained traction online.

PizzaGate conspiracy theory, imagined by Midjourney

The broader political context

PizzaGate should be understood within the broader political context of the 2016 U.S. presidential election. This period was marked by intense partisanship and the proliferation of disinformation and fake news, with social media acting as a catalyst. The theory emerged against the backdrop of a highly contentious election, with Hillary Clinton as a polarizing figure. In such a climate, conspiracy theories found fertile ground to grow, particularly among those predisposed to distrust the political establishment.

Impact and aftermath

The most immediate and dangerous impact of PizzaGate was an incident in December 2016, when Edgar Maddison Welch, motivated by the conspiracy theory, fired a rifle inside Comet Ping Pong. Fortunately, there were no injuries. This incident highlighted the real-world consequences of online conspiracy theories and underscored the potential for online rhetoric to inspire violent actions.

In the aftermath, social media platforms faced criticism for allowing the spread of baseless allegations. This led to discussions about the role of these platforms in disseminating fake news and the balance between free speech and the prevention of harm.

Lasting effects

PizzaGate had several lasting effects:

  1. Polarization and distrust: It exacerbated political polarization and distrust towards mainstream media and political figures, particularly among certain segments of the population.
  2. Conspiracy culture: The incident became a significant part of the modern conspiracy culture, linking it to other conspiracy theories and contributing to a growing skepticism of official narratives.
  3. Social media policies: It influenced how social media companies manage content, leading to stricter policies against misinformation and the promotion of conspiracy theories.
  4. Public awareness: On a positive note, it raised public awareness about the dangers of misinformation and the importance of critical thinking in the digital age.
  5. Legitimacy of investigations: The theory, though baseless, led some people to question the legitimacy of genuine investigations into sexual misconduct and abuse, potentially undermining efforts to address these serious issues.

Caveat, Internet

PizzaGate serves as a stark reminder of the power of the internet to spread misinformation and the real-world consequences that can ensue. It reflects the complexities of the digital age, where information, regardless of its veracity, can be disseminated widely and rapidly. As we continue to navigate this landscape, understanding phenomena like PizzaGate becomes crucial in fostering a more informed and discerning online community — as well as thwarting the march of fascism.

Read more

ParadoxBot is an adorable chatbot who will cheerfully inform you about the Dark Arts

Sure, you could use the site search. Or, you could have a bot — try having a conversation with my blog via the following AI chatbot, ParadoxBot.

Ask it about conspiracy theories, or narcissism, or cults, or authoritarianism, or fascism, or disinformation — to name a few. You can also ask it about things like dark money, economics, history, and many topics at the intersection of political psychology.

It doesn’t index what’s on Foundations (yet) but it has ingested this site and you can essentially chat with the site itself via the ChatGPT-like interface below. Enjoy! And if you love it or hate it, find me on BlueSky (as @doctorparadox) or Mastodon and let me know your thoughts:
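
For the curious, a "chat with the site" bot generally works by retrieving the most relevant passages from a site's posts and handing them to a language model alongside the question. The sketch below illustrates that retrieval-augmented pattern with scikit-learn's TF-IDF and the OpenAI chat API; the post texts, model choice, and helper function are illustrative placeholders, not a description of how ParadoxBot is actually built.

```python
# Minimal retrieval-augmented chat sketch (illustrative only, not how
# ParadoxBot is actually implemented). Requires scikit-learn and the openai
# package, plus an OPENAI_API_KEY in the environment.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder stand-ins for scraped blog posts
posts = [
    "Post about conspiracy theories ...",
    "Post about authoritarianism and fascism ...",
    "Post about disinformation and dark money ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(posts)

def ask_site(question, top_k=2):
    # Rank posts by cosine similarity to the question and keep the best matches
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    context = "\n\n".join(posts[i] for i in scores.argsort()[::-1][:top_k])

    # Ask the model to answer using only the retrieved site content
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Answer using only this site content:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask_site("What is ketman?"))
```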

Tips for using ParadoxBot

  • Follow general good practice regarding prompt engineering.
  • If you don’t get an answer right away, try rephrasing your question. Even omitting or adding one word sometimes produces good results.
  • Try both broad and specific types of queries.
  • Dig deeper into any areas the bot turns up that sound interesting.
Read more

Republican vs. Democrat cage match boxing ring

Buckle up, we’re in for a wild ride. Many of the serious scholars of political history and authoritarian regimes are sounding the alarm bells that, although it is a very very good thing that we got the Trump crime family out of the Oval Office, it is still a very very bad thing for America to have so rapidly tilted towards authoritarianism. How did we get here?! How has hyper partisanship escalated to the point of an attempted coup by 126 sitting Republican House Representatives? How has political polarization gotten this bad?

These are some of the resources that have helped me continue grappling with that question, and with the rapidly shifting landscape of information warfare. How can we understand this era of polarization, this age of tribalism? This outline is a work in progress, and I’m planning to keep adding to this list as the tape keeps rolling.

Right-Wing Authoritarianism

Authoritarianism is both a personality type and a form of government — it operates at both the interpersonal and the societal level. The words authoritarian and fascist are often used interchangeably, but fascism is a more specific type of authoritarianism, and far more historically recent.

America has had flavors of authoritarianism since its founding, and when fascism came along the right-wing authoritarians ate it up — and deeply wanted the United States to be a part of it. Only after they became social pariahs did they change position to support American involvement in World War II — and some persisted even after the attack on Pearl Harbor.

With Project 2025, Trump now openly threatens fascism on America — and sadly, some are eager for it. The psychology behind both authoritarian leaders and followers is fascinating, overlooked, and misunderstood.

Scholars of authoritarianism

  • Hannah Arendt — The Origins of Totalitarianism
  • Bob Altemeyer — The Authoritarians
  • Derrida — the logic of the unconscious; performativity in the act of lying
  • ketman — Ketman is the psychological practice of concealing one’s true aims and beliefs, akin to doublethink in Orwell’s 1984. It is a central theme of Polish dissident Czesław Miłosz’s book The Captive Mind, about intellectual life under totalitarianism during the post-WWII Communist occupation.
  • Erich Fromm — coined the term “malignant narcissism” to describe the psychological character of the Nazis. He also wrote extensively about the mindset of the authoritarian follower in his seminal work, Escape from Freedom.
  • Eric Hoffer — his book The True Believer explores the mind of the authoritarian follower and the appeal of losing oneself in a totalist mass movement
  • Fascism — elevation of the id as the source of truth; enthusiasm for political violence
  • Tyrants and dictators
  • John Dean — 3 types of authoritarian personality:
    • social dominators
    • authoritarian followers
    • double highs — people who score high on both the social dominance and authoritarian follower scales, combining traits of each
  • Loyalty; hero worship
    • Freud = deeply distrustful of hero worship and worried that it indulged people’s needs for vertical authority. He found the archetype of the authoritarian primal father very troubling.
  • Ayn Rand
    • The Fountainhead (1943)
    • Atlas Shrugged (1957)
    • Objectivism ideology
  • Greatness Thinking; heroic individualism
  • Nietzsche — will to power; the Übermensch
  • Richard Hofstadter — The Paranoid Style
  • George Lakoff — moral framing; strict father morality
  • Neil Postman — Amusing Ourselves to Death
  • Anti-Intellectualism
    • Can be disguised as hyper-rationalism (Communism)
  • More authoritarianism books
Continue reading Hyper Partisanship: How to understand American political polarization
Read more