Technology

The “wallpaper effect” is a phenomenon in media, propaganda, and disinformation where individuals become influenced or even indoctrinated by being continuously exposed to a particular set of ideas, perspectives, or ideologies. This effect is akin to wallpaper in a room, which, though initially noticeable, becomes part of the unnoticed background over time.

The wallpaper effect plays a significant role in shaping public opinion and individual beliefs, often without the conscious awareness of the individuals affected.

Origins and mechanisms

The term “wallpaper effect” stems from the idea that constant exposure to a specific type of media or messaging can subconsciously influence an individual’s perception and beliefs, similar to how wallpaper in a room becomes a subtle but constant presence. The effect is reinforced by the human tendency to seek out information that aligns with existing beliefs, known as confirmation bias, and it leads to a situation where diverse viewpoints are overlooked and a single perspective dominates an individual’s information landscape.

The wallpaper effect, by DALL-E 3

Media and information bubbles

In the context of media, the wallpaper effect is exacerbated by the formation of information bubbles or echo chambers. These are environments where a person is exposed only to opinions and information that reinforce their existing beliefs.

The rise of digital media and personalized content algorithms has intensified this effect, as users often receive news and information tailored to their preferences, further entrenching their existing viewpoints. More insidiously, social media platforms tend to earn higher profits when they fill users’ feeds with ideological perspectives those users already agree with. More profitable still is the process of tilting them towards more extreme versions of those beliefs — a practice that in other contexts we call “radicalization.”

Role in propaganda and disinformation

The wallpaper effect is a critical tool in propaganda and disinformation campaigns. By consistently presenting a specific narrative or viewpoint, these campaigns can subtly alter the perceptions and beliefs of the target audience. Over time, the repeated exposure to these biased or false narratives becomes a backdrop to the individual’s understanding of events, issues, or groups, often leading to misconceptions or unwarranted biases.

Psychological impact

The psychological impact of the wallpaper effect is profound. It can lead to a narrowing of perspective, where individuals become less open to new information or alternative viewpoints. This effect can foster polarized communities and hyperpartisan politics, where dialogue and understanding between differing viewpoints become increasingly difficult.

Case studies and examples

Historically, authoritarian regimes have used the wallpaper effect to control public opinion and suppress dissent. By monopolizing the media landscape and continuously broadcasting their propaganda, these regimes effectively shaped the public’s perception of reality.

In contemporary times, this effect is also seen in democracies, where partisan news outlets or social media algorithms create a similar, though more fragmented, landscape of information bubbles.

Counteracting the wallpaper effect

Counteracting the wallpaper effect involves a multifaceted approach. Media literacy education is crucial, as it empowers individuals to critically analyze and understand the sources and content of information they consume.

Encouraging exposure to a wide range of viewpoints and promoting critical thinking skills are also essential strategies. Additionally, reforms in digital media algorithms to promote diverse viewpoints and reduce the creation of echo chambers can help mitigate this effect.

Implications for democracy and society

The wallpaper effect has significant implications for democracy and society. It can lead to a polarized public, where consensus and compromise become challenging to achieve. The narrowing of perspective and entrenchment of beliefs can undermine democratic discourse, leading to increased societal divisions and decreased trust in media and institutions.

The wallpaper effect is a critical phenomenon that shapes public opinion and belief systems. Its influence is subtle yet profound, as constant exposure to a specific set of ideas can subconsciously mold an individual’s worldview. Understanding and addressing this effect is essential in promoting a healthy, informed, and open society. Efforts to enhance media literacy, promote diverse viewpoints, and reform digital media practices are key to mitigating the wallpaper effect and fostering a more informed and less polarized public.


Election denialism, the refusal to accept credible election outcomes, has significantly impacted U.S. history, especially in recent years. This phenomenon is not entirely new; election denial has roots that stretch back through various periods of American history. However, its prevalence and intensity have surged in the contemporary digital and political landscape, influencing public trust, political discourse, and the very fabric of democracy.

Historical context

Historically, disputes over election outcomes are as old as the U.S. electoral system itself. For instance, the fiercely contested 1800 election between Thomas Jefferson and John Adams resulted in a constitutional amendment (the 12th Amendment) to prevent similar confusion in the future. The 1876 election between Rutherford B. Hayes and Samuel J. Tilden was resolved through the Compromise of 1877, which effectively ended Reconstruction and had profound effects on the Southern United States.

Yet these instances, while contentious, were resolved within the framework of existing legal and political mechanisms, without denying the legitimacy of the electoral process itself. Over time, claims of election fraud would come to be levied against the electoral and political system itself — with dangerous implications for the peaceful transfer of power upon which democracy rests.

Voting box in an election, by Midjourney

The 21st century and digital influence

Fast forward to the 21st century, and election denialism has taken on new dimensions, fueled by the rapid dissemination of disinformation (and misinformation) through digital media and a polarized political climate. The 2000 Presidential election, with its razor-thin margins and weeks of legal battles over Florida’s vote count, tested the country’s faith in the electoral process.

Although the Supreme Court’s decision in Bush v. Gore was deeply controversial, Al Gore’s concession helped to maintain the American tradition of peaceful transitions of power.

The 2020 Election: A flashpoint

The 2020 election, marked by the COVID-19 pandemic and an unprecedented number of mail-in ballots, became a flashpoint for election denialism. Claims of widespread voter fraud and electoral malfeasance were propagated at the highest levels of government, despite a lack of evidence substantiated by multiple recounts, audits, and legal proceedings across several states.

The refusal to concede by President Trump and the storming of the U.S. Capitol on January 6, 2021, marked a watershed moment in U.S. history, where election denialism moved from the fringes to the center of political discourse, challenging the norms of democratic transition. Widely referred to as The Big Lie, the baseless claims of election fraud that persist on the right wing to this day are regarded by justice officials, legal analysts, and a host of concerned citizens as a form of election subversion in their own right, part of ongoing attempts to overthrow democracy in the United States.

Implications, public trust, and voter suppression

The implications of this recent surge in election denialism are far-reaching. It has eroded public trust in the electoral system, with polls indicating a significant portion of the American populace doubting the legitimacy of election results. This skepticism is not limited to the national level but has trickled down to local elections, with election officials facing threats and harassment. The spread of misinformation, propaganda, and conspiracy theories about electoral processes and outcomes has become a tool for political mobilization, often exacerbating divisions within American society.

Moreover, election denialism has prompted legislative responses at the state level, with numerous bills introduced to restrict voting access in the name of election security. These measures have sparked debates about voter suppression and the balance between securing elections and ensuring broad electoral participation. The challenge lies in addressing legitimate concerns about election integrity while avoiding the disenfranchisement of eligible voters.

Calls for reform and strengthening democracy

In response to these challenges, there have been calls for reforms to strengthen the resilience of the U.S. electoral system. These include measures to enhance the security and transparency of the voting process, improve the accuracy of voter rolls, and counter misinformation about elections. There’s also a growing emphasis on civic education to foster a more informed electorate capable of critically evaluating electoral information.

The rise of election denialism in recent years highlights the fragility of democratic norms and the crucial role of trust in the electoral process. While disputes over election outcomes are not new, the scale and impact of recent episodes pose unique challenges to American democracy. Addressing these challenges requires a multifaceted approach, including legal, educational, and technological interventions, to reinforce the foundations of democratic governance and ensure that the will of the people is accurately and fairly represented.


A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where much of our information comes from the internet and online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content – be it news, search results, or social media posts – based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
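To make the mechanics concrete, here is a minimal sketch, in Python, of how engagement-driven personalization can narrow a feed. It is a toy model with hypothetical topic labels and a made-up click history, not any platform’s actual ranking system:

```python
from collections import Counter

def rank_feed(posts, click_history, k=5):
    """Order candidate posts by affinity to the user's past clicks."""
    affinity = Counter(item["topic"] for item in click_history)
    # Topics the user already engages with score higher, so they float
    # to the top, which in turn generates more clicks on the same topics.
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)[:k]

posts = [{"id": i, "topic": t} for i, t in enumerate(
    ["politics_left", "politics_left", "politics_right", "science", "sports"])]
history = [{"topic": "politics_left"}] * 10  # a user who clicks one viewpoint

print(rank_feed(posts, history, k=3))  # the feed converges on the favored topic
```

The feedback loop is the point: each click sharpens the affinity scores, and the sharpened scores narrow the next ranking even further.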

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.


The concept of a “honeypot” in the realms of cybersecurity and information warfare is a fascinating and complex one, straddling the line between deception and defense. At its core, a honeypot is a security mechanism designed to mimic systems, data, or resources to attract and detect unauthorized users or attackers, essentially acting as digital bait. By engaging attackers, honeypots serve multiple purposes: they can distract adversaries from more valuable targets, gather intelligence on attack methods, and help in enhancing security measures.

Origins and Usage

The use of honeypots dates back to the early days of computer networks, evolving significantly with the internet’s expansion. Initially, they were simple traps set to detect anyone probing a network. However, as cyber threats grew more sophisticated, so did honeypots, transforming into complex systems designed to emulate entire networks, applications, or databases to lure in cybercriminals.

A honeypot illustration with a circuit board beset by a bee, by Midjourney

Honeypots are used by a variety of entities, including corporate IT departments, cybersecurity firms, government agencies, and even individuals passionate about cybersecurity. Their versatility means they can be deployed in almost any context where digital security is a concern, from protecting corporate data to safeguarding national security.

Types and purposes

There are several types of honeypots, ranging from low-interaction honeypots, which simulate only the services and applications attackers might find interesting, to high-interaction honeypots, which are complex and fully-functional systems designed to engage attackers more deeply. The type chosen depends on the specific goals of the deployment, whether it’s to gather intelligence, study attack patterns, or improve defensive strategies.
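As an illustration of the low-interaction end of that spectrum, here is a minimal sketch in Python: a listener that impersonates a service banner and logs connection attempts. The port and banner are arbitrary choices for the example; a real deployment would add isolation, rate limiting, and careful log handling:

```python
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222            # arbitrary unprivileged port for the example
BANNER = b"SSH-2.0-OpenSSH_8.2p1\r\n"   # plausible-looking fake service banner

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            conn.sendall(BANNER)          # present the bait
            probe = conn.recv(1024)       # record whatever the visitor sends
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {addr[0]}:{addr[1]} -> {probe!r}")
```

Even something this simple yields useful telemetry: any connection is by definition unsolicited, so every log line is a lead.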

In the context of information warfare, honeypots serve as a tool for deception and intelligence gathering. They can be used to mislead adversaries about the capabilities or intentions of a state or organization, capture malware samples, and even identify vulnerabilities in the attacker’s strategies. By analyzing the interactions attackers have with these traps, defenders can gain insights into their tactics, techniques, and procedures (TTPs), enabling them to better anticipate and mitigate future threats.

Historical effects

Historically, honeypots have had significant impacts on both cybersecurity and information warfare. They’ve led to the discovery of new malware strains, helped dismantle botnets, and provided critical intelligence about state-sponsored cyber operations. For example, honeypots have been instrumental in tracking the activities of sophisticated hacking groups, leading to a deeper understanding of their targets and methods, which, in turn, has informed national security strategies and cybersecurity policies.

One notable example is the GhostNet investigation, which uncovered a significant cyber espionage network targeting diplomatic and governmental institutions worldwide. Honeypots played a key role in identifying the malware and command-and-control servers used in these attacks, highlighting the effectiveness of these tools in uncovering covert operations.

Honeypot hackers and cybercriminals

Ethical and practical considerations

While the benefits of honeypots are clear, their deployment is not without ethical and practical considerations. There’s a fine line between deception for defense and entrapment, raising questions about the legality and morality of certain honeypot operations, especially in international contexts where laws and norms may vary widely.

Moreover, the effectiveness of a honeypot depends on its believability and the skill with which it’s deployed and monitored. Poorly configured honeypots might not only fail to attract attackers but could also become liabilities, offering real vulnerabilities to be exploited.

Cyber attackers and defenders

Honeypots are a critical component of the cybersecurity and information warfare landscapes, providing valuable insights into attacker behaviors and tactics. They reflect the ongoing cat-and-mouse game between cyber attackers and defenders, evolving in response to the increasing sophistication of threats. As digital technologies continue to permeate all aspects of life, the strategic deployment of honeypots will remain a vital tactic in the arsenal of those looking to protect digital assets and information. Their historical impacts demonstrate their value, and ongoing advancements in technology promise even greater potential in understanding and combating cyber threats.

By serving as a mirror to the tactics and techniques of adversaries, honeypots help illuminate the shadowy world of cyber warfare, making them indispensable tools for anyone committed to safeguarding information in an increasingly interconnected world.


SOTU 2024 Joe Biden Presidential address

The strong economic message of Bidenomics, a Keynesian buttressing of the middle class, was everywhere in evidence at last night’s State of the Union address, Biden’s third since taking office in 2021. In SOTU 2024 he spoke of stabbing trickle-down economics in its gasping heart, calling it a repeated failure for the American people. Instead of giving another $2 trillion in tax cuts to billionaires, Biden wants to give back to the people who he says built America: the middle class.

The President delivered strong, sweeping language and vision reminiscent of LBJ’s Great Society and FDR’s New Deal. He also delivered a heartwarming sense of unity and an appeal to put down our bickering and get things done for the American people.

“We all come from somewhere — but we’re all Americans.”

This while lambasting the Republicans for scuttling the deal on the popular bipartisan immigration bill thanks to 11th-hour interference from TFG (“my predecessor,” as JRB called him). “This bill would save lives!” He is really effective at calling out the GOP’s hypocrisy on border security with this delivery.

“We can fight about the border or we can fix the border. Send me a bill!”

He is taking full advantage of being the incumbent candidate here. He has the power and the track record to do all these things he is promising, and he’s telling the exact truth about the Republican obstructionism preventing the American people from having their government work for them.

SOTU 2024 Joe Biden fiery speech with Kamala Harris and Mike Johnson in the background behind him

I love that he calls out Trump in this speech, without naming names — almost a kind of Voldemort effect. He who must not be named — because giving him the dignity even of a name is more than he deserves.

He says that Trump and his cabal of anti-democratic political operatives have ancient ideas (hate, revenge, reactionary, etc.) — and that you can’t lead America with ancient ideas. In America, we look towards the future — relentlessly. Americans want a president who will protect their rights — not take them away.

“I see a future… for all Americans!” he ends with, in a segment reminiscent of the great Martin Luther King’s “I Have a Dream” speech, with its clear vision of power and authority flowing from what is morally right and just, instead of what is corrupt and cronyish. It gave me hope for the future — that Americans will make the right choice, as we seem to have done under pressure, throughout our history. 🀞🏽


The term “hoax” is believed to derive from “hocus” (as in “hocus pocus”) and has been in use since the late 18th century. It originally referred to a trick or deception, often of a playful or harmless nature. The essence of a hoax was its capacity to deceive, typically for entertainment or to prove a point without malicious intent. Over time, the scope and implications of a hoax have broadened significantly. What was once a term denoting jest or trickery has morphed into a label for deliberate falsehoods intended to mislead or manipulate public perception.

From playful deception to malicious misinformation

As society entered the age of mass communication, the potential reach and impact of hoaxes expanded dramatically. The advent of newspapers, radio, television, and eventually the internet and social media platforms transformed the way information — and misinformation — circulated. Hoaxes began to be used not just for amusement but for more nefarious purposes, including political manipulation, financial fraud, and social engineering. The line between a harmless prank and damaging disinformation and misinformation became increasingly blurred.

The political weaponization of “hoax”

In the contemporary political landscape, particularly within the US, the term “hoax” has been co-opted as a tool for disinformation and propaganda. This strategic appropriation has been most visible among certain factions of the right-wing, where it is used to discredit damaging information, undermine factual reporting, and challenge the legitimacy of institutional findings or scientific consensus. This application of “hoax” serves multiple purposes: it seeks to sow doubt, rally political bases, and divert attention from substantive issues.

the politicization of hoaxes, via fake scandals that tie up the media unwittingly in bullshit for years, by DALL-E 3

This tactic involves labeling genuine concerns, credible investigations, and verified facts as “hoaxes” to delegitimize opponents and minimize the impact of damaging revelations. It is a form of gaslighting on a mass scale, where the goal is not just to deny wrongdoing but to erode the very foundations of truth and consensus. By branding something as a “hoax,” these actors attempt to preemptively dismiss any criticism or negative information, regardless of its veracity.

Case Studies: The “Hoax” label in action

High-profile instances of this strategy include the dismissal of climate change data, the denial of election results, and the rejection of public health advice during the COVID-19 pandemic. In each case, the term “hoax” has been employed not as a description of a specific act of deception, but as a blanket term intended to cast doubt on the legitimacy of scientifically or empirically supported conclusions. This usage represents a significant departure from the term’s origins, emphasizing denial and division over dialogue and discovery.

The impact on public discourse and trust

The strategic labeling of inconvenient truths as “hoaxes” has profound implications for public discourse and trust in institutions. It creates an environment where facts are fungible, and truth is contingent on political allegiance rather than empirical evidence. This erosion of shared reality undermines democratic processes, hampers effective governance, and polarizes society.

Moreover, the frequent use of “hoax” in political discourse dilutes the term’s meaning and impact, making it more difficult to identify and respond to genuine instances of deception. When everything can be dismissed as a hoax, the capacity for critical engagement and informed decision-making is significantly compromised.

Moving Forward: Navigating a “post-hoax” landscape

The challenge moving forward is to reclaim the narrative space that has been distorted by the misuse of “hoax” and similar terms. This involves promoting media literacy, encouraging critical thinking, and fostering a public culture that values truth and accountability over partisanship. It also requires the media, educators, and public figures to be vigilant in their language, carefully distinguishing between genuine skepticism and disingenuous dismissal.

The evolution of “hoax” from a term denoting playful deception to a tool for political disinformation reflects broader shifts in how information, truth, and reality are contested in the public sphere. Understanding this transformation is crucial for navigating the complexities of the modern informational landscape and for fostering a more informed, resilient, and cohesive society.


Malware, short for “malicious software,” is any software intentionally designed to cause damage to a computer, server, client, or computer network. This cybersecurity threat encompasses a variety of software types, including viruses, worms, trojan horses, ransomware, spyware, adware, and more. Each type has a different method of infection and damage.

Who uses malware and what for

Malware is utilized by a wide range of actors, from amateur hackers to sophisticated cybercriminals, and even nation-states. The motives can vary greatly:

  • Cybercriminals often deploy malware to steal personal, financial, or business information, which can be used for financial gain through fraud or direct theft.
  • Hacktivists use malware to disrupt services or bring attention to political or social causes.
  • Nation-states and state-sponsored actors might deploy sophisticated malware for espionage and intelligence, to gain strategic advantage, sabotage, or influence geopolitical dynamics.
Malware, illustrated by DALL-E 3

Role in disinformation and geopolitical espionage

Malware plays a significant role in disinformation campaigns and geopolitical espionage. State-sponsored actors might use malware to infiltrate the networks of other nations, steal sensitive information (hacked emails perhaps?), and manipulate or disrupt critical infrastructure. In terms of disinformation, malware can be used to gain unauthorized access to media outlets or social media accounts, spreading false information to influence public opinion or destabilize political situations.

Preventing malware

Preventing malware involves multiple layers of security measures:

  • Educate Users: The first line of defense is often the users themselves. Educating them about the dangers of phishing emails, teaching them not to click on suspicious links, and stressing the importance of not downloading or opening files from unknown sources can significantly reduce the risk of malware infections.
  • Regular Software Updates: Keeping all software up to date, including operating systems and antivirus programs, can protect against known vulnerabilities that malware exploits.
  • Use Antivirus Software: A robust antivirus program can detect and remove many types of malware. Regular scans and real-time protection features are crucial (a minimal sketch of the signature-matching idea appears after this list).
  • Firewalls: Both hardware and software firewalls can block unauthorized access to your network, which can help prevent malware from spreading.
  • Backups: Regularly backing up important data ensures that, in the event of a malware attack, the lost data can be recovered without paying ransoms or losing critical information.
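As a toy illustration of the signature-matching idea mentioned above, the Python sketch below hashes files and compares them against a blocklist of known-bad SHA-256 digests. The blocklist entry is a placeholder, not a real indicator of compromise, and real antivirus engines layer heuristics and real-time monitoring on top of this basic mechanism:

```python
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder digest; a real blocklist holds published IOC hashes
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> None:
    """Walk a directory tree and report any file matching the blocklist."""
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"MATCH: {path} is on the known-bad list")

scan(".")
```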

Famous malware incidents in foreign affairs

Several high-profile malware incidents have had significant implications in the realm of foreign affairs:

  • Stuxnet: Discovered in 2010, Stuxnet was a highly sophisticated worm that targeted supervisory control and data acquisition (SCADA) systems and was believed to be designed to damage Iran’s nuclear program. It is widely thought to be a cyberweapon developed by the United States and Israel, though neither has confirmed involvement.
  • WannaCry: In May 2017, the WannaCry ransomware attack affected over 200,000 computers across 150 countries, with the UK’s National Health Service, Spain’s Telefónica, FedEx, and Deutsche Bahn among those impacted. The attack exploited a vulnerability in Microsoft Windows, and North Korea was widely blamed for the attack.
  • NotPetya: Initially thought to be ransomware, NotPetya emerged in 2017 and caused extensive damage, particularly in Ukraine. It later spread globally, affecting businesses and causing billions of dollars in damages. It is believed to have been a state-sponsored attack originating from Russia, designed as a geopolitical tool under the guise of ransomware.
  • SolarWinds: Uncovered in December 2020, the SolarWinds hack was a sophisticated supply chain attack that compromised the Orion software suite used by numerous US government agencies and thousands of private companies. It allowed the attackers, believed to be Russian state-sponsored, to spy on the internal communications of affected organizations for months.

In conclusion, malware is a versatile and dangerous tool in the hands of cybercriminals and state actors alike, used for everything from financial theft to sophisticated geopolitical maneuvers. The proliferation of malware in global affairs underscores the need for robust cybersecurity practices at all levels, from individual users to national governments. Awareness, education, and the implementation of comprehensive security measures are key to defending against the threats posed by malware.


Below is a list of the covert gang of folks trying to take down the US government — the anti-government oligarchs who think they run the place. The Koch network of megarich political operatives has been anointing itself the true (shadowy) leaders of American politics for several decades.

Spearheaded by Charles Koch, the billionaire fossil fuel magnate who inherited his father Fred Koch’s oil business, the highly active and secretive Koch network — aka the “Kochtopus” — features a sprawling network of donors, think tanks, non-profits, political operatives, PR hacks, and other fellow travelers who have come to believe that democracy is incompatible with their ability to amass infinite amounts of wealth.

Despite their obvious and profligate success as some of the world’s richest people, they whine that the system of US government is very unfair to them and their ability to do whatever they want to keep making a buck — the environment, the people, and even the whole planet be damned. Part of an ever larger wealth cult of individuals spending unprecedented amounts of cash to strip the US government of any ability to regulate business or create a social safety net for those exploited by concentrated (and to a large extent inherited) wealth, the Koch network is the largest and most formidable group within the larger project of US oligarchy.

The Kochtopus

By 2016 the Koch network of private political groups had a paid staff of 1,600 people in 35 states — a payroll larger than that of the Republican National Committee (RNC) itself. They managed a pool of funds from some 400 of the richest people in the United States, whose goal was to capture the government and run it according to their extremist views of economic and social policy. They found convenient alignment with the GOP, which has been the party of Big Business ever since it succeeded in first being the party of the Common Man in the 1850s and ’60s.

Are we to be just a wholly-owned subsidiary of Koch Industries? Who will help stand and fight for our independence from oligarchy?

  • Philip Anschutz — Founder of Qwest Communications. Colorado oil and entertainment magnate and billionaire dubbed the world’s “greediest executive” by Fortune Magazine in 2002.
  • American Energy Alliance — Koch-funded tax-exempt nonprofit lobbying for corporate-friendly energy policies
  • American Enterprise Institute — Public policy think tank based in Washington, D.C. Established in 1938, the American Enterprise Institute (AEI) is one of the oldest and most influential think tanks in the United States, known primarily for its conservative, free-market-oriented policy research and advocacy.
  • Americans for Prosperity
  • Harry and Lynde Bradley — midwestern defense contractors and Koch donors
  • Michael Catanzaro
  • Cato Institute
  • Center to Protect Patient Rights — The Koch network’s fake front group for fighting against Obama‘s Affordable Care Act.
  • CGCN Group — right-wing lobbying group
  • Citizens for a Sound Economy
  • Club for Growth
  • Competitive Enterprise Institute — Right-wing think tank funded by the Kochs and other oil and gas barons
  • Continental Resources — Harold Hamm’s shale-oil company
  • Joseph Coors — Colorado beer magnate
  • Betsy and Dick DeVos — founders of the Amway MLM empire, and one of the richest families in Michigan
  • Myron Ebell — Outspoken climate change denier picked to head Trump’s EPA transition team who previously worked at the Koch-funded Competitive Enterprise Institute.
  • Richard Farmer — Chairman of the Cintas Corporation in Cincinnati, the nation’s largest uniform supply company. His legal troubles included a case over an employee’s gruesome death resulting from safety-law violations.
  • Freedom Partners — the Koch donor group
  • Freedom School — the all-white CO private school funded by Charles Koch in the 1960s
  • FreedomWorks
  • Richard Gilliam — Head of Virginia coal mining company Cumberland Resources, and Koch network donor.
  • Harold Hamm — Oklahoma fracking king and charter member of the Koch donors’ circle, Hamm became a billionaire founding the Continental Resources shale-oil company
  • Diane Hendricks — $3.6 billion building supply company owner and Trump inaugural committee donor, and the wealthiest woman in Wisconsin.
  • Charles Koch — CEO of Koch Industries and patriarch of the Koch empire following his father’s and brother’s deaths, and estrangement from his other younger brother. Former member of the John Birch Society, a group so far to the right that even arch-conservative William F. Buckley excommunicated it from the conservative mainstream in the early 1960s.
  • The Charles Koch Foundation
  • (David Koch) — deceased twin brother of Bill Koch and younger brother to Charles who ran a failed campaign in 1980 as the vice presidential nominee of the Libertarian Party — netting 1% of the popular vote. In 2011 he echoed spurious claims from conservative pundit Dinesh D’Souza that Obama got his “radical” political outlook from his African father.
  • The Leadership Institute
  • Michael McKenna — president of the lobbying firm MWR Strategies, whose clients include Koch Industries, picked by Trump to serve on the Department of Energy transition team
  • Rebekah Mercer — daughter of hedge fund billionaire and right-wing Koch donor Robert Mercer, she worked with Steve Bannon on several projects including Breitbart News, Cambridge Analytica, and Gab.
  • Robert Mercer — billionaire NY hedge fund manager and next largest donor after the Kochs themselves, sometimes even surpassing them
  • MWR Strategies — lobbying firm for the energy industry whose clients include Koch Industries, whose president Michael McKenna served on the Trump energy transition team
  • John M. Olin — chemical and munitions magnate and Koch donor
  • George Pearson — Former head of the Koch Foundation
  • Mike Pence — Charles Koch’s number one pick for president in 2012.
  • Mike Pompeo — former Republican Kansas Congressman who got picked first to lead the CIA, then later as Secretary of State under Trump. He was the single largest recipient of Koch money in Congress as of 2017. The Kochs had been investors and partners in Pompeo’s business ventures before he got into politics.
  • The Reason Foundation
  • Richard Mellon Scaife — heir to the Mellon banking and Gulf Oil fortunes
  • David Schnare — self-described “free-market environmentalist” on Trump’s EPA transition team
  • Marc Short — ran the Kochs’ secretive donor club, Freedom Partners, before becoming a senior advisor to vice president Mike Pence during the Trump transition
  • State Policy Network
  • The Tax Foundation
  • Tea Party

Koch Network Mind Map

This mind map shows the intersections between the Koch network and the larger network of GOP donors, reactionaries, and evil billionaires who feel entitled to control American politics via the fortunes they’ve made or acquired.


An echo chamber is a metaphorical description of a situation where an individual is encased in a bubble of like-minded information, reinforcing pre-existing views without exposure to opposing perspectives. This concept has gained prominence with the rise of digital and social media, where algorithms personalize user experiences, inadvertently isolating individuals from diverse viewpoints and enabling people to remain cloistered within a closed system that may contain misinformation and disinformation.

The role of digital media and algorithms

Digital platforms and social media leverage algorithms to tailor content that aligns with users’ past behaviors and preferences. This personalization, while enhancing engagement, fosters filter bubbles — closed environments laden with homogeneous information.

Such settings are ripe for the unchecked proliferation of disinformation, as they lack the diversity of opinion necessary for critical scrutiny. The need for critical thinking is greatly diminished when we are only ever exposed to information and beliefs we already agree with.

Disinformation in echo chambers

Echo chambers serve as breeding grounds for disinformation, where false information is designed to mislead and manipulate. In these closed loops, disinformation finds little resistance and is readily accepted and amplified, bolstering existing biases and misconceptions.

We all have psychological traits that make us vulnerable to believing things that aren’t true. Whether sourced via deception, misinterpretation, conspiracy theories, propaganda, or other phenomena, false beliefs are made stickier and harder to debunk when one is surrounded by an echo chamber.

Political polarization exacerbated

Beyond the scale of lone individuals, the isolation facilitated by echo chambers significantly contributes to political polarization more broadly. As people become entrenched in their informational silos, the common ground necessary for democratic discourse dwindles. This division not only fosters extremism but also undermines the social cohesion essential for a functioning democracy.

The impact of confirmation bias

Within echo chambers, confirmation bias — the tendency to favor information that corroborates existing beliefs — becomes particularly pronounced. This cognitive bias solidifies ideological positions, making individuals resistant to changing their views, even in the face of contradictory evidence.

The effects of echo chambers also transcend digital boundaries, influencing real-world political landscapes. Political actors can exploit these dynamics to deepen divides, manipulate public opinion, and mobilize support based on misinformation, leading to a polarized and potentially radicalized electorate.

Strategies for mitigation

Combating the challenges posed by echo chambers and disinformation necessitates a comprehensive approach:

  • Media Literacy: Educating the public to critically assess information sources, understand content personalization, and identify sources of biases and disinformation.
  • Responsible Platform Design: Encouraging digital platforms to modify algorithms to promote diversity in content exposure and implement measures against disinformation (one such approach is sketched after this list).
  • Regulatory Interventions: Policymakers may need to step in to ensure digital environments foster healthy public discourse.
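What “modifying algorithms to promote diversity” might mean in practice can be sketched in a few lines of Python. This is a hypothetical re-ranker, not any platform’s real system: it reserves a share of the top feed slots for posts outside the user’s usual ideological lean:

```python
def diversify(ranked_posts, user_lean, quota=0.3):
    """Promote a fixed share of out-of-bubble posts into the top slots."""
    inside = [p for p in ranked_posts if p["lean"] == user_lean]
    outside = [p for p in ranked_posts if p["lean"] != user_lean]
    n_promoted = max(1, int(len(ranked_posts) * quota))
    # Lead with out-of-bubble posts, then fall back to the original order.
    return outside[:n_promoted] + inside + outside[n_promoted:]

feed = [
    {"id": 1, "lean": "left"}, {"id": 2, "lean": "left"},
    {"id": 3, "lean": "right"}, {"id": 4, "lean": "center"},
    {"id": 5, "lean": "left"},
]
print(diversify(feed, user_lean="left"))  # out-of-bubble items now surface first
```

The design tradeoff is obvious even in the toy version: the promoted posts will, on average, get less engagement, which is precisely why purely engagement-optimized systems do not do this on their own.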

Echo chambers, particularly within the digital media landscape, significantly impact the spread of disinformation and political polarization. By reinforcing existing beliefs and isolating individuals from diverse perspectives, they contribute to a divided society. Addressing this issue is critical and requires efforts in education, platform design, and regulation to promote a more informed and cohesive public discourse.


The chemtrails conspiracy theory emerged in the late 1990s. It posits that the long-lasting trails left by aircraft, conventionally known as contrails (short for condensation trails), are actually “chemical trails” (chemtrails). These chemtrails, according to believers, consist of chemical or biological agents deliberately sprayed at high altitudes by government or other agencies for purposes unknown to the general public. This theory gained momentum with the rise of the internet, allowing for widespread dissemination of disinformation, misinformation, and speculation.

Contrails of a Boeing 747-438 from Qantas at 11,000 m (36,000 ft), by Sergey Kustov

The roots of this theory can be traced back to a 1996 report by the United States Air Force titled “Weather as a Force Multiplier: Owning the Weather in 2025.” This document speculated on future weather modification technologies for military purposes. Conspiracy theorists misinterpreted this as evidence of ongoing weather manipulation. The theory was further fueled by a 1997 petition titled “Chemtrails – Ban High Altitude Aerial Spraying” and a 1999 broadcast by investigative journalist William Thomas, who claimed widespread spraying for unknown purposes.

Why people believe in chemtrails

  1. Distrust in Authority: A significant driver of belief in the chemtrail conspiracy is a general mistrust of governments and authoritative bodies. For some, it’s easier to believe in a malevolent secretive plot (which is often some kind of variation on the global cabal theory) than to trust official explanations.
  2. Cognitive Bias: Confirmation bias plays a crucial role. Individuals who believe in chemtrails often interpret ambiguous evidence as confirmation of their beliefs. The sight of a contrail, for instance, is perceived as direct evidence of chemtrail activity.
  3. Scientific Misunderstanding: Many chemtrail believers lack an understanding of atmospheric science. Contrails are formed when the hot, humid exhaust from jet engines condenses in the cold, high-altitude air, forming ice crystals; the process is well enough understood to be quantitatively predictable (see the criterion sketched after this list). This scientific process is often misunderstood or overlooked by proponents of the chemtrail theory.
  4. Social and Psychological Factors: Belief in conspiracies can be psychologically comforting for some, providing simple explanations for complex phenomena and a sense of control or understanding in a seemingly chaotic world. Social networks, both online via social media and offline via “meatspace” connections, also play a significant role in reinforcing these beliefs.
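For the record, contrail formation is not merely hand-waving: a simplified statement of the standard Schmidt–Appleman criterion from atmospheric physics (reproduced here with typical values, not exact constants) gives the slope of the exhaust-air mixing line, and a contrail forms when mixing drives the plume to water saturation at sufficiently cold ambient temperatures:

```latex
% Slope of the exhaust-air mixing line (Schmidt-Appleman criterion, simplified):
% EI_{H2O} ~ 1.25 kg/kg is water vapor emitted per kg of kerosene burned,
% c_p ~ 1004 J/(kg K) the specific heat of air, P the ambient pressure,
% eps = 0.622 the molar mass ratio of water to air, Q ~ 43 MJ/kg the heat
% of combustion, and eta the engine's propulsion efficiency. A larger G
% (higher pressure or a more efficient engine) means contrails can form
% at warmer ambient temperatures.
G = \frac{EI_{\mathrm{H_2O}}\, c_p\, P}{\varepsilon\, Q\, (1 - \eta)}
```

In other words, whether a given flight leaves a persistent trail is a function of ordinary thermodynamics, not of what is being sprayed.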

Chemtrails in the broader context of conspiracy thinking

The chemtrail conspiracy is part of a larger pattern of conspiratorial thinking that includes a range of other theories, from the relatively benign to the dangerously outlandish. This pattern often involves beliefs in a powerful, malevolent group controlling significant world events or possessing hidden knowledge.

  1. Relation to Other Theories: Chemtrail beliefs often intersect with other conspiracy theories. For example, some chemtrail believers also subscribe to New World Order or global depopulation theories like the white supremacist Great Replacement Theory.
  2. Impact on Public Discourse and Policy: The belief in chemtrails has occasionally influenced public discourse and policy. Local governments and councils have been petitioned to stop these perceived practices, reflecting the tangible impact of such beliefs.
  3. Challenges for Science and Education: Confronting the chemtrail conspiracy presents challenges for educators and scientists. Addressing scientific illiteracy and promoting critical thinking are key in combating the spread of such disinformation and misinformation.
  4. A Reflection of Societal Fears: The persistence of the chemtrail theory reflects broader societal fears and anxieties, particularly about government control, environmental destruction, and health concerns.
Contrails (but not chemtrails!) in the sky, by Midjourney

Chemtrails as part of a broader science denialism

The chemtrail conspiracy theory is a multifaceted phenomenon rooted in mistrust, scientific misunderstanding, and psychological factors. It is emblematic of a broader pattern of conspiracy thinking and science denialism that poses challenges to public understanding of science and rational discourse. Addressing these challenges requires a nuanced approach that includes education, transparent communication from authorities, and fostering critical thinking skills among the public.

This theory, while lacking credible scientific evidence, serves as a case study in how misinformation can spread and take root in society. It underscores the need for vigilance in how information is consumed and shared, especially in an age where digital media can amplify fringe theories with unprecedented speed and scale. Ultimately, understanding and addressing the underlying causes of belief in such theories is crucial in promoting a more informed and rational public discourse.


A “meme” is a term first coined by British evolutionary biologist Richard Dawkins in his 1976 book “The Selfish Gene.” Originally, it referred to an idea, behavior, or style that spreads from person to person within a culture. However, in the digital age, the term has evolved to specifically denote a type of media – often an image with text, but sometimes a video or a hashtag – that spreads rapidly online, typically through social media platforms like Facebook, Twitter/X, Reddit, TikTok, and virtually every other extant platform.

Memes on the digital savannah

In the context of the internet, memes are a form of digital content that encapsulates a concept, joke, or sentiment in a highly relatable and easily shareable format. They often consist of a recognizable image or video, overlaid with humorous or poignant text that pertains to current events, popular culture, or universal human experiences. Memes have become a cornerstone of online communication, offering a way for individuals to express opinions, share laughs, and comment on societal norms.

Grumpy Cat meme: "There are two types of people in this world... and I hate them"

Once primarily a tool of whimsy, amusement, and even uplift, memes in recent years have been increasingly weaponized by trolls and bad actors as part of a broader shift in internet culture towards incivility and exploitation. The days of funny cats have been encroached upon by the racism and antisemitism of Pepe the Frog, beloved patron saint meme of the alt-right. The use of memes to project cynicism or thinly-veiled white supremacy into culture and politics is an unwelcome trend that throws cold water on the formerly more innocent days of meme yore online.

Memes as tools of disinformation and information warfare

While memes are still used for entertainment and social commentary, they have also become potent tools for disseminating disinformation and conducting information warfare, both domestically and abroad. This is particularly evident in political arenas where, for instance, American right-wing groups have leveraged memes to spread their ideologies, influence public opinion, and discredit opposition.

  1. Simplicity and Virality: Memes are easy to create and consume, making them highly viral. This simplicity allows for complex ideas to be condensed into easily digestible and shareable content, often bypassing critical analysis from viewers.
  2. Anonymity and Plausible Deniability: The often-anonymous nature of meme creation and sharing allows individuals or groups to spread disinformation without accountability. The humorous or satirical guise of memes also provides a shield of plausible deniability against accusations of spreading falsehoods.
  3. Emotional Appeal: Memes often evoke strong emotional responses, which can be more effective in influencing public opinion than presenting factual information. The American right-wing, among other groups, has adeptly used memes to evoke feelings of pride, anger, or fear, aligning such emotions with their political messages.
  4. Echo Chambers and Confirmation Bias: Social media algorithms tend to show users content that aligns with their existing beliefs, creating echo chambers. Memes that reinforce these beliefs are more likely to be shared within these circles, further entrenching ideologies and sometimes spreading misinformation.
  5. Manipulation of Public Discourse: Memes can be used to distract from important issues, mock political opponents, or oversimplify complex social and political problems. This can skew public discourse and divert attention from substantive policy discussions or critical events.
  6. Targeting the Undecided: Memes can be particularly effective in influencing individuals who are undecided or less politically engaged. Their simplicity and humor can be more appealing than traditional forms of political communication, making them a powerful tool for shaping opinions.

Memes in political campaigns

Memes have been used to discredit candidates or push particular narratives that favor right-wing ideologies. Memes have also been employed to foster distrust in mainstream media and institutions, promoting alternative, often unfounded narratives that align with right-wing agendas.

Trump QAnon meme: "The Storm is Coming" in Game of Thrones font, shared on Truth Social

While often benign and humorous, memes can also be wielded as powerful tools of disinformation and information warfare. The American right-wing, among other political groups globally, has harnessed the viral nature of memes to influence public opinion, manipulate discourse, and spread their ideologies. As digital media continues to evolve, the role of memes in political and social spheres is likely to grow, making it crucial for consumers to approach them with a critical eye.


Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ various tactics such as sending threatening messages, spreading rumors online, posting sensitive or derogatory information, or impersonating someone to damage their reputation, escalating in some cases to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.
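One of those patterns, the coordinated pile-on, lends itself to a simple illustration. The Python sketch below flags bursts of near-identical messages sent by many distinct accounts within a short window; the thresholds are hypothetical, and production moderation systems use far richer signals than this:

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated(messages, min_accounts=5, window=timedelta(minutes=10)):
    """messages: dicts with 'text', 'account', and datetime 'time' keys."""
    by_text = defaultdict(list)
    for msg in messages:
        normalized = " ".join(msg["text"].lower().split())  # crude normalization
        by_text[normalized].append(msg)
    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda m: m["time"])
        senders = {m["account"] for m in group}
        within_burst = group[-1]["time"] - group[0]["time"] <= window
        if len(senders) >= min_accounts and within_burst:
            flagged.append(text)  # many accounts, same words, tight timing
    return flagged
```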

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.


The “repetition effect” is a potent psychological phenomenon and a common propaganda device. This technique operates on the principle that repeated exposure to a specific message or idea increases the likelihood of its acceptance as truth or normalcy by an individual or the public. Its effectiveness lies in its simplicity and its exploitation of a basic human cognitive bias: the more we hear something, the more likely we are to believe it.

Repetition effect, by Midjourney

Historical context

The repetition effect has been used throughout history, but its most notorious use was by Adolf Hitler and the Nazi Party in Germany. Hitler, along with his Propaganda Minister, Joseph Goebbels, effectively employed this technique to disseminate Nazi ideology and promote antisemitism. In his autobiography “Mein Kampf,” Hitler wrote about the importance of repetition in reinforcing the message and ensuring that it reached the widest possible audience. He believed that the constant repetition of a lie would eventually be accepted as truth.

Goebbels echoed this sentiment; a line often attributed to him holds that “if you tell a lie big enough and keep repeating it, people will eventually come to believe it.” The Nazi regime used this strategy in various forms, including in speeches, posters, films, and through controlled media. The relentless repetition of antisemitic propaganda, the glorification of the Aryan race, and the demonization of enemies played a crucial role in the establishment and maintenance of the Nazi regime.

Psychological basis

The effectiveness of the repetition effect is rooted in cognitive psychology. This bias is known as the “illusory truth effect,” where repeated exposure to a statement increases its perceived truthfulness. The phenomenon is tied to the ease with which familiar information is processed. When we hear something repeatedly, it becomes more fluent to process, and our brains misinterpret this fluency as a signal for truth.

Modern era usage

The transition into the modern era saw the repetition effect adapting to new media and communication technologies. In the age of television and radio, political figures and advertisers used repetition to embed messages in the public consciousness. The rise of the internet and social media has further amplified the impact of this technique. In the digital age, the speed and reach of information are unprecedented, making it easier for false information to be spread and for the repetition effect to be exploited on a global scale.

The repetition effect on screens and social media, by Midjourney

Political campaigns, especially in polarized environments, often use the repetition effect to reinforce their messages. The constant repetition of slogans, talking points, and specific narratives across various platforms solidifies these messages in the public’s mind, regardless of their factual accuracy.

Ethical considerations and countermeasures

The ethical implications of using the repetition effect are significant, especially when it involves spreading disinformation or harmful ideologies. It raises concerns about the manipulation of public opinion and the undermining of democratic processes.

To counteract the repetition effect, media literacy and critical thinking are essential. Educating the public about this psychological bias and encouraging skepticism towards repeated messages can help mitigate its influence. Fact-checking and the promotion of diverse sources of information also play a critical role in combating the spread of falsehoods reinforced by repetition.

Repetition effect: A key tool of propaganda

The repetition effect is a powerful psychological tool in the arsenal of propagandists and communicators. From its historical use by Hitler and the Nazis to its continued relevance in the digital era, this technique demonstrates the profound impact of repeated messaging on public perception and belief.

While it can be used for benign purposes, such as in advertising or reinforcing positive social behaviors, its potential for manipulation and spreading misinformation cannot be overstated. Understanding and recognizing the repetition effect is crucial to developing a more discerning and informed approach to the information we encounter daily.


Shitposting, a term that has seeped into the mainstream of internet culture, is the act of posting deliberately provocative, off-topic, or nonsensical content in online communities and on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shitpost is defined by its lack of substantive content; its primary goal is to elicit attention and reactions — whether amusement, confusion, or irritation — from its intended audience. Closely related to trolling, shitposting is one aspect of a broader pantheon of bad-faith behavior online.

Shitposter motivations

The demographic engaging in shitposting is diverse, cutting across age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shitposting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset employs it to spread disinformation and misinformation, sow discord, and harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shitposting has assumed a significant role in recent elections, especially on platforms like Twitter/X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use the tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shitposts laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases — or exploit them.

Recognition and response

Recognizing shitposting takes a discerning eye. Key indicators include hyperbole, irony, non sequiturs, and content that seems outlandishly out of place or out of context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shitposting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post: if it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that spread disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or declining to engage are all viable strategies. The key is not to feed the cycle of provocation and reaction that shitposting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land, by Midjourney

Fighting back

Shitposting, in its many forms, is a complex phenomenon of the digital age. It straddles the line between modern-day satire and a tool for misinformation, propaganda, and cyberbullying. As digital communication continues to evolve, understanding the nuances of shitposting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shitposting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, shitposting included, is particularly fascinating and significant in the broader context of digital culture and political discourse.


Science denialism has a complex and multifaceted history, notably marked by a 1953 event that set a precedent for the disinformation tactics widely observed today in various spheres, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced to a pivotal meeting in December 1953 involving the heads of the four largest American tobacco companies. The meeting was a response to emerging scientific research linking smoking to lung cancer — a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) to cast doubt on the established science and keep the public from learning about the lethal dangers of smoking.

And it worked, for over 40 years. The public did not form a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up: under the Tobacco Master Settlement Agreement (MSA) of 1998, Big Tobacco agreed to pay a record-breaking settlement of more than $200 billion for its four decades of mercilessly lying to the American people.

Smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the “disinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests — whether profit, power, or both.

Over the subsequent seven decades, up to the present day, this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, the depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and, perhaps most notably, the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by other industries and groups facing similar scientific challenges to their products or ideologies. The fossil fuel industry, for instance, used similar tactics to cast doubt on global warming, giving rise to climate change denialism. Chemical manufacturers have disputed the science on the harmful effects of chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve its fantastic profits once science uncovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit a single finding of the scientific community when you could cast doubt on the entire process of science itself, pre-empting any future government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient — despite strong and often overwhelming evidence. This is evident in manufactured “debates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse, as one facet of the seeming politicization of nearly everything in modern life and culture.

Strategies for combating science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

Scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking among the general public is crucial. This involves not just knowing scientific facts, but also understanding the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including discussion of the uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.
