
survival of the richest -- they intend to escape somewhere pre-planned as the planet burns

Douglas Rushkoff’s “Survival of the Richest: Escape Fantasies of the Tech Billionaires” delves into the unsettling strategies of the ultra-wealthy broligarchs as they prepare for global catastrophes of their own making. Drawing from personal encounters with tech magnates, Rushkoff unveils a mindset, running rampant in Silicon Valley, that is fixated on personal survival over collective well-being.

The Mindset

At the heart of Rushkoff’s critique is “The Mindset,” a belief system among tech billionaires from Peter Thiel to Elon Musk and beyond, characterized by:

  • Extreme Wealth and Privilege: Leveraging vast resources to insulate themselves from societal collapse.
  • Escape Over Prevention: Prioritizing personal exit strategies rather than addressing systemic issues.
  • Technological Transcendence: Aiming to surpass human limitations through advanced technologies.

This worldview drives investments in elaborate escape plans, sidelining efforts to resolve the crises they anticipate. It is almost as if they are in a low-key doomsday cult, albeit one that lacks a singular leader and isn’t holed up in a compound (…yet).

A tech billionaire's private island escape plan -- how the rich will survive the coming catastrophes they've created

The Event

The term “The Event” encapsulates potential disasters such as environmental collapse (particularly from climate change), social unrest, pandemics, and cyberattacks. They believe we should expect more bitter divisiveness, more COVID-19s, and more hostile hacking in our future. The elite perceive these scenarios as unavoidable, focusing on personal survival rather than prevention.

Escape Strategies

Rushkoff examines the lengths to which the ultra-rich go to secure their futures, including:

  • Luxury Bunkers: Constructing fortified shelters to withstand various apocalyptic events.
  • Seasteading Communities: Developing autonomous, floating societies beyond governmental reach.
  • Space Colonies: Investing in extraterrestrial habitats as ultimate escape routes.
  • Life Extension Technologies: Pursuing methods to prolong life, aiming to outlast earthly crises.
  • Artificial Intelligence: Exploring consciousness uploading to achieve digital immortality.

These measures reflect a desire to detach from societal responsibilities and the broader human community.

The Insulation Equation

Rushkoff introduces the “insulation equation,” illustrating how billionaires calculate the wealth required to shield themselves from the fallout of their own actions. This cycle perpetuates reckless behavior and further wealth accumulation, exacerbating the very problems they seek to escape.

Critique of Capitalism and Technology

The book critiques the symbiotic relationship between capitalism and technology, highlighting:

  • Exponential Growth Pursuit: An obsession with endless expansion at any cost.
  • Shareholder Primacy: Prioritizing investor returns over societal or environmental considerations.
  • Erosion of Empathy: A growing disconnect between the wealthy and the rest of society.
  • Resource Exploitation: Reducing nature and human complexity to mere commodities.

Rushkoff argues that this dynamic fosters a dystopian future dominated by private technologies and monopolistic control — a very authoritarian direction.

Historical Context

Positioning today’s tech elites within a historical framework, Rushkoff contends they are not pioneers but continuations of past power structures that enriched themselves at others’ expense. Their perceived uniqueness is, in reality, a repetition of historical patterns, including colonialism.

Proposed Solutions

While primarily a critique, Rushkoff offers some ideas for pathways to counteract “The Mindset”:

  • Rejecting Doom’s Inevitability: Embracing proactive solutions over fatalistic resignation.
  • Supporting Local Economies: Fostering community resilience through localized commerce.
  • Advocating Anti-Monopoly Laws: Challenging corporate dominance to promote fair competition.
  • Redefining Identity: Moving beyond algorithmic categorizations to embrace human complexity.

Some critics argue these suggestions may not fully address the scale of the issues presented — but it’s much easier to be a critic than to come up with these solutions. We may not know all the answers yet as to how to curb these alarming trends, but I think Rushkoff’s point is well taken that we ought to involve ourselves in at least starting to work out the solutions with some urgency.

yet another glorious fantasy home of the richest and most famous who will leave the rest of us behind so they can survive

Ultimately, “Survival of the Richest” serves as a stark examination of the escapist fantasies of the tech elite, and an eye-opening look behind the curtains of the Great Ozes who dot our landscape today. These wealthy tech elites have promised the moon (or Mars) without knowing whether they could really deliver — all the while hatching a Plan B in case their harebrained schemes went belly-up. They are okay with sacrificing the vast majority of the people on the planet, as long as their underground bunkers (or better yet, private islands) are there for them.

By exposing their self-serving strategies, Rushkoff urges a shift from individualistic survivalism to collective action in tackling the many global challenges that face us today. We would be wise to heed the call and gather our tribes early and often.

Read more

Vladimir Putin and the Russian propaganda campaigns unsealed by the DOJ

In the digital age, the line between fact and fiction is often blurred, and nowhere is this more dangerous than in the realm of political influence. The power to shape narratives, sway public opinion, and manipulate democratic processes is no longer just the domain of politicians and pundits — it’s a high-stakes game involving shadowy operatives, shell companies, and an arsenal of disinformation tools. The latest indictments from the Department of Justice expose the scale of Russian propaganda campaigns and reveal just how deeply this game is rigged against us.

At the heart of this operation is a well-oiled propaganda machine, targeting the fault lines of American society — free speech, immigration, and even our national pastime of online gaming. And in the backdrop of these revelations looms the 2024 presidential election, a moment ripe for manipulation by foreign actors with the singular goal of deepening our divisions. While these efforts may feel like the plot of a dystopian thriller, they are all too real, with disinformation campaigns working to tilt the scales of democracy in favor of authoritarianism.

Last week, the Department of Justice released a treasure trove of indictments and accompanying information about the depth and breadth of the still ongoing Russian influence campaigns raging in the US and elsewhere — with a particular focus on sowing discord ahead of the US 2024 elections. Let’s take a look at the major pillars of the DOJ’s work.

RT employees and right-wing influencers indicted

On September 3, 2024, the Department of Justice filed an indictment against two Russian nationals, Kostiantyn Kalashnikov and Elena Afanasyeva, for covertly funding a Tennessee-based content creation company that published videos promoting Russian interests. According to the indictment, they funneled nearly $10 million through shell companies to spread pro-Russian propaganda and disinformation on U.S. social media platforms. The defendants posed as U.S.-based editors, directing content that amplified domestic divisions and supported Russian government narratives. Both are charged with conspiracy to violate the Foreign Agents Registration Act (FARA) and conspiracy to commit money laundering.

Although not specifically named, there are enough uniquely identifying clues in the document to identify the content company in the scheme as Tenet Media, a company run by married couple Liam Donovan and Lauren Chen — herself a prominent “conservative” commentator associated with Glenn Beck’s The Blaze and Charlie Kirk’s Turning Point USA. The six commentators who were being paid exorbitantly by the Russians for their content (as much as $100,000 per video) — all of whom, improbably, claim to have been duped — are Tim Pool, Dave Rubin, Benny Johnson, Tayler Hansen, Matt Christiansen, and Lauren Southern. All are outspoken Trump supporters, and all are on record parroting Russian talking points despite claiming the work was wholly their own.

Continue reading Russian propaganda campaigns exposed by the DOJ in a slew of indictments

Marc Andreessen, a prominent tech billionaire, co-founder of the venture capital firm Andreessen Horowitz, and one of Twitter (X)’s current investors, holds a complex and often controversial set of beliefs and ideologies. But who is Marc Andreessen, really — as in, what does he believe in? What is he using his wealth and power to achieve?

His perspectives are often polarizing, marrying an unyielding faith in the transformative power of technology with a worldview that is dismissive of societal concerns and hostile to traditional democratic values. Here are some of the key aspects of his views:

1. Techno-Optimism and Elitism

Andreessen is a strong advocate for techno-optimism, believing that technological advancements are the key to solving societal problems and driving progress. However, this optimism is often tied to an elitist worldview, where he sees technologists and wealthy entrepreneurs as the primary drivers of societal advancement.

His “Techno-Optimist Manifesto” outlines a vision where technologists are the leaders of society, unencumbered by social responsibility, trust, safety, and ethics — particularly in the realm of AI, which he believes ought to race ahead to whatever end, risks be damned.

2. Critique of Government and Social Structures

Andreessen criticizes the U.S. government for being strangled by special interests and lobbying, yet his firm has engaged in significant lobbying efforts.

He expresses disdain for centralized systems of government, particularly communism, while advocating for technologists to play a central role in planning and governing society.

Who is Marc Andreessen? A Silicon Valley venture capitalist and tech billionaire with extreme views about society

3. Accelerationism and Right-Wing Influences

Andreessen embraces “effective accelerationism,” a philosophy that champions technological advancement at any cost. This is influenced by thinkers like Nick Land, known for his anti-democratic and anti-egalitarian ideas.

His manifesto draws from the works of Friedrich Hayek, Milton Friedman, and Ayn Rand, reflecting a strong right-wing libertarian influence.

Continue reading Who is Marc Andreessen?

Fact-checking is a critical process used in journalism to verify the factual accuracy of information before it’s published or broadcast. This practice is key to maintaining the credibility and ethical standards of journalism and media as reliable information sources. It involves checking statements, claims, and data in various media forms for accuracy and context.

Ethical standards in fact-checking

The ethical backbone of fact-checking lies in journalistic integrity, emphasizing accuracy, fairness, and impartiality. Accuracy ensures information is cross-checked with credible sources. Fairness mandates balanced presentation, and impartiality requires fact-checkers to remain as unbiased in their evaluations as humanly possible.

To evaluate a media source’s credibility, look for a masthead, mission statement, about page, or ethics statement that explains the publication’s approach to journalism. Without a stated commitment to journalistic ethics and standards, it’s entirely possible the website or outlet is publishing opinion and/or unverified claims.

Fact-checking in the U.S.: A historical perspective

Fact-checking in the U.S. has evolved alongside journalism. The rise of investigative journalism in the early 20th century highlighted the need for thorough research and factual accuracy. However, recent developments in digital and social media have introduced significant challenges.

Challenges from disinformation and propaganda

The digital era has seen an explosion of disinformation and propaganda, particularly on social media. ‘Fake news’, a term now synonymous with fabricated or distorted stories, poses a significant hurdle for fact-checkers. The difficulty lies not only in the volume of information but also in the sophisticated methods used to spread falsehoods, such as deepfakes and doctored media.

Bias and trust issues in fact-checking

The subjectivity of fact-checkers has been scrutinized, with some suggesting that personal or organizational biases might influence their work. This perception has led to a trust deficit in certain circles, where fact-checking itself is viewed as potentially politically or ideologically motivated.

Despite challenges, fact-checking remains crucial for journalism. Future efforts may involve leveraging technology like AI for assistance, though human judgment is still essential. The ongoing battle against disinformation will require innovation, collaboration with tech platforms, transparency in the fact-checking process, and public education in media literacy.
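
As an illustration of what that AI assistance might look like in practice, here is a minimal sketch of claim triage: an incoming claim is matched by text similarity against a small database of previously checked claims so human fact-checkers can prioritize their work. The data here is invented, and real systems use far richer models, with humans making the final ruling.

```python
# A minimal sketch of AI-assisted fact-checking triage (hypothetical data;
# real systems use far richer models, and humans make the final ruling).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy database of claims that fact-checkers have already evaluated.
CHECKED_CLAIMS = [
    "Vaccines cause autism",                         # previously rated false
    "Human activity is the main driver of warming",  # previously rated true
]

def closest_checked_claim(new_claim: str):
    """Return the most similar previously checked claim and its score."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(CHECKED_CLAIMS + [new_claim])
    scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
    best = scores.argmax()
    return CHECKED_CLAIMS[best], float(scores[best])

claim, score = closest_checked_claim("Do vaccines really cause autism?")
print(f"Nearest checked claim: {claim!r} (similarity {score:.2f})")
```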

Fact-checking stands as a vital element of journalistic integrity and a bulwark against disinformation and propaganda. In the U.S., and globally, the commitment to factual accuracy is fundamental for a functioning democracy and an informed society. Upholding these standards helps protect the credibility of the media and trusted authorities, and supports the fundamental role of journalism in maintaining an informed public and a healthy democracy.

Read more

A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where much of our information comes from the internet and online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content – be it news, search results, or social media posts – based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
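
To make those mechanics concrete, here is a deliberately oversimplified sketch of engagement-driven ranking (not any platform's actual algorithm; the article data is invented). Because the feed is ranked by past clicks, each click narrows what surfaces next, and that feedback loop is what forms the bubble.

```python
# A toy model of personalization: rank the feed by how often the user
# previously clicked each topic. (Invented data; illustrative only.)
from collections import Counter

ARTICLES = [
    {"id": 1, "topic": "climate"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "climate"},
    {"id": 4, "topic": "politics"},
    {"id": 5, "topic": "sports"},
]

def rank_feed(articles, click_history):
    """Articles on topics the user clicked before float to the top."""
    topic_counts = Counter(a["topic"] for a in click_history)
    return sorted(articles, key=lambda a: topic_counts[a["topic"]], reverse=True)

# After a single click on a climate story, climate content dominates
# the top of the feed -- and each further click tightens the loop.
history = [{"id": 1, "topic": "climate"}]
for article in rank_feed(ARTICLES, history):
    print(article["id"], article["topic"])
```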

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.

Read more

SOTU 2024 Joe Biden Presidential address

Strong economic messaging was everywhere in evidence at last night’s State of the Union address, Biden’s third since taking office in 2021: the Keynesian buttressing of the middle class that is Bidenomics. In SOTU 2024 he spoke about stabbing trickle-down economics in its gasping heart, calling it a repeated failure for the American people. Instead of giving another $2 trillion in tax cuts to billionaires, Biden wants to give back to the people who he says built America: the middle class.

The President delivered strong, sweeping language and vision reminiscent of LBJ’s Great Society and FDR’s New Deal. He also delivered a heartwarming sense of unity and an appeal to put down our bickering and get things done for the American people.

“We all come from somewhere — but we’re all Americans.”

This while lambasting the Republicans for scuttling the popular bipartisan immigration bill thanks to 11th-hour interference from TFG (“my predecessor,” as JRB called him). “This bill would save lives!” He is really effective at calling out the GOP’s hypocrisy on border security with this delivery.

“We can fight about the border or we can fix the border. Send me a bill!”

He is taking full advantage of being the incumbent candidate here. He has the power and the track record to do all these things he is promising, and he’s telling the exact truth about the Republican obstructionism preventing the American people from having their government work for them.

SOTU 2024 Joe Biden fiery speech with Kamala Harris and Mike Johnson in the background behind him

I love that he calls out Trump in this speech, without naming names — almost a kind of Voldemort effect. He who must not be named — because giving him the dignity even of a name is more than he deserves.

He says that Trump and his cabal of anti-democratic political operatives have ancient ideas (hate, revenge, reaction, etc.) — and that you can’t lead America with ancient ideas. In America, we look towards the future — relentlessly. Americans want a president who will protect their rights — not take them away.

“I see a future… for all Americans!” he ends with, in a segment reminiscent of the great Martin Luther King’s “I Have a Dream” speech, with its clear vision of power and authority flowing from what is morally right and just, instead of what is corrupt and cronyish. It gave me hope for the future — that Americans will make the right choice, as we seem to have done under pressure, throughout our history. 🤞🏽

Continue reading Biden SOTU 2024: Success stories and big policy ideas

Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ a range of tactics, from sending threatening messages, spreading rumors online, posting sensitive or derogatory information, and impersonating someone to damage their reputation, up to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.

Read more

Shitposting, a term that has seeped into the mainstream of internet culture, is often characterized by the act of posting deliberately provocative, off-topic, or nonsensical content in online communities and on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shit-post is defined by its lack of substantive content, its primary goal being to elicit attention and reactions — whether amusement, confusion, or irritation — from its intended audience. Closely related to trolling, shitposting is one aspect of a broader pantheon of bad faith behavior online.

Shit-poster motivations

The demographic engaging in shit-posting is diverse, cutting across various age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shit-posting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset might employ it to spread disinformation and misinformation, sow discord, and/or harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shit-posting has assumed a significant role in recent elections, especially on platforms like Twitter / X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use this tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shit-posts that are laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases — or exploit them.

Recognition and response

Recognizing shit-posting involves a discerning eye. Key indicators include the use of hyperbole, irony, non-sequiturs, and content that seems outlandishly out of place or context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shit-posting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post. If it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that are disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or choosing not to engage are viable strategies. The key is not to feed into the cycle of provocation and reaction that shit-posting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land -- by Midjourney

Fighting back

Shit-posting, in its many forms, is a complex phenomenon in the digital age. It straddles the line between being a form of modern-day satire and a tool for misinformation, propaganda, and/or cyberbullying. As digital communication continues to evolve, understanding the nuances of shit-posting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shit-posting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, including phenomena like shit-posting, is particularly fascinating and significant in the broader context of digital culture and political discourse.

Read more

Science denialism has a complex and multifaceted history, notably marked by a significant event in 1953 that set a precedent for the tactics of disinformation widely observed in various spheres today, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced back to a pivotal meeting in December 1953, involving the heads of the four largest American tobacco companies. This meeting was a response to emerging scientific research linking smoking to lung cancer — a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy. This strategy was designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) as an organization to cast doubt on the established science, and prevent the public from knowing about the lethal dangers of smoking.

And it worked — for over 40 years. The public never formed a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up and Big Tobacco had to pay a record-breaking settlement of over $200 billion under the Tobacco Master Settlement Agreement (MSA) of 1998, after four decades of mercilessly lying to the American people.

smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the “disinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests — whether profit, power, or both.

Over the next 7 decades — up to the present day — this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and perhaps most notably: the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by various industries and groups facing similar scientific challenges to their products or ideologies. For instance, the fossil fuel industry used similar tactics to cast doubt on global warming — leading to the phenomenon of climate change denialism. Chemical manufacturers have disputed science on the harmful effects of certain chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve their fantastic profits once science discovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit one single finding of the scientific community when you could cast doubt on the entire process of science itself — as a way of future-proofing any government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient — despite strong and often overwhelming evidence. This is evident in manufactured “debates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse — as just one aspect of the seeming politicization of absolutely everything in modern life and culture.

Strategies for combatting science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking skills among the general public is crucial. This involves not just understanding scientific facts, but also developing an understanding of the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including the discussion of uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.

Read more

Sockpuppets are fake social media accounts used by trolls for deceptive and covert actions, avoiding culpability for abuse, aggression, death threats, doxxing, and other criminal acts against targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are “sockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The strategic use of sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during times of crisis, elections, or — most notably — during its own current war of aggression against its neighbor nation.
  3. The United States: The 2016 US Presidential elections saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.
Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
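
For a flavor of what such detection can involve, here is a minimal sketch of one classic coordination signal: several distinct accounts posting identical text within a short time window. This is an illustrative heuristic only; the platforms' real systems are proprietary and combine many signals.

```python
# A toy detector for one coordination signal used against sockpuppet
# networks: identical text posted by several accounts close together
# in time. (Illustrative only; real systems combine many signals.)
from collections import defaultdict

def flag_coordinated(posts, window_seconds=60, min_accounts=3):
    """posts: list of (account, text, unix_timestamp) tuples."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))

    flagged = set()
    for text, entries in by_text.items():
        entries.sort()  # order each identical-text group by time
        for start_ts, _ in entries:
            # Accounts that posted this exact text inside the window.
            cluster = {acct for ts, acct in entries
                       if start_ts <= ts <= start_ts + window_seconds}
            if len(cluster) >= min_accounts:
                flagged |= cluster
    return flagged
```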

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.

Read more

Deep fakes, a term derived from “deep learning” (a subset of AI) and “fake,” refer to highly realistic, AI-generated digital forgeries of real human beings. These sophisticated imitations can be videos, images, or audio clips where the person appears to say or do things they never actually did.

The core technology behind deep fakes is based on machine learning and neural network algorithms. Two competing AI systems are trained against each other: one generates the fake content, while the other attempts to detect the forgeries. Over time, as the detection system identifies flaws, the generator learns from these mistakes, leading to increasingly convincing fakes.
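
For readers curious what that adversarial loop looks like in code, here is a minimal PyTorch sketch with toy dimensions. Real deepfake models are vastly larger and operate on images or audio rather than flat vectors, but the push-and-pull between generator and discriminator is the same.

```python
# A toy generative adversarial training loop (illustrative dimensions;
# real deepfake systems are far larger and operate on images/audio).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a candidate fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake = G(torch.randn(n, latent_dim))

    # 1) Train the discriminator to separate real from fake.
    d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(n, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the updated discriminator: each
    #    flaw the detector finds becomes a training signal for the faker.
    g_loss = loss_fn(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```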

Deep fakes in politics

As the technology has become more accessible, it’s been used for various purposes, not all of them benign. In the political realm, deep fakes have a potential for significant impact. They’ve been used to create false narratives or manipulate existing footage, making it appear as though a public figure has said or done something controversial or scandalous. This can be particularly damaging in democratic societies, where public opinion heavily influences political outcomes. Conversely, in autocracies, deep fakes can be a tool for propaganda or to discredit opposition figures.

How to identify deep fakes

Identifying deep fakes can be challenging, but there are signs to look out for:

  1. Facial discrepancies: Imperfections in the face-swapping process can result in blurred or fuzzy areas, especially where the face meets the neck and hair. Look for any anomalies in facial expressions or movements that don’t seem natural.
  2. Inconsistent lighting and shadows: AI can struggle to replicate the way light interacts with physical objects. If the lighting or shadows on the face don’t match the surroundings, it could be a sign of manipulation.
  3. Audiovisual mismatches: Often, the audio does not perfectly sync with the video in a deep fake. Watch for delays or mismatches between spoken words and lip movements.
  4. Unusual blinking and breathing patterns: AI can struggle to accurately mimic natural blinking and breathing, leading to unnatural patterns.
  5. Contextual anomalies: Sometimes, the content of the video or the actions of the person can be a giveaway. If it seems out of character or contextually odd, it could be fake.

In democratic societies, the misuse of deep fakes can erode public trust in media, manipulate electoral processes, and increase political polarization. Fake videos can quickly spread disinformation and misinformation, influencing public opinion and voting behavior. Moreover, they can be used to discredit political opponents with false accusations or fabricated scandals.

In autocracies, deep fakes can be a potent tool for state propaganda. Governments can use them to create a false image of stability, prosperity, or unity, or conversely, to produce disinformation campaigns against perceived enemies, both foreign and domestic. This can lead to the suppression of dissent and the manipulation of public perception to support the regime.

Deep fakes with Donald Trump, by Midjourney

Response to deep fakes

The response to the threat posed by deep fakes has been multifaceted. Social media platforms and news organizations are increasingly using AI-based tools to detect and flag deep fakes. There’s also a growing emphasis on digital literacy, teaching the public to critically evaluate the content they consume.

Legal frameworks are evolving to address the malicious use of deep fakes. Some countries are considering legislation that would criminalize the creation and distribution of harmful deep fakes, especially those targeting individuals or designed to interfere in elections.

While deep fakes represent a remarkable technological advancement, they also pose a significant threat to the integrity of information and democratic processes. As this technology evolves, so must our ability to detect and respond to these forgeries. It’s crucial for both individuals and institutions to stay informed and vigilant against the potential abuses of deep fakes, particularly in the political domain. As we continue to navigate the digital age, the balance between leveraging AI for innovation and safeguarding against its misuse remains a key challenge.

Read more

ParadoxBot is an adorable chatbot who will cheerfully inform you about the Dark Arts

Sure, you could use the site search. Or, you could have a bot — try having a conversation with my blog via the following AI chatbot, ParadoxBot.

Ask it about conspiracy theories, or narcissism, or cults, or authoritarianism, or fascism, or disinformation — to name a few. You can also ask it about things like dark money, economics, history, and many topics at the intersection of political psychology.

It doesn’t index what’s on Foundations (yet) but it has ingested this site and you can essentially chat with the site itself via the ChatGPT-like interface below. Enjoy! And if you love it or hate it, find me on BlueSky (as @doctorparadox) or Mastodon and let me know your thoughts.

Tips for using ParadoxBot

  • Follow general good practice regarding prompt engineering.
  • If you don’t get an answer right away, try rephrasing your question. Even omitting or adding one word sometimes produces good results (the sketch below shows why).
  • Try both broad and specific types of queries.
  • Dig deeper into any areas the bot turns up that sound interesting.
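
For the curious: bots like this are typically built with retrieval-augmented generation (RAG). The question is matched against indexed chunks of the site's text, and the best matches are handed to a language model as context. ParadoxBot's actual internals aren't published, so the sketch below is only the generic pattern, but it shows why rewording a question matters: retrieval keys on the words you use.

```python
# A generic sketch of the retrieval step behind "chat with this site"
# bots. (ParadoxBot's real implementation isn't published; the chunks
# below are invented stand-ins for indexed blog passages.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SITE_CHUNKS = [
    "Dark money is political spending whose ultimate donors are hidden.",
    "Cult warning signs include love bombing and information control.",
    "Disinformation campaigns aim to polarize and confuse target groups.",
]

vectorizer = TfidfVectorizer().fit(SITE_CHUNKS)
chunk_vectors = vectorizer.transform(SITE_CHUNKS)

def retrieve(question: str, k: int = 2):
    """Return the k site passages most similar to the question's wording."""
    scores = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [SITE_CHUNKS[i] for i in top]

# These passages would then be passed to a language model as context;
# a one-word change in the question can retrieve different passages.
print(retrieve("what is dark money?"))
```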
Read more

These days the GOP is just 3 cults in a trenchcoat — nevertheless, it’s helpful to understand some of the ideologies and extremist beliefs that folks on the right engage with. Understanding the psychology can help us make predictions about actions, reactions, and other developments in the political landscape.

What is an ideology?

An ideology is a comprehensive set of beliefs, ideas, and values that shape the way individuals or groups perceive the world and interact within it. It serves as a lens through which people interpret social, political, and economic phenomena, guiding their actions and decisions. Ideologies can be as broad as political doctrines like liberalism, conservatism, or socialism, or as specific as belief systems within a particular culture or organization.

Ideologies often manifest in various forms, such as political platforms, religious doctrines, or social movements. They can be explicit, where the principles are clearly outlined, or implicit, subtly influencing behavior without overt expression. Ideologies are not static; they evolve over time, adapting to new information, social changes, or shifts in power dynamics.

In the realm of politics and governance, ideologies play a crucial role. They inform policy decisions, shape public opinion, and influence the behavior of political actors. They can also be divisive, leading to conflict and exclusion of those who do not conform. In the media, ideologies can affect the framing of news and the dissemination of information, subtly shaping public perception.

Right-wing ideologies

Read more

phobia indoctrination, illustrated

Phobia indoctrination is one of the principal ways a charismatic leader will lull potential followers into his thrall, by putting them into a state of perpetual fear and anxiety. They know, either instinctively or through training (or both), that people can easily be induced into a prolonged state of confusion, and that many people in states of confusion act quite irrationally. Abusers, cult leaders, and other controllers use demagoguery and other tricks to hide in plain sight and continue to accrue power while passing themselves off as harmless or extremely patriotic.

These chaos agents use emotional manipulation and other tactics of emotional predators as a tool of control. They whip followers up into a fear frenzy frequently enough to instill a set of phobia-like instinctual reactions to chosen stimuli. In addition to stoking fears of the enemies at the gates, they also inculcate irrational fears of the consequences of questioning their authority — invoking authoritarianism. Any doubts expressed about the leadership or its doctrine are subject to terrifying negative results. Cults use this formula to wield undue influence over followers, and prevent them from questioning or leaving the group.

Phobia indoctrination is a tool of cults

As part of a larger overall program of brainwashing or mind control, cults and destructive organizations use imaginary extremes (going to hell, being possessed by demons, failing miserably at life, race war, Leftist apocalypse, etc.) to shock followers into refusing to examine any evidence whatsoever. A form of unethical hypnosis, phobia indoctrination can now be carried out on a mass scale thanks to the internet and our massive media apparatus. Be sure to be on the lookout for any cult warning signs in groups and messaging all around you.

Sociopaths and other types of emotional predators are taking ample advantage of their head start in time and distance over the slow pace of justice. The wielding of fear as a cudgel in American politics has reached a fever pitch: anti-Critical Race Theory hysteria, anti-vaxxers, anti-government types, anti-science sentiment, and Lost Cause-revival zombie MAGA footsoldiers screeching about the “freedom!!!” they wish the government to provide them for persecuting their enemies are merely the tip of the climate-changing iceberg.

phobia indoctrination, illustrated

Phobia indoctrination tactics

Strategies of phobia indoctrination include Repetition and Conditioning, where fears are built through constant exposure; Misinformation and Propaganda, using false information to paint something as dangerous; Utilizing Existing Fears, exaggerating known fears or anxieties; and Social Pressure and Group Dynamics, leveraging social influences to convince others that irrational fears are common.

Other tactics include Authority and Expert Manipulation, where false credentials are used to lend legitimacy; Emotional Manipulation, appealing directly to emotions; Isolation and Control, where a person’s environment is manipulated; and Media Manipulation, using media to provoke fear.

Related to phobia indoctrination:

Cult Dictionary ↗

We had better get familiar with the lexicon and vocabulary of the coming era, so we can fight the creeping scourge of thought control roiling the land.

Jim Jones toasting his cult members with a cup of cyanide, by Midjourney

Disinformation Dictionary ↗

Disinformation is meant to confuse, throw off, distract, polarize, and otherwise create conflict within and between target populations.

Disinformation, by Midjourney

Cult Warning Signs: How to recognize cultish groups ↗

Recognizing cult warning signs can be vital in identifying and understanding the risk before getting involved with a group who may not have your best interests in mind.

cult warning signs, by Midjourney
Read more

A legal statute requiring persons lobbying on behalf of a foreign government or other foreign entity to register with the U.S. government.

Folks like Mike Flynn and Jared Kushner ran afoul of this law during their time in the US government.

History of FARA

The Foreign Agents Registration Act, often abbreviated as FARA, is a United States law passed in 1938. The purpose of FARA is to ensure that the U.S. government and the people of the United States are informed about the source of information (propaganda) and the identity of people trying to influence U.S. public opinion, policy, and laws on behalf of foreign principals.

The Act requires every person who acts as an agent of foreign principals in a political or quasi-political capacity to make periodic public disclosure of their relationship with the foreign principal. This includes activities, receipts, and disbursements in support of those activities. Disclosure of the required information facilitates evaluation by the government and the American people of the statements and activities of such persons.

The Act is administered and enforced by the FARA Unit of the National Security Division (NSD) of the United States Department of Justice.

FARA does not restrict publishing of materials or viewpoints; rather, it requires agents representing the interests of foreign powers to disclose their relationship with the foreign government and information about related activities and finances.

Originally, FARA was passed in 1938 in response to concerns about German propaganda agents in the United States in the years leading up to World War II, but its usage has evolved over time. The Act has been amended several times, most significantly in 1966 when its scope was narrowed to focus more specifically on agents working in a political context.

Non-compliance with FARA has become a more prominent issue in recent times, with several high-profile investigations and prosecutions related to the Act. The Act received significant media attention during and after the 2016 U.S. Presidential election, when it was invoked in investigations related to foreign interference in the election — particularly Russian election interference.

More on FARA

Learn more about FARA from the Department of Justice.

Read more