
The concept of “prebunking” emerges as a proactive strategy in the fight against disinformation, an ever-present challenge in the digital era where information spreads at unprecedented speed and scale. In essence, prebunking involves the preemptive education of the public about the techniques and potential contents of disinformation campaigns before they encounter them. This method seeks not only to forewarn but also to forearm individuals, making them more resilient to the effects of misleading information.

Understanding disinformation

Disinformation, by definition, is false information that is deliberately spread with the intent to deceive or mislead. It’s a subset of misinformation, which encompasses all false information regardless of intent.

In our current “information age,” the rapid dissemination of information through social media, news outlets, and other digital platforms has amplified the reach and impact of disinformation campaigns. These campaigns can have various motives, including political manipulation, financial gain, or social disruption, and at times all of the above, particularly in the case of information warfare.

The mechanism of prebunking

Prebunking works on the principle of “inoculation theory,” a concept from social psychology whose central metaphor is borrowed from vaccination. Much like a vaccine introduces a weakened form of a virus to stimulate the immune system’s response to it, prebunking introduces individuals to a weakened form of an argument or disinformation tactic, thereby enabling them to recognize and resist such tactics in the future.

The process typically involves several key elements:

  • Exposure to Techniques: Educating people on the common techniques used in disinformation campaigns, such as emotional manipulation, conspiracy theories, fake experts, and misleading statistics.
  • Content Examples: Providing specific examples of disinformation can help individuals recognize similar patterns in future encounters.
  • Critical Thinking: Encouraging critical thinking and healthy skepticism, particularly regarding information sources and their motives. Helping people identify trustworthy media sources and discern credible sources in general.
  • Engagement: Interactive and engaging educational methods, such as games or interactive modules, have been found to be particularly effective in prebunking efforts.

The effectiveness of prebunking

Research into the effectiveness of prebunking is promising. Studies have shown that when individuals are forewarned about specific misleading strategies or the general prevalence of disinformation, they are better able to identify false information and less likely to be influenced by it. Prebunking can also increase resilience against disinformation across various subjects, from health misinformation such as the anti-vaccine movement to political propaganda.

However, the effectiveness of prebunking can vary based on several factors:

  • Timing: For prebunking to be most effective, it needs to occur before exposure to disinformation. Once false beliefs have taken root, they are much harder to correct — due to the backfire effect and other psychological, cognitive, and social factors.
  • Relevance: The prebunking content must be relevant to the audience’s experiences and the types of disinformation they are likely to encounter.
  • Repetition: Like many educational interventions, the effects of prebunking can diminish over time, suggesting that periodic refreshers may be necessary.

Challenges and considerations

While promising, prebunking is not a panacea for the disinformation dilemma. It faces several challenges:

  • Scalability: Effectively deploying prebunking campaigns at scale, particularly in a rapidly changing information environment, is difficult.
  • Targeting: Identifying and reaching the most vulnerable or targeted groups before they encounter disinformation requires sophisticated understanding and resources.
  • Adaptation by Disinformers: As prebunking strategies become more widespread, those who spread disinformation may adapt their tactics to circumvent these defenses.

Moreover, there is the ethical consideration of how to prebunk without inadvertently suppressing legitimate debate or dissent, ensuring that the fight against disinformation does not become a vector for censorship.

The role of technology and media

Given the digital nature of contemporary disinformation campaigns, technology companies and media organizations play a crucial role in prebunking efforts. Algorithms that prioritize transparency, the promotion of factual content, and the demotion of known disinformation sources can aid in prebunking. Media literacy campaigns, undertaken by educational institutions and NGOs, can also equip the public with the tools they need to navigate the information landscape critically.

Prebunking represents a proactive and promising approach to mitigating the effects of disinformation. By educating the public about the tactics used in disinformation campaigns and fostering critical engagement with media, it’s possible to build a more informed and resilient society.

However, the dynamic and complex nature of digital disinformation means that prebunking must be part of a broader strategy that includes technology solutions, regulatory measures, and ongoing research. As we navigate this challenge, the goal remains clear: to cultivate an information ecosystem where truth prevails, and public discourse thrives on accuracy and integrity.


A con artist, also known as a confidence trickster, is someone who deceives others by misrepresenting themselves or lying about their intentions to gain something valuable, often money or personal information. These individuals employ psychological manipulation and emotionally prey on the trust and confidence of their victims.

There are various forms of con artistry, ranging from financial fraud to the spread of disinformation. Each type requires distinct strategies for identification and prevention.

Characteristics of con artists

  1. Charming and Persuasive: Con artists are typically very charismatic. They use their charm to persuade and manipulate others, making their deceit seem believable.
  2. Manipulation of Emotions: They play on emotions to elicit sympathy or create urgency, pushing their targets into making hasty decisions that they might not make under normal circumstances.
  3. Appearing Credible: They often pose as authority figures or experts, sometimes forging documents or creating fake identities to appear legitimate and trustworthy.
  4. Information Gatherers: They are adept at extracting personal information from their victims, either to use directly in fraud or to tailor their schemes more effectively.
  5. Adaptability: Con artists are quick to change tactics if confronted or if their current strategy fails. They are versatile and can shift their stories and methods depending on their target’s responses.

Types of con artists: Disinformation peddlers and financial fraudsters

  1. Disinformation Peddlers: These con artists specialize in the deliberate spread of false or misleading information. They often target vulnerable groups or capitalize on current events to sow confusion and mistrust. Their tactics may include creating fake news websites, using social media to amplify false narratives, or impersonating credible sources to disseminate false information widely.
  2. Financial Fraudsters: These individuals focus on directly or indirectly extracting financial resources from their victims. Common schemes include investment frauds, such as Ponzi schemes and pyramid schemes; advanced-fee scams, where victims are persuaded to pay money upfront for services or benefits that never materialize; and identity theft, where the con artist uses someone else’s personal information for financial gain.

Identifying con artists

  • Too Good to Be True: If an offer or claim sounds too good to be true, it likely is. High returns with no risk, urgent offers, and requests for secrecy are red flags.
  • Request for Personal Information: Be cautious of unsolicited requests for personal or financial information. Legitimate organizations do not typically request sensitive information through insecure channels.
  • Lack of Verification: Check the credibility of the source. Verify the legitimacy of websites, companies, and individuals through independent reviews and official registries.
  • Pressure Tactics: Be wary of any attempt to rush you into a decision. High-pressure tactics are a hallmark of many scams.
  • Unusual Payment Requests: Scammers often ask for payments through unconventional methods, such as wire transfers, gift cards, or cryptocurrencies, which are difficult to trace and recover.

What society can do to stop them

  1. Education and Awareness: Regular public education campaigns can raise awareness about common scams and the importance of skepticism when dealing with unsolicited contacts.
  2. Stronger Regulations: Implementing and enforcing stricter regulations on financial transactions and digital communications can reduce the opportunities for con artists to operate.
  3. Improved Verification Processes: Organizations can adopt more rigorous verification processes to prevent impersonation and reduce the risk of fraud.
  4. Community Vigilance: Encouraging community reporting of suspicious activities and promoting neighborhood watch programs can help catch and deter con artists.
  5. Support for Victims: Providing support and resources for victims of scams can help them recover and reduce the stigma of having been deceived, encouraging more people to come forward and report these crimes.

Con artists are a persistent threat in society, but through a combination of vigilance, education, and regulatory enforcement, we can reduce their impact and protect vulnerable individuals from falling victim to their schemes. Understanding the characteristics and tactics of these fraudsters is the first step in combating their dark, Machiavellian influence.


Cultivation theory is a significant concept in media studies, particularly within the context of psychology and how media influences viewers. Developed by George Gerbner in the 1960s, cultivation theory addresses the long-term effects that television has on the perceptions of the audience about reality. This overview will discuss the origins of the theory, its key components, the psychological mechanisms it suggests, and how it applies to modern media landscapes.

Origins and development

Cultivation theory emerged from broader concerns about the effects of television on viewers over long periods. To study those effects, George Gerbner, along with his colleagues at the Annenberg School for Communication at the University of Pennsylvania, initiated the Cultural Indicators Project in the mid-1960s.

This large-scale research project aimed to study how television content affected viewers’ perceptions of reality. Gerbner’s research focused particularly on the cumulative and overarching impact of television as a medium rather than the effects of specific programs.

Core components of cultivation theory

The central hypothesis of cultivation theory is that those who spend more time watching television are more likely to perceive the real world in ways that reflect the most common and recurrent messages of the television world, compared to those who watch less television. This effect is termed ‘cultivation.’

1. Message System Analysis: This involves the study of content on television to understand the recurring and dominant messages and images presented.

2. Cultivation Analysis: This refers to research that examines the long-term effects of television. The focus is on the viewers’ conceptions of reality and whether these conceptions correlate with the world portrayed on television.

3. Mainstreaming and Resonance: Mainstreaming is the homogenization of viewers’ perceptions as television’s ubiquitous narratives become the dominant source of information and reality. Resonance occurs when viewers’ real-life experiences confirm the mediated reality, intensifying the cultivation effect.

Psychological mechanisms

Cultivation theory suggests several psychological processes that explain how media exposure shapes perceptions:

  • Heuristic Processing: Heavy television viewing encourages heuristic processing, in which viewers rely on mental shortcuts, judging reality by the images and themes most frequently presented in media rather than by systematic evaluation.
  • Social Desirability: Television often portrays certain behaviors and lifestyles as more desirable or acceptable, which can influence viewers to adopt these standards as their own.
  • The Mean World Syndrome: A significant finding from cultivation research is that heavy viewers of television tend to believe that the world is a more dangerous place than it actually is, a phenomenon known as the “mean world syndrome.” This is particularly pronounced in genres rich in violence, like crime dramas and news.

Critiques and modern perspectives

Cultivation theory has faced various critiques and adaptations over the years. Critics argue that the theory underestimates viewer agency and the role of individual differences in media consumption. It is also said to lack specificity regarding how different genres of television might affect viewers differently.

Furthermore, with the advent of digital media, the theory’s focus on television as the sole medium of significant influence has been called into question. Modern adaptations of cultivation theory have begun to consider the effects of internet usage, social media, and platform-based viewing, which also offer repetitive and pervasive content capable of shaping perceptions.

Application to modern media

Today, cultivation theory is still relevant as it can be applied to the broader media landscape, including online platforms where algorithms dictate the content viewers receive repetitively. For example, the way social media can affect users’ perceptions of body image, social norms, or even political ideologies often mirrors the longstanding concepts of cultivation theory.

In conclusion, cultivation theory provides a critical framework for understanding the psychological impacts of media on public perceptions and individual worldviews. While originally developed in the context of television, its core principles are increasingly applicable to various forms of media, offering valuable insights into the complex interplay between media content, psychological processes, and the cultivation of perception in the digital age.


Fact-checking is a critical process used in journalism to verify the factual accuracy of information before it’s published or broadcast. This practice is key to maintaining the credibility and ethical standards of journalism and media as reliable information sources. It involves checking statements, claims, and data in various media forms for accuracy and context.

Ethical standards in fact-checking

The ethical backbone of fact-checking lies in journalistic integrity, emphasizing accuracy, fairness, and impartiality. Accuracy ensures information is cross-checked with credible sources. Fairness mandates balanced presentation, and impartiality requires fact-checkers to remain as unbiased in their evaluations as humanly possible.

To evaluate a media source’s credibility, look for a masthead, mission statement, about page, or ethics statement that explains the publication’s approach to journalism. Without a stated commitment to journalistic ethics and standards, it’s entirely possible the website or outlet is publishing opinion and/or unverified claims.

Fact-checking in the U.S.: A historical perspective

Fact-checking in the U.S. has evolved alongside journalism. The rise of investigative journalism in the early 20th century highlighted the need for thorough research and factual accuracy. However, recent developments in digital and social media have introduced significant challenges.

Challenges from disinformation and propaganda

The digital era has seen an explosion of disinformation and propaganda, particularly on social media. ‘Fake news’, a term now synonymous with fabricated or distorted stories, poses a significant hurdle for fact-checkers. The difficulty lies not only in the volume of information but also in the sophisticated methods used to spread falsehoods, such as deepfakes and doctored media.

Bias and trust issues in fact-checking

The subjectivity of fact-checkers has been scrutinized, with some suggesting that personal or organizational biases might influence their work. This perception has led to a trust deficit in certain circles, where fact-checking itself is viewed as potentially politically or ideologically motivated.

Despite challenges, fact-checking remains crucial for journalism. Future efforts may involve leveraging technology like AI for assistance, though human judgment is still essential. The ongoing battle against disinformation will require innovation, collaboration with tech platforms, transparency in the fact-checking process, and public education in media literacy.

Fact-checking stands as a vital element of journalistic integrity and a bulwark against disinformation and propaganda. In the U.S., and globally, the commitment to factual accuracy is fundamental for a functioning democracy and an informed society. Upholding these standards helps protect the credibility of the media and trusted authorities, and supports the fundamental role of journalism in maintaining an informed public and a healthy democracy.


The concept of cherry-picking refers to the practice of selectively choosing data or facts that support one’s argument while ignoring those that may contradict it. This method is widely recognized not just as a logical fallacy but also as a technique commonly employed in the dissemination of disinformation. Cherry-picking can significantly impact the way information is understood and can influence political ideology, public opinion, and policy making.

Cherry-picking and disinformation

Disinformation, broadly defined, is false or misleading information that is spread deliberately, often to deceive or mislead the public. Cherry-picking plays a crucial role in the creation and propagation of disinformation.

By focusing only on certain pieces of evidence while excluding others, individuals or entities can create a skewed or entirely false narrative. This manipulation of facts is particularly effective because the information presented can be entirely true in isolation, making the deceit harder to detect. In the realm of disinformation, cherry-picking is a tool to shape perceptions, create false equivalencies, and undermine credible sources of information.
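The mechanism can be shown with a toy arithmetic example (the figures below are entirely hypothetical, chosen only to illustrate the fallacy). Every number the cherry-picker cites is individually true, yet the subset tells the opposite story from the full dataset:

```python
# Hypothetical monthly change figures for some metric (illustrative only).
changes = [-3, -2, -4, 1, 2, -5, -1, 3, -2, -4]

overall_mean = sum(changes) / len(changes)

# A cherry-picker reports only the months that support a "growth" story.
favorable = [x for x in changes if x > 0]
cherry_picked_mean = sum(favorable) / len(favorable)

print(f"All data:      mean change = {overall_mean:+.1f}")   # -1.5: decline
print(f"Cherry-picked: mean change = {cherry_picked_mean:+.1f}")  # +2.0: "growth"
```

The full series shows a clear decline, while the three selected months alone suggest growth, which is why omission, not fabrication, makes this form of deceit so hard to detect.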

The role of cherry-picking in political ideology

Political ideologies are comprehensive sets of ethical ideals, principles, doctrines, myths, or symbols of a social movement, institution, class, or large group that explain how society should work. Cherry-picking can significantly influence political ideologies by providing a biased view of facts that aligns with specific beliefs or policies.

This biased information can reinforce existing beliefs, creating echo chambers where individuals are exposed only to viewpoints similar to their own. The practice can deepen political divisions, making it more challenging for individuals with differing viewpoints to find common ground or engage in constructive dialogue.

Counteracting cherry-picking

Identifying and countering cherry-picking requires a critical approach to information consumption and sharing. Here are several strategies:

  1. Diversify Information Sources: One of the most effective ways to recognize cherry-picking is by consuming information from a wide range of sources. This diversity of trustworthy sources helps in comparing different viewpoints and identifying when certain facts are being omitted or overly emphasized.
  2. Fact-Checking and Research: Before accepting or sharing information, it’s essential to verify the facts. Use reputable fact-checking organizations and consult multiple sources to get a fuller picture of the issue at hand.
  3. Critical Thinking: Develop the habit of critically assessing the information you come across. Ask yourself whether the evidence supports the conclusion, what might be missing, and whether the sources are credible.
  4. Educate About Logical Fallacies: Understanding and educating others about logical fallacies, like cherry-picking, can help people recognize when they’re being manipulated. This knowledge can foster healthier public discourse and empower individuals to demand more from their information sources.
  5. Promote Media Literacy: Advocating for media literacy education can equip people with the skills needed to critically evaluate information sources, understand media messages, and recognize bias and manipulation, including cherry-picking.
  6. Encourage Open Dialogue: Encouraging open, respectful dialogue between individuals with differing viewpoints can help combat the effects of cherry-picking. By engaging in conversations that consider multiple perspectives, individuals can bridge the gap between divergent ideologies and find common ground.
  7. Support Transparent Reporting: Advocating for and supporting media outlets that prioritize transparency, accountability, and comprehensive reporting can help reduce the impact of cherry-picking. Encourage media consumers to support organizations that make their sources and methodologies clear.

Cherry-picking is a powerful tool in the dissemination of disinformation and in shaping political ideologies. Its ability to subtly manipulate perceptions makes it a significant challenge to open, informed public discourse.

By promoting critical thinking, media literacy, and the consumption of a diverse range of information, individuals can become more adept at identifying and countering cherry-picked information. The fight against disinformation and the promotion of a well-informed public require vigilance, education, and a commitment to truth and transparency.


Stochastic terrorism is a term that has emerged in the lexicon of political and social analysis to describe a method of inciting violence indirectly through the use of mass communication. This concept is predicated on the principle that while not everyone in an audience will act on violent rhetoric, a small percentage might.

The term “stochastic” refers to a process that is randomly determined; it implies that the specific outcomes are unpredictable, yet the overall distribution of these outcomes follows a pattern that can be statistically analyzed. In the context of stochastic terrorism, it means that while it is uncertain who will act on incendiary messages and violent political rhetoric, it is almost certain that someone will.
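The “almost certain that someone will” claim follows from basic probability. A minimal sketch (the per-person probability and audience size are illustrative assumptions, not measured values):

```python
# If each listener independently acts with tiny probability p, the chance
# that at least one of n listeners acts is 1 - (1 - p)^n.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1e-6            # assumed chance any single listener acts (hypothetical)
audience = 10_000_000  # assumed reach of the message (hypothetical)

print(f"P(a given individual acts)   = {p}")
print(f"P(at least one listener acts) = {p_at_least_one(p, audience):.5f}")
```

Even with a one-in-a-million individual probability, a ten-million-person audience makes the probability of at least one act exceed 99.99 percent, while leaving the identity of the actor unpredictable.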

The nature of stochastic terrorism

Stochastic terrorism involves the dissemination of public statements, whether through speeches, social media, or traditional media, that incite violence. The individuals or entities spreading such rhetoric may not directly call for political violence. Instead, they create an atmosphere charged with tension and hostility, suggesting that action must be taken against a perceived threat or enemy. This indirect incitement provides plausible deniability, as those who broadcast the messages can claim they never explicitly advocated for violence.

Prominent stochastic terrorism examples

The following are a few notable examples of stochastic terrorism:

  1. The Oklahoma City Bombing (1995): Timothy McVeigh, influenced by extremist anti-government rhetoric, the 1992 Ruby Ridge standoff, and the 1993 siege at Waco, Texas, detonated a truck bomb outside the Alfred P. Murrah Federal Building, killing 168 people. This act was fueled by ideologies that demonized the federal government, highlighting how extremism and extremist propaganda can inspire individuals to commit acts of terror.
  2. The Oslo and Utøya Attacks (2011): Anders Behring Breivik, driven by anti-Muslim and anti-immigrant beliefs, bombed government buildings in Oslo, Norway, then shot and killed 69 people at a youth camp on the island of Utøya. Breivik’s manifesto cited many sources that painted Islam and multiculturalism as existential threats to Europe, showing the deadly impact of extremist online echo chambers and the pathology of right-wing ideologies such as the Great Replacement conspiracy theory.
  3. The Pittsburgh Synagogue Shooting (2018): Robert Bowers, influenced by white supremacist ideologies and conspiracy theories about migrant caravans, killed 11 worshippers in a synagogue. His actions were preceded by social media posts that echoed hate speech and conspiracy theories rampant in certain online communities, demonstrating the lethal consequences of unmoderated hateful rhetoric.
  4. The El Paso Shooting (2019): Patrick Crusius targeted a Walmart in El Paso, Texas, killing 23 people, motivated by anti-immigrant sentiment and rhetoric about a “Hispanic invasion” of Texas. His manifesto mirrored language used in certain media and political discourse, underscoring the danger of using dehumanizing language against minority groups.
  5. Christchurch Mosque Shootings (2019): Brenton Tarrant live-streamed his attack on two mosques in Christchurch, New Zealand, killing 51 people, influenced by white supremacist beliefs and online forums that amplified Islamophobic rhetoric. The attacker’s manifesto and online activity were steeped in extremist content, illustrating the role of internet subcultures in radicalizing individuals.

Stochastic terrorism in right-wing politics in the US

In the United States, the concept of stochastic terrorism has become increasingly relevant in analyzing the tactics employed by certain right-wing entities and individuals. While the phenomenon is not exclusive to any single political spectrum, recent years have seen notable instances where right-wing rhetoric has been linked to acts of violence.

The January 6, 2021, attack on the U.S. Capitol serves as a stark example of stochastic terrorism. The event was preceded by months of unfounded claims of electoral fraud and calls to “stop the steal,” amplified by right-wing media outlets and figures, including then-President Trump, who had extraordinary motivation to portray his 2020 election loss as a victory in order to stay in power. This rhetoric created a charged environment, leading some individuals to believe that violent action was a justified response to defend democracy.

The role of media and technology

Right-wing media platforms have played a significant role in amplifying messages that could potentially incite stochastic terrorism. Through the strategic use of incendiary language, disinformation, misinformation, and conspiracy theories, these platforms have the power to reach vast audiences and influence susceptible individuals to commit acts of violence.

The advent of social media has further complicated the landscape, enabling the rapid spread of extremist rhetoric. The decentralized nature of these platforms allows for the creation of echo chambers where inflammatory messages are not only amplified but also go unchallenged, increasing the risk of radicalization.

Challenges and implications

Stochastic terrorism presents significant legal and societal challenges. The indirect nature of incitement complicates efforts to hold individuals accountable for the violence that their rhetoric may inspire. Moreover, the phenomenon raises critical questions about the balance between free speech and the prevention of violence, challenging societies to find ways to protect democratic values while preventing harm.

Moving forward

Addressing stochastic terrorism requires a multifaceted approach. This includes promoting responsible speech among public figures, enhancing critical thinking and media literacy among the public, and developing legal and regulatory frameworks that can effectively address the unique challenges posed by this form of terrorism. Ultimately, combating stochastic terrorism is not just about preventing violence; it’s about preserving the integrity of democratic societies and ensuring that public discourse does not become a catalyst for harm.

Understanding and mitigating the effects of stochastic terrorism is crucial in today’s increasingly polarized world. By recognizing the patterns and mechanisms through which violence is indirectly incited, societies can work towards more cohesive and peaceful discourse, ensuring that democracy is protected from the forces that seek to undermine it through fear and division.


Microtargeting is a marketing and political strategy that leverages data analytics to deliver customized messages to specific groups within a larger population. This approach has become increasingly prevalent in the realms of digital media and advertising, and its influence on political campaigns has grown significantly.

Understanding microtargeting

Microtargeting begins with the collection and analysis of vast amounts of data about individuals. This data can include demographics (age, gender, income), psychographics (interests, habits, values), and behaviors (purchase history, online activity). By analyzing this data, organizations can identify small, specific groups of people who share common characteristics or interests. The next step involves crafting tailored messages that resonate with these groups, significantly increasing the likelihood of engagement compared to broad, one-size-fits-all communications.
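Stripped of the analytics machinery, the segmentation step amounts to filtering individual records on shared attributes. A minimal sketch (the field names and records below are hypothetical, not drawn from any real dataset):

```python
# Hypothetical user records mixing demographics, psychographics, and behavior.
users = [
    {"age": 34, "interests": {"climate", "cycling"},  "votes_often": False},
    {"age": 51, "interests": {"tax policy"},          "votes_often": True},
    {"age": 29, "interests": {"climate"},             "votes_often": True},
    {"age": 42, "interests": {"tax policy", "golf"},  "votes_often": False},
]

# Segment: users interested in climate receive the environment-themed message.
climate_segment = [u for u in users if "climate" in u["interests"]]

# Sub-segment for mobilization: sympathetic to the issue but unlikely to vote.
mobilize = [u for u in climate_segment if not u["votes_often"]]

print(len(climate_segment), len(mobilize))  # prints: 2 1
```

Real systems replace these hand-written filters with statistical models over millions of records, but the underlying logic, intersecting attributes to isolate a narrow audience for a tailored message, is the same.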

Microtargeting and digital media

Digital media platforms, with their treasure troves of user data, have become the primary arenas for microtargeting. Social media networks, search engines, and websites collect extensive information on user behavior, preferences, and interactions. This data enables advertisers and organizations to identify and segment their audiences with remarkable precision.


Digital platforms offer sophisticated tools that allow for the delivery of customized content directly to individuals or narrowly defined groups, ensuring that the message is relevant and appealing to each recipient. The interactive nature of digital media also provides immediate feedback, allowing for the refinement of targeting strategies in real time.

Application in advertising

In the advertising domain, microtargeting has revolutionized how brands connect with consumers. Rather than casting a wide net with generic advertisements, companies can now send personalized messages that speak directly to the needs and desires of their target audience. This approach can improve the effectiveness of advertising campaigns — but comes with a tradeoff in terms of user data privacy.

Microtargeted ads can appear in social media feeds, as search engine results, within mobile apps, or as personalized email campaigns, making them a versatile tool for marketers. Thanks to growing awareness of the data privacy implications, including the passage of regulations such as the GDPR, the CCPA, and the DMA, users are beginning to gain more control over what data is collected about them and how it is used.

Expanding role in political campaigns

The impact of microtargeting reaches its zenith in the realm of political campaigns. Political parties and candidates use microtargeting to understand voter preferences, concerns, and motivations at an unprecedented level of detail. This intelligence allows campaigns to tailor their communications, focusing on issues that resonate with specific voter segments.

For example, a campaign might send messages about environmental policies to voters identified as being concerned about climate change, while emphasizing tax reform to those worried about economic issues. A campaign might also target swing voters whose characteristics resemble those of its party’s more consistent voting base, hoping to influence their decision to vote for the “right” candidate.

Microtargeting in politics also extends to voter mobilization efforts. Campaigns can identify individuals who are supportive but historically less likely to vote and target them with messages designed to motivate them to get to the polls. Similarly, microtargeting can help in shaping campaign strategies, determining where to hold rallies, whom to engage for endorsements, and what issues to highlight in speeches.

Ethical considerations and challenges

The rise of microtargeting raises significant ethical questions and challenges. Concerns about privacy, data protection, and the potential for manipulation are at the forefront. The use of personal information for targeting purposes has sparked debates on the need for stricter regulation and transparency. In politics, there’s apprehension that microtargeting might deepen societal divisions by enabling campaigns to exploit sensitive issues or disseminate misleading information — or even disinformation — to susceptible groups.

Furthermore, the effectiveness of microtargeting in influencing consumer behavior and voter decisions has led to calls for more responsible use of data analytics. Critics argue for the development of ethical guidelines that balance the benefits of personalized communication with the imperative to protect individual privacy and maintain democratic integrity.

Microtargeting represents a significant evolution in the way organizations communicate with individuals, driven by advances in data analytics and digital technology. Its application across advertising and, more notably, political campaigns, has demonstrated its power to influence behavior and decision-making.

However, as microtargeting continues to evolve, it will be crucial for society to address the ethical and regulatory challenges it presents. Ensuring transparency, protecting privacy, and promoting responsible use will be essential in harnessing the benefits of microtargeting while mitigating its potential risks. As we move forward, the dialogue between technology, ethics, and regulation will shape the future of microtargeting in our increasingly digital world.


The backfire effect is a cognitive phenomenon that occurs when individuals are presented with information that contradicts their existing beliefs, leading them not only to reject the challenging information but also to further entrench themselves in their original beliefs.

This effect is counterintuitive, as one might expect that presenting factual information would correct misconceptions. However, due to various psychological mechanisms, the opposite can occur, complicating efforts to counter misinformation, disinformation, and the spread of conspiracy theories.

Origin and mechanism

The term “backfire effect” was popularized by researchers Brendan Nyhan and Jason Reifler, who in 2010 conducted studies demonstrating that corrections to false political information could actually deepen an individual’s commitment to their initial misconception. This effect is thought to stem from a combination of cognitive dissonance (the discomfort experienced when holding two conflicting beliefs) and identity-protective cognition (wherein individuals process information in a way that protects their sense of identity and group belonging).

Relation to media, disinformation, echo chambers, and media bubbles

In the context of media and disinformation, the backfire effect is particularly relevant. The proliferation of digital media platforms has made it easier than ever for individuals to encounter information that contradicts their beliefs. Paradoxically, it has also made it easier for them to insulate themselves in echo chambers and media bubbles: environments where their existing beliefs are constantly reinforced and rarely challenged.

Echo chambers refer to situations where individuals are exposed only to opinions and information that reinforce their existing beliefs, limiting their exposure to diverse perspectives. Media bubbles are similar, often facilitated by algorithms on social media platforms that curate content to match users’ interests and past behaviors, inadvertently reinforcing their existing beliefs and psychological biases.

Disinformation campaigns can exploit these dynamics by deliberately spreading misleading or false information, knowing that it is likely to be uncritically accepted and amplified within certain echo chambers or media bubbles. This can exacerbate the backfire effect, as attempts to correct the misinformation can lead to individuals further entrenching themselves in the false beliefs, especially if those beliefs are tied to their identity or worldview.

How the backfire effect happens

The backfire effect happens through a few key psychological processes:

  1. Cognitive Dissonance: When confronted with evidence that contradicts their beliefs, individuals experience discomfort. To alleviate this discomfort, they often reject the new information in favor of their pre-existing beliefs.
  2. Confirmation Bias: Individuals tend to favor information that confirms their existing beliefs and disregard information that contradicts them. This bias can lead them to misinterpret or dismiss corrective information.
  3. Identity Defense: For many, beliefs are tied to their identity and social groups. Challenging these beliefs can feel like a personal attack, leading individuals to double down on their beliefs as a form of identity defense.

Prevention and mitigation

Preventing the backfire effect and its impact on public discourse and belief systems requires a multifaceted approach:

  1. Promote Media Literacy: Educating the public on how to critically evaluate sources and understand the mechanisms behind the spread of misinformation can empower individuals to think critically and assess the information they encounter.
  2. Encourage Exposure to Diverse Viewpoints: Breaking out of media bubbles and echo chambers by intentionally seeking out and engaging with a variety of perspectives can reduce the likelihood of the backfire effect by making conflicting information less threatening and more normal.
  3. Emphasize Shared Values: Framing challenging information in the context of shared values or goals can make it less threatening to an individual’s identity, reducing the defensive reaction.
  4. Use Fact-Checking and Corrections Carefully: Presenting corrections in a way that is non-confrontational and, when possible, aligns with the individual’s worldview or values can make the correction more acceptable. Visual aids and narratives that resonate with the individual’s experiences or beliefs can also be more effective than plain factual corrections.
  5. Foster Open Dialogue: Encouraging open, respectful conversations about contentious issues can help to humanize opposing viewpoints and reduce the instinctive defensive reactions to conflicting information.

The backfire effect presents a significant challenge in the fight against misinformation and disinformation, particularly in the context of digital media. Understanding the psychological underpinnings of this effect is crucial for developing strategies to promote a more informed and less polarized public discourse. By fostering critical thinking, encouraging exposure to diverse viewpoints, and promoting respectful dialogue, it may be possible to mitigate the impact of the backfire effect and create a healthier information ecosystem.


The “wallpaper effect” is a phenomenon in media, propaganda, and disinformation where individuals become influenced or even indoctrinated by being continuously exposed to a particular set of ideas, perspectives, or ideologies. This effect is akin to wallpaper in a room, which, though initially noticeable, becomes part of the unnoticed background over time.

The wallpaper effect plays a significant role in shaping public opinion and individual beliefs, often without the conscious awareness of the individuals affected.

Origins and mechanisms

The term “wallpaper effect” stems from the idea that constant exposure to a specific type of media or messaging can subconsciously influence an individual’s perception and beliefs, similar to how wallpaper in a room becomes a subtle but constant presence. This effect is potentiated by the human tendency to seek information that aligns with existing beliefs, known as confirmation bias. It leads to a situation where diverse viewpoints are overlooked, and a singular perspective dominates an individual’s information landscape.

The wallpaper effect, by DALL-E 3

Media and information bubbles

In the context of media, the wallpaper effect is exacerbated by the formation of information bubbles or echo chambers. These are environments where a person is exposed only to opinions and information that reinforce their existing beliefs.

The rise of digital media and personalized content algorithms has intensified this effect, as users often receive news and information tailored to their preferences, further entrenching their existing viewpoints. Even more insidiously, social media platforms tend to earn higher profits when they fill users’ feeds with ideological perspectives they already agree with. Even more profitable is the process of tilting them towards more extreme versions of those beliefs — a practice that in other contexts we call “radicalization.”

Role in propaganda and disinformation

The wallpaper effect is a critical tool in propaganda and disinformation campaigns. By consistently presenting a specific narrative or viewpoint, these campaigns can subtly alter the perceptions and beliefs of the target audience. Over time, the repeated exposure to these biased or false narratives becomes a backdrop to the individual’s understanding of events, issues, or groups, often leading to misconceptions or unwarranted biases.

Psychological impact

The psychological impact of the wallpaper effect is profound. It can lead to a narrowing of perspective, where individuals become less open to new information or alternative viewpoints. This effect can foster polarized communities and hyperpartisan politics, where dialogue and understanding between differing viewpoints become increasingly difficult.

Case studies and examples

Historically, authoritarian regimes have used the wallpaper effect to control public opinion and suppress dissent. By monopolizing the media landscape and continuously broadcasting their propaganda, these regimes effectively shaped the public’s perception of reality.

In contemporary times, this effect is also seen in democracies, where partisan news outlets or social media algorithms create a similar, though more fragmented, landscape of information bubbles.

Counteracting the wallpaper effect

Counteracting the wallpaper effect involves a multifaceted approach. Media literacy education is crucial, as it empowers individuals to critically analyze and understand the sources and content of information they consume.

Encouraging exposure to a wide range of viewpoints and promoting critical thinking skills are also essential strategies. Additionally, reforms in digital media algorithms to promote diverse viewpoints and reduce the creation of echo chambers can help mitigate this effect.

Implications for democracy and society

The wallpaper effect has significant implications for democracy and society. It can lead to a polarized public, where consensus and compromise become challenging to achieve. The narrowing of perspective and entrenchment of beliefs can undermine democratic discourse, leading to increased societal divisions and decreased trust in media and institutions.

The wallpaper effect is a critical phenomenon that shapes public opinion and belief systems. Its influence is subtle yet profound, as constant exposure to a specific set of ideas can subconsciously mold an individual’s worldview. Understanding and addressing this effect is essential in promoting a healthy, informed, and open society. Efforts to enhance media literacy, promote diverse viewpoints, and reform digital media practices are key to mitigating the wallpaper effect and fostering a more informed and less polarized public.


Election denialism, the refusal to accept credible election outcomes, has significantly impacted U.S. history, especially in recent years. This phenomenon is not entirely new; election denial has roots that stretch back through various periods of American history. However, its prevalence and intensity have surged in the contemporary digital and political landscape, influencing public trust, political discourse, and the very fabric of democracy.

Historical context

Historically, disputes over election outcomes are as old as the U.S. electoral system itself. For instance, the fiercely contested 1800 election between Thomas Jefferson and John Adams resulted in a constitutional amendment (the 12th Amendment) to prevent similar confusion in the future. The 1876 election between Rutherford B. Hayes and Samuel J. Tilden was resolved through the Compromise of 1877, which effectively ended Reconstruction and had profound effects on the Southern United States.

Yet these instances, while contentious, were resolved within the framework of existing legal and political mechanisms, without denying the legitimacy of the electoral process itself. Over time, claims of election fraud would come to be levied against the electoral and political system itself — with dangerous implications for the peaceful transfer of power upon which democracy rests.

Voting box in an election, by Midjourney

The 21st century and digital influence

Fast forward to the 21st century, and election denialism has taken on new dimensions, fueled by the rapid dissemination of disinformation (and misinformation) through digital media and a polarized political climate. The 2000 Presidential election, with its razor-thin margins and weeks of legal battles over Florida’s vote count, tested the country’s faith in the electoral process.

Although the Supreme Court’s decision in Bush v. Gore was deeply controversial, Al Gore’s concession helped to maintain the American tradition of peaceful transitions of power.

The 2020 Election: A flashpoint

The 2020 election, marked by the COVID-19 pandemic and an unprecedented number of mail-in ballots, became a flashpoint for election denialism. Claims of widespread voter fraud and electoral malfeasance were propagated at the highest levels of government, despite a lack of evidence substantiated by multiple recounts, audits, and legal proceedings across several states.

President Trump’s refusal to concede and the storming of the U.S. Capitol on January 6, 2021, marked a watershed moment in U.S. history, in which election denialism moved from the fringes to the center of political discourse, challenging the norms of democratic transition. Widely referred to as The Big Lie, the baseless claims of election fraud that persist on the right wing to this day are themselves considered a form of election fraud by justice officials, legal analysts, and a host of concerned citizens worried about ongoing attempts to overthrow democracy in the United States.

Implications, public trust, and voter suppression

The implications of this recent surge in election denialism are far-reaching. It has eroded public trust in the electoral system, with polls indicating a significant portion of the American populace doubting the legitimacy of election results. This skepticism is not limited to the national level but has trickled down to local elections, with election officials facing threats and harassment. The spread of misinformation, propaganda, and conspiracy theories about electoral processes and outcomes has become a tool for political mobilization, often exacerbating divisions within American society.

Moreover, election denialism has prompted legislative responses at the state level, with numerous bills introduced to restrict voting access in the name of election security. These measures have sparked debates about voter suppression and the balance between securing elections and ensuring broad electoral participation. The challenge lies in addressing legitimate concerns about election integrity while avoiding the disenfranchisement of eligible voters.

Calls for reform and strengthening democracy

In response to these challenges, there have been calls for reforms to strengthen the resilience of the U.S. electoral system. These include measures to enhance the security and transparency of the voting process, improve the accuracy of voter rolls, and counter misinformation about elections. There’s also a growing emphasis on civic education to foster a more informed electorate capable of critically evaluating electoral information.

The rise of election denialism in recent years highlights the fragility of democratic norms and the crucial role of trust in the electoral process. While disputes over election outcomes are not new, the scale and impact of recent episodes pose unique challenges to American democracy. Addressing these challenges requires a multifaceted approach, including legal, educational, and technological interventions, to reinforce the foundations of democratic governance and ensure that the will of the people is accurately and fairly represented.


A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where much of our information comes from the internet and online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content – be it news, search results, or social media posts – based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
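The feedback loop that produces a filter bubble can be sketched in a few lines: rank content by how well it matches a user’s past engagement, then fold each new click back into the profile. This is a deliberately naive illustration, not any platform’s actual algorithm; the topic labels and engagement counts are invented.

```python
# Minimal sketch of a filter-bubble feedback loop: items are ranked by how
# often the user has engaged with their topic, and each click reinforces
# the profile. Topic labels and counts are invented for illustration.
from collections import Counter

def rank(items, profile):
    """Order items by the user's past engagement with each item's topic."""
    return sorted(items, key=lambda item: profile[item["topic"]], reverse=True)

profile = Counter({"politics_left": 3, "sports": 1})  # past engagement counts
feed = [
    {"title": "Op-ed you'll agree with", "topic": "politics_left"},
    {"title": "Opposing viewpoint", "topic": "politics_right"},
    {"title": "Match recap", "topic": "sports"},
]

for _ in range(3):                 # each session...
    top = rank(feed, profile)[0]   # ...surfaces the best-matching item,
    profile[top["topic"]] += 1     # and the click reinforces the profile

# After a few sessions the profile is even more lopsided, so dissenting
# items sink further: the bubble tightens without anyone intending it to.
print(profile.most_common())
```

Note that the “opposing viewpoint” item never surfaces: its engagement count starts at zero and, because it is never shown, can never grow. That self-reinforcing dynamic is the mechanism the paragraph above describes.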

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.


The concept of a “honeypot” in the realms of cybersecurity and information warfare is a fascinating and complex one, straddling the line between deception and defense. At its core, a honeypot is a security mechanism designed to mimic systems, data, or resources to attract and detect unauthorized users or attackers, essentially acting as digital bait. By engaging attackers, honeypots serve multiple purposes: they can distract adversaries from more valuable targets, gather intelligence on attack methods, and help in enhancing security measures.

Origins and Usage

The use of honeypots dates back to the early days of computer networks, evolving significantly with the internet’s expansion. Initially, they were simple traps set to detect anyone probing a network. However, as cyber threats grew more sophisticated, so did honeypots, transforming into complex systems designed to emulate entire networks, applications, or databases to lure in cybercriminals.

A honeypot illustration with a circuit board beset by a bee, by Midjourney

Honeypots are used by a variety of entities, including corporate IT departments, cybersecurity firms, government agencies, and even individuals passionate about cybersecurity. Their versatility means they can be deployed in almost any context where digital security is a concern, from protecting corporate data to safeguarding national security.

Types and purposes

There are several types of honeypots, ranging from low-interaction honeypots, which simulate only the services and applications attackers might find interesting, to high-interaction honeypots, which are complex and fully functional systems designed to engage attackers more deeply. The type chosen depends on the specific goals of the deployment, whether it’s to gather intelligence, study attack patterns, or improve defensive strategies.
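At the low-interaction end of the spectrum, a honeypot can be as simple as a listener on an unused port that records whoever connects. The sketch below illustrates the idea under invented assumptions: the port, log format, and helper names are all hypothetical, and a real deployment would emulate a service banner, rate-limit connections, and ship its logs to a separate collector.

```python
# Hypothetical sketch of a low-interaction honeypot: a bare TCP listener
# that pretends to be a service and logs each probe. Port choice and log
# format are invented for illustration.
import socket
from datetime import datetime, timezone

def log_entry(peer_ip: str, peer_port: int, payload: bytes) -> str:
    """Format one observed probe as a single log line."""
    ts = datetime.now(timezone.utc).isoformat()
    return f"{ts} probe from {peer_ip}:{peer_port} sent {payload[:64]!r}"

def serve(host: str = "0.0.0.0", port: int = 2222) -> None:
    """Accept connections forever, logging each probe and closing it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, (ip, p) = srv.accept()
            with conn:
                conn.settimeout(5)
                try:
                    data = conn.recv(1024)  # capture the probe's first bytes
                except socket.timeout:
                    data = b""
                print(log_entry(ip, p, data))

# Calling serve() blocks forever; in practice it would run under a
# supervisor, with logs forwarded to a separate analysis host.
```

Because the port hosts no legitimate service, every connection is by definition suspicious, which is what makes even this trivial trap useful for detection.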

In the context of information warfare, honeypots serve as a tool for deception and intelligence gathering. They can be used to mislead adversaries about the capabilities or intentions of a state or organization, capture malware samples, and even identify vulnerabilities in the attacker’s strategies. By analyzing the interactions attackers have with these traps, defenders can gain insights into their tactics, techniques, and procedures (TTPs), enabling them to better anticipate and mitigate future threats.

Historical effects

Historically, honeypots have had significant impacts on both cybersecurity and information warfare. They’ve led to the discovery of new malware strains, helped dismantle botnets, and provided critical intelligence about state-sponsored cyber operations. For example, honeypots have been instrumental in tracking the activities of sophisticated hacking groups, leading to a deeper understanding of their targets and methods, which, in turn, has informed national security strategies and cybersecurity policies.

One notable example is the GhostNet investigation, which uncovered a significant cyber espionage network targeting diplomatic and governmental institutions worldwide. Honeypots played a key role in identifying the malware and command-and-control servers used in these attacks, highlighting the effectiveness of these tools in uncovering covert operations.

Honeypot hackers and cybercriminals

Ethical and practical considerations

While the benefits of honeypots are clear, their deployment is not without ethical and practical considerations. There’s a fine line between deception for defense and entrapment, raising questions about the legality and morality of certain honeypot operations, especially in international contexts where laws and norms may vary widely.

Moreover, the effectiveness of a honeypot depends on its believability and the skill with which it’s deployed and monitored. Poorly configured honeypots might not only fail to attract attackers but could also become liabilities, offering real vulnerabilities to be exploited.

Cyber attackers and defenders

Honeypots are a critical component of the cybersecurity and information warfare landscapes, providing valuable insights into attacker behaviors and tactics. They reflect the ongoing cat-and-mouse game between cyber attackers and defenders, evolving in response to the increasing sophistication of threats. As digital technologies continue to permeate all aspects of life, the strategic deployment of honeypots will remain a vital tactic in the arsenal of those looking to protect digital assets and information. Their historical impacts demonstrate their value, and ongoing advancements in technology promise even greater potential in understanding and combating cyber threats.

By serving as a mirror to the tactics and techniques of adversaries, honeypots help illuminate the shadowy world of cyber warfare, making them indispensable tools for anyone committed to safeguarding information in an increasingly interconnected world.


The term “hoax” is derived from “hocus,” a term that has been in use since the late 18th century. It originally referred to a trick or deception, often of a playful or harmless nature. The essence of a hoax was its capacity to deceive, typically for entertainment or to prove a point without malicious intent. Over time, the scope and implications of a hoax have broadened significantly. What was once a term denoting jest or trickery has morphed into a label for deliberate falsehoods intended to mislead or manipulate public perception.

From playful deception to malicious misinformation

As society entered the age of mass communication, the potential reach and impact of hoaxes expanded dramatically. The advent of newspapers, radio, television, and eventually the internet and social media platforms transformed the way information (and misinformation) circulated. Hoaxes began to be used not just for amusement but for more nefarious purposes, including political manipulation, financial fraud, and social engineering. The line between a harmless prank and damaging disinformation and misinformation became increasingly blurred.

The political weaponization of “hoax”

In the contemporary political landscape, particularly within the US, the term “hoax” has been co-opted as a tool for disinformation and propaganda. This strategic appropriation has been most visible among certain factions of the right-wing, where it is used to discredit damaging information, undermine factual reporting, and challenge the legitimacy of institutional findings or scientific consensus. This application of “hoax” serves multiple purposes: it seeks to sow doubt, rally political bases, and divert attention from substantive issues.

the politicization of hoaxes, via fake scandals that tie up the media unwittingly in bullshit for years, by DALL-E 3

This tactic involves labeling genuine concerns, credible investigations, and verified facts as “hoaxes” to delegitimize opponents and minimize the impact of damaging revelations. It is a form of gaslighting on a mass scale, where the goal is not just to deny wrongdoing but to erode the very foundations of truth and consensus. By branding something as a “hoax,” these actors attempt to preemptively dismiss any criticism or negative information, regardless of its veracity.

Case Studies: The “Hoax” label in action

High-profile instances of this strategy include the dismissal of climate change data, the denial of election results, and the rejection of public health advice during the COVID-19 pandemic. In each case, the term “hoax” has been employed not as a description of a specific act of deception, but as a blanket term intended to cast doubt on the legitimacy of scientifically or empirically supported conclusions. This usage represents a significant departure from the term’s origins, emphasizing denial and division over dialogue and discovery.

The impact on public discourse and trust

The strategic labeling of inconvenient truths as “hoaxes” has profound implications for public discourse and trust in institutions. It creates an environment where facts are fungible, and truth is contingent on political allegiance rather than empirical evidence. This erosion of shared reality undermines democratic processes, hampers effective governance, and polarizes society.

Moreover, the frequent use of “hoax” in political discourse dilutes the term’s meaning and impact, making it more difficult to identify and respond to genuine instances of deception. When everything can be dismissed as a hoax, the capacity for critical engagement and informed decision-making is significantly compromised.

Moving Forward: Navigating a “post-hoax” landscape

The challenge moving forward is to reclaim the narrative space that has been distorted by the misuse of “hoax” and similar terms. This involves promoting media literacy, encouraging critical thinking, and fostering a public culture that values truth and accountability over partisanship. It also requires the media, educators, and public figures to be vigilant in their language, carefully distinguishing between genuine skepticism and disingenuous dismissal.

The evolution of “hoax” from a term denoting playful deception to a tool for political disinformation reflects broader shifts in how information, truth, and reality are contested in the public sphere. Understanding this transformation is crucial for navigating the complexities of the modern informational landscape and for fostering a more informed, resilient, and cohesive society.


republican vs. democrat cage match boxing ring

Buckle up, we’re in for a wild ride. Many of the serious scholars of political history and authoritarian regimes are sounding the alarm bells that, although it is a very very good thing that we got the Trump crime family out of the Oval Office, it is still a very very bad thing for America to have so rapidly tilted towards authoritarianism. How did we get here?! How has hyperpartisanship escalated to the point of an attempted coup by 126 sitting Republican House Representatives? How has political polarization gotten this bad?

These are some of the resources that have helped me continue grappling with that question, and with the rapidly shifting landscape of information warfare. How can we understand this era of polarization, this age of tribalism? This outline is a work in progress, and I’m planning to keep adding to this list as the tape keeps rolling.

Right-Wing Authoritarianism

Authoritarianism is both a personality type and a form of government — it operates at both the interpersonal and the societal level. The words authoritarian and fascist are often used interchangeably, but fascism is a more specific type of authoritarianism, and far more historically recent.

America has had flavors of authoritarianism since its founding, and when fascism came along the right-wing authoritarians ate it up — and deeply wanted the United States to be a part of it. Only after they became social pariahs did they change position to support American involvement in World War II — and some persisted in their opposition even after the attack on Pearl Harbor.

Scholars of authoritarianism

  • Hannah Arendt — The Origins of Totalitarianism
  • Bob Altemeyer — The Authoritarians
  • Derrida — the logic of the unconscious; performativity in the act of lying
  • ketman — Ketman is the psychological concept of concealing one’s true aims, akin to doublethink in Orwell’s 1984. It served as a central theme of Polish dissident Czesław Miłosz’s book The Captive Mind, about intellectual life under totalitarianism during the Communist post-WWII occupation.
  • Erich Fromm — coined the term “malignant narcissism” to describe the psychological character of the Nazis. He also wrote extensively about the mindset of the authoritarian follower in his seminal work, Escape from Freedom.
  • Eric Hoffer — his book The True Believer explores the mind of the authoritarian follower, and the appeal of losing oneself in a totalist movement
  • Fascism — elevation of the id as the source of truth; enthusiasm for political violence
  • Tyrants and dictators
  • John Dean — 3 types of authoritarian personality:
    • social dominators
    • authoritarian followers
    • double highs — social dominators who can “switch” to become followers in certain circumstances
  • Loyalty; hero worship
    • Freud — deeply distrustful of hero worship, and worried that it indulged people’s need for vertical authority. He found the archetype of the authoritarian primal father very troubling.
  • Ayn Rand
    • The Fountainhead (1943)
    • Atlas Shrugged (1957)
    • Objectivism ideology
  • Greatness Thinking; heroic individualism
  • Nietzsche — will to power; the Übermensch
  • Richard Hofstadter — The Paranoid Style
  • George Lakoff — moral framing; strict father morality
  • Neil Postman — Amusing Ourselves to Death
  • Anti-Intellectualism
    • Can be disguised as hyper-rationalism (Communism)
  • More authoritarianism books

On October 27, 2022, the world’s richest man bought the de facto global town square. Elon Musk‘s purchase of Twitter had been brewing since April when the South African-born tech magnate first offered (or threatened?) to take over the struggling social network to the tune of $44 billion.

He later tried to renege on the deal, or at least made a big public show of trying to back out of it during the summer of 2022, claiming that Twitter had falsely represented the percentage of bot accounts on the platform. Company executives filed suit to force Musk to complete the purchase at the share price in his original offer — despite significant stock price losses due largely to Musk’s own disparagement of the site.

Forced to go through with the deal despite admittedly “overpaying for Twitter right now,” Musk and his set of investor backers took the company private and began an uncertain new era for what had heretofore been arguably the closest thing to a public town square in all of history, with the Tesla and SpaceX entrepreneur at the helm. A self-proclaimed “free speech absolutist,” the billionaire immediately began pronouncing ideas wildly unpopular with the platform’s power user base of journalists, academics, and public professionals of all stripes.

Who actually owns Twitter now?

Amidst the chaos of Musk’s first weeks of ownership, many inside and outside of Twitter have speculated on the potential ulterior motives behind the tech oligarch’s purchase. Theories range from intentional sabotage to gross incompetence — or potentially some mixture of the two. There is widespread dismay at the idea that a billionaire can simply step in and upend what had become a fledgling tool of democratic influence, and a bulwark against the march of right-wing authoritarianism around the world.

But who actually owns Twitter, outside of Musk himself? Perhaps the laundry list of investors can, now or in the future, shed some light on the probable strategies behind the massive shakeup at one of the world’s most popular tools for the media-industrial class. Given the company’s privatization, it is now far less transparent about its financials — and it will be difficult to know at any given moment whether particular investors have come in or out, or increased or decreased their holdings via their relationships with Musk or intermediaries.

As it stands though, this is the list of Twitter backers I have so far been able to find. Do you know of others I’ve missed, or changes to relative holdings? Please do give me a shout over — where else — on Twitter (while it stands — else on Mastodon) and let me know!

Twitter owners list

  • The number one owner is Elon Musk himself, who already held 9.6% of the company’s shares before taking it over. During the sale he put in $27 billion in cash from his own fortune, much of it raised by selling shares of his Tesla stock
  • Original co-founder and CEO Jack Dorsey owned 2.4% in shares and kept them, for about a $1B stake
  • Larry Ellison of Oracle — $1B
  • Qatar Holding, part of the Qatar sovereign wealth fund (Qatar Investment Authority)
  • Prince Alwaleed bin Talal of Saudi Arabia — transferred 35 million shares to Musk (about $1.9B according to Dave Troy)
  • Binance cryptocurrency exchange — $500M
  • $13B in bank loans from:
    • Morgan Stanley — $3.5B
    • Bank of America
    • Barclays
    • Japanese banks
      • Mitsubishi UFJ Financial Group
      • Mizuho
    • French banks
      • Societe Generale
      • BNP Paribas