
Election denialism, the refusal to accept credible election outcomes, has significantly impacted U.S. history, especially in recent years. This phenomenon is not entirely new; election denial has roots that stretch back through various periods of American history. However, its prevalence and intensity have surged in the contemporary digital and political landscape, influencing public trust, political discourse, and the very fabric of democracy.

Historical context

Disputes over election outcomes are as old as the U.S. electoral system itself. For instance, the fiercely contested 1800 election between Thomas Jefferson and John Adams resulted in a constitutional amendment (the 12th Amendment) to prevent similar confusion in the future. The 1876 election between Rutherford B. Hayes and Samuel J. Tilden was resolved through the Compromise of 1877, which effectively ended Reconstruction and had profound effects on the Southern United States.

Yet these instances, while contentious, were resolved within the framework of existing legal and political mechanisms, without denying the legitimacy of the electoral process itself. Over time, claims of election fraud would come to be levied against the electoral and political system itself — with dangerous implications for the peaceful transfer of power upon which democracy rests.

Voting box in an election, by Midjourney

The 21st century and digital influence

Fast forward to the 21st century, and election denialism has taken on new dimensions, fueled by the rapid dissemination of disinformation (and misinformation) through digital media and a polarized political climate. The 2000 Presidential election, with its razor-thin margins and weeks of legal battles over Florida’s vote count, tested the country’s faith in the electoral process.

Although the Supreme Court’s decision in Bush v. Gore was deeply controversial, Al Gore’s concession helped to maintain the American tradition of peaceful transitions of power.

The 2020 Election: A flashpoint

The 2020 election, marked by the COVID-19 pandemic and an unprecedented number of mail-in ballots, became a flashpoint for election denialism. Claims of widespread voter fraud and electoral malfeasance were propagated at the highest levels of government despite a lack of evidence, as confirmed by multiple recounts, audits, and legal proceedings across several states.

President Trump’s refusal to concede and the storming of the U.S. Capitol on January 6, 2021, marked a watershed moment in U.S. history: election denialism moved from the fringes to the center of political discourse, challenging the norms of democratic transition. Widely referred to as The Big Lie, the baseless claims of election fraud that persist on the right wing to this day are regarded by justice officials, legal analysts, and a host of concerned citizens as themselves a form of election fraud, part of ongoing attempts to overthrow democracy in the United States.

Implications, public trust, and voter suppression

The implications of this recent surge in election denialism are far-reaching. It has eroded public trust in the electoral system, with polls indicating a significant portion of the American populace doubting the legitimacy of election results. This skepticism is not limited to the national level but has trickled down to local elections, with election officials facing threats and harassment. The spread of misinformation, propaganda, and conspiracy theories about electoral processes and outcomes has become a tool for political mobilization, often exacerbating divisions within American society.

Moreover, election denialism has prompted legislative responses at the state level, with numerous bills introduced to restrict voting access in the name of election security. These measures have sparked debates about voter suppression and the balance between securing elections and ensuring broad electoral participation. The challenge lies in addressing legitimate concerns about election integrity while avoiding the disenfranchisement of eligible voters.

Calls for reform and strengthening democracy

In response to these challenges, there have been calls for reforms to strengthen the resilience of the U.S. electoral system. These include measures to enhance the security and transparency of the voting process, improve the accuracy of voter rolls, and counter misinformation about elections. There’s also a growing emphasis on civic education to foster a more informed electorate capable of critically evaluating electoral information.

The rise of election denialism in recent years highlights the fragility of democratic norms and the crucial role of trust in the electoral process. While disputes over election outcomes are not new, the scale and impact of recent episodes pose unique challenges to American democracy. Addressing these challenges requires a multifaceted approach, including legal, educational, and technological interventions, to reinforce the foundations of democratic governance and ensure that the will of the people is accurately and fairly represented.

Read more

A “filter bubble” is a concept in the realm of digital publishing, media, and web technology, particularly significant in understanding the dynamics of disinformation and political polarization. At its core, a filter bubble is a state of intellectual isolation that can occur when algorithms selectively guess what information a user would like to see based on past behavior and preferences. This concept is crucial in the digital age, where much of our information comes from the internet and online sources.

Origins and mechanics

The term was popularized by internet activist Eli Pariser around 2011. It describes how personalization algorithms in search engines and social media platforms can isolate users in cultural or ideological bubbles. These algorithms, driven by AI and machine learning, curate content – be it news, search results, or social media posts – based on individual user preferences, search histories, and previous interactions.

filter bubble, by DALL-E 3

The intended purpose is to enhance user experience by providing relevant and tailored content. However, this leads to a situation where users are less likely to encounter information that challenges or broadens their worldview.
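To make the mechanics concrete, here is a toy sketch in Python of a personalization loop (the articles, topic weights, and click behavior are all invented for illustration; real platforms are vastly more complex, but the feedback dynamic is the same):

    # Toy personalization loop: the feed ranks articles by similarity to the
    # user's click history, so each click narrows what surfaces next.
    from collections import Counter

    ARTICLES = {
        "tax cuts debate":       {"politics": 0.9, "economy": 0.5},
        "border policy op-ed":   {"politics": 1.0},
        "new telescope images":  {"science": 1.0},
        "vaccine study results": {"science": 0.8, "health": 0.6},
        "local team wins":       {"sports": 1.0},
    }

    def similarity(profile, topics):
        # Dot product of the user's accumulated topic weights with an article's topics.
        return sum(profile.get(topic, 0.0) * weight for topic, weight in topics.items())

    profile, seen = Counter(), set()
    for _ in range(4):
        feed = sorted((a for a in ARTICLES if a not in seen),
                      key=lambda a: similarity(profile, ARTICLES[a]), reverse=True)
        clicked = feed[0]                  # assume the user clicks the top item
        seen.add(clicked)
        for topic, weight in ARTICLES[clicked].items():
            profile[topic] += weight       # the profile drifts toward past clicks
        print(clicked, "->", dict(profile))

After the first click, the politics cluster dominates the ranking until it is exhausted, and then the loop deepens whatever topic it lands on next; nothing in the loop ever pushes back against the pattern.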

Filter bubbles in the context of disinformation

In the sphere of media and information, filter bubbles can exacerbate the spread of disinformation and propaganda. When users are consistently exposed to a certain type of content, especially if it’s sensational or aligns with their pre-existing beliefs, they become more susceptible to misinformation. This effect is compounded on platforms where sensational content is more likely to be shared and become viral, often irrespective of its accuracy.

Disinformation campaigns, aware of these dynamics, often exploit filter bubbles to spread misleading narratives. By tailoring content to specific groups, they can effectively reinforce existing beliefs or sow discord, making it a significant challenge in the fight against fake news and propaganda.

Impact on political beliefs and US politics

The role of filter bubbles in shaping political beliefs is profound, particularly in the polarized landscape of recent US politics. These bubbles create echo chambers where one-sided political views are amplified without exposure to opposing viewpoints. This can intensify partisanship, as individuals within these bubbles are more likely to develop extreme views and less likely to understand or empathize with the other side.

Recent years in the US have seen a stark divide in political beliefs, influenced heavily by the media sources individuals consume. For instance, the right and left wings of the political spectrum often inhabit separate media ecosystems, with their own preferred news sources and social media platforms. This separation contributes to a lack of shared reality, where even basic facts can be subject to dispute, complicating political discourse and decision-making.

Filter bubbles in elections and political campaigns

Political campaigns have increasingly utilized data analytics and targeted advertising to reach potential voters within these filter bubbles. While this can be an effective campaign strategy, it also means that voters receive highly personalized messages that can reinforce their existing beliefs and psychological biases, rather than presenting a diverse range of perspectives.

Breaking out of filter bubbles

Addressing the challenges posed by filter bubbles involves both individual and systemic actions. On the individual level, it requires awareness and a conscious effort to seek out diverse sources of information. On a systemic level, it calls for responsibility from tech companies to modify their algorithms to expose users to a broader range of content and viewpoints.

Filter bubbles play a significant role in the dissemination and reception of information in today’s digital age. Their impact on political beliefs and the democratic process — indeed, on democracy itself — in the United States cannot be overstated. Understanding and mitigating the effects of filter bubbles is crucial in fostering a well-informed public, capable of critical thinking and engaging in healthy democratic discourse.

Read more

The concept of a “honeypot” in the realms of cybersecurity and information warfare is a fascinating and complex one, straddling the line between deception and defense. At its core, a honeypot is a security mechanism designed to mimic systems, data, or resources to attract and detect unauthorized users or attackers, essentially acting as digital bait. By engaging attackers, honeypots serve multiple purposes: they can distract adversaries from more valuable targets, gather intelligence on attack methods, and help in enhancing security measures.

Origins and usage

The use of honeypots dates back to the early days of computer networks, evolving significantly with the internet’s expansion. Initially, they were simple traps set to detect anyone probing a network. However, as cyber threats grew more sophisticated, so did honeypots, transforming into complex systems designed to emulate entire networks, applications, or databases to lure in cybercriminals.

A honeypot illustration with a circuit board beset by a bee, by Midjourney

Honeypots are used by a variety of entities, including corporate IT departments, cybersecurity firms, government agencies, and even individuals passionate about cybersecurity. Their versatility means they can be deployed in almost any context where digital security is a concern, from protecting corporate data to safeguarding national security.

Types and purposes

There are several types of honeypots, ranging from low-interaction honeypots, which simulate only the services and applications attackers might find interesting, to high-interaction honeypots, which are complex and fully-functional systems designed to engage attackers more deeply. The type chosen depends on the specific goals of the deployment, whether it’s to gather intelligence, study attack patterns, or improve defensive strategies.
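For a flavor of the low-interaction end of that spectrum, here is a minimal Python sketch (a toy for illustration, not a production honeypot): it listens on a TCP port, serves a fake SSH banner as bait, and logs whatever the connecting client sends. The port number, banner string, and logging approach are assumptions made for the example.

    # Minimal low-interaction honeypot sketch: listen on a port, present a
    # fake service banner, log the client's first bytes, then disconnect.
    import socket
    from datetime import datetime, timezone

    HOST, PORT = "0.0.0.0", 2222           # illustrative, non-privileged port
    BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"    # pretend to be an SSH service

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(BANNER)        # engage the prober with the bait
                conn.settimeout(5.0)
                try:
                    data = conn.recv(1024)  # capture the attacker's first move
                except socket.timeout:
                    data = b""
                # A real deployment would write to tamper-resistant storage.
                print(datetime.now(timezone.utc).isoformat(), addr, repr(data))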

In the context of information warfare, honeypots serve as a tool for deception and intelligence gathering. They can be used to mislead adversaries about the capabilities or intentions of a state or organization, capture malware samples, and even identify vulnerabilities in the attacker’s strategies. By analyzing the interactions attackers have with these traps, defenders can gain insights into their tactics, techniques, and procedures (TTPs), enabling them to better anticipate and mitigate future threats.

Historical effects

Historically, honeypots have had significant impacts on both cybersecurity and information warfare. They’ve led to the discovery of new malware strains, helped dismantle botnets, and provided critical intelligence about state-sponsored cyber operations. For example, honeypots have been instrumental in tracking the activities of sophisticated hacking groups, leading to a deeper understanding of their targets and methods, which, in turn, has informed national security strategies and cybersecurity policies.

One notable example is the GhostNet investigation, which uncovered a significant cyber espionage network targeting diplomatic and governmental institutions worldwide. Honeypots played a key role in identifying the malware and command-and-control servers used in these attacks, highlighting the effectiveness of these tools in uncovering covert operations.

Honeypot hackers and cybercriminals

Ethical and practical considerations

While the benefits of honeypots are clear, their deployment is not without ethical and practical considerations. There’s a fine line between deception for defense and entrapment, raising questions about the legality and morality of certain honeypot operations, especially in international contexts where laws and norms may vary widely.

Moreover, the effectiveness of a honeypot depends on its believability and the skill with which it’s deployed and monitored. Poorly configured honeypots might not only fail to attract attackers but could also become liabilities, offering real vulnerabilities to be exploited.

Cyber attackers and defenders

Honeypots are a critical component of the cybersecurity and information warfare landscapes, providing valuable insights into attacker behaviors and tactics. They reflect the ongoing cat-and-mouse game between cyber attackers and defenders, evolving in response to the increasing sophistication of threats. As digital technologies continue to permeate all aspects of life, the strategic deployment of honeypots will remain a vital tactic in the arsenal of those looking to protect digital assets and information. Their historical impacts demonstrate their value, and ongoing advancements in technology promise even greater potential in understanding and combating cyber threats.

By serving as a mirror to the tactics and techniques of adversaries, honeypots help illuminate the shadowy world of cyber warfare, making them indispensable tools for anyone committed to safeguarding information in an increasingly interconnected world.

Read more

The term “hoax” is derived from “hocus,” a term that has been in use since the late 18th century. It originally referred to a trick or deception, often of a playful or harmless nature. The essence of a hoax was its capacity to deceive, typically for entertainment or to prove a point without malicious intent. Over time, the scope and implications of a hoax have broadened significantly. What was once a term denoting jest or trickery has morphed into a label for deliberate falsehoods intended to mislead or manipulate public perception.

From playful deception to malicious misinformation

As society entered the age of mass communication, the potential reach and impact of hoaxes expanded dramatically. The advent of newspapers, radio, television, and eventually the internet and social media platforms transformed the way information (and misinformation) circulated. Hoaxes began to be used not just for amusement but for more nefarious purposes, including political manipulation, financial fraud, and social engineering. The line between a harmless prank and damaging disinformation and misinformation became increasingly blurred.

The political weaponization of “hoax”

In the contemporary political landscape, particularly within the US, the term “hoax” has been co-opted as a tool for disinformation and propaganda. This strategic appropriation has been most visible among certain factions of the right-wing, where it is used to discredit damaging information, undermine factual reporting, and challenge the legitimacy of institutional findings or scientific consensus. This application of “hoax” serves multiple purposes: it seeks to sow doubt, rally political bases, and divert attention from substantive issues.

the politicization of hoaxes, via fake scandals that tie up the media unwittingly in bullshit for years, by DALL-E 3

This tactic involves labeling genuine concerns, credible investigations, and verified facts as “hoaxes” to delegitimize opponents and minimize the impact of damaging revelations. It is a form of gaslighting on a mass scale, where the goal is not just to deny wrongdoing but to erode the very foundations of truth and consensus. By branding something as a “hoax,” these actors attempt to preemptively dismiss any criticism or negative information, regardless of its veracity.

Case Studies: The “Hoax” label in action

High-profile instances of this strategy include the dismissal of climate change data, the denial of election results, and the rejection of public health advice during the COVID-19 pandemic. In each case, the term “hoax” has been employed not as a description of a specific act of deception, but as a blanket term intended to cast doubt on the legitimacy of scientifically or empirically supported conclusions. This usage represents a significant departure from the term’s origins, emphasizing denial and division over dialogue and discovery.

The impact on public discourse and trust

The strategic labeling of inconvenient truths as “hoaxes” has profound implications for public discourse and trust in institutions. It creates an environment where facts are fungible, and truth is contingent on political allegiance rather than empirical evidence. This erosion of shared reality undermines democratic processes, hampers effective governance, and polarizes society.

Moreover, the frequent use of “hoax” in political discourse dilutes the term’s meaning and impact, making it more difficult to identify and respond to genuine instances of deception. When everything can be dismissed as a hoax, the capacity for critical engagement and informed decision-making is significantly compromised.

Moving Forward: Navigating a “post-hoax” landscape

The challenge moving forward is to reclaim the narrative space that has been distorted by the misuse of “hoax” and similar terms. This involves promoting media literacy, encouraging critical thinking, and fostering a public culture that values truth and accountability over partisanship. It also requires the media, educators, and public figures to be vigilant in their language, carefully distinguishing between genuine skepticism and disingenuous dismissal.

The evolution of “hoax” from a term denoting playful deception to a tool for political disinformation reflects broader shifts in how information, truth, and reality are contested in the public sphere. Understanding this transformation is crucial for navigating the complexities of the modern informational landscape and for fostering a more informed, resilient, and cohesive society.

Read more

republican vs. democrat cage match boxing ring

Buckle up, we’re in for a wild ride. Many of the serious scholars of political history and authoritarian regimes are sounding the alarm bells that, although it is a very very good thing that we got the Trump crime family out of the Oval Office, it is still a very very bad thing for America to have so rapidly tilted towards authoritarianism. How did we get here?! How has hyper partisanship escalated to the point of an attempted coup by 126 sitting Republican House Representatives? How has political polarization gotten this bad?

These are some of the resources that have helped me continue grappling with that question, and with the rapidly shifting landscape of information warfare. How can we understand this era of polarization, this age of tribalism? This outline is a work in progress, and I’m planning to keep adding to this list as the tape keeps rolling.

Right-Wing Authoritarianism

Authoritarianism is both a personality type and a form of government — it operates at both the interpersonal and the societal level. The words authoritarian and fascist are often used interchangeably, but fascism is a more specific type of authoritarianism, and far more historically recent.

America has had flavors of authoritarianism since its founding, and when fascism came along the right-wing authoritarians ate it up — and deeply wanted the United States to be a part of it. Only after they became social pariahs did they change position to support American involvement in World War II — and some persisted even after the attack on Pearl Harbor.

With Project 2025, Trump now openly threatens fascism on America — and sadly, some are eager for it. The psychology behind both authoritarian leaders and followers is fascinating, overlooked, and misunderstood.

Scholars of authoritarianism

  • Hannah Arendt — The Origins of Totalitarianism
  • Bob Altemeyer — The Authoritarians
  • Derrida — the logic of the unconscious; performativity in the act of lying
  • ketman — Ketman is the psychological concept of concealing one’s true aims, akin to doublethink in Orwell’s 1984; it is a central theme of Polish dissident Czesław Miłosz’s book The Captive Mind, about intellectual life under totalitarianism during the Communist post-WWII occupation.
  • Erich Fromm — coined the term “malignant narcissism” to describe the psychological character of the Nazis. He also wrote extensively about the mindset of the authoritarian follower in his seminal work, Escape from Freedom.
  • Eric Hoffer — his book The True Believer explores the mind of the authoritarian follower, and the appeal of losing oneself in a totalist movement
  • Fascism — elevation of the id as the source of truth; enthusiasm for political violence
  • Tyrants and dictators
  • John Dean — 3 types of authoritarian personality:
    • social dominators
    • authoritarian followers
    • double highs — social dominators who can “switch” to become followers in certain circumstances
  • Loyalty; hero worship
    • Freud = deeply distrustful of hero worship and worried that it indulged people’s needs for vertical authority. He found the archetype of the authoritarian primal father very troubling.
  • Ayn Rand
    • The Fountainhead (1943)
    • Atlas Shrugged (1957)
    • Objectivism ideology
  • Greatness Thinking; heroic individualism
  • Nietzsche — will to power; the Übermensch
  • Richard Hofstadter — The Paranoid Style
  • George Lakoff — moral framing; strict father morality
  • Neil Postman — Amusing Ourselves to Death
  • Anti-Intellectualism
    • Can be disguised as hyper-rationalism (Communism)
  • More authoritarianism books
Read more

Surveillance Capitalism Dictionary

They were inspired by hippies, but Orwell would fear them. The giants of Silicon Valley started out trying to outsmart The Man, and in the process became him. And so, surveillance capitalism was born. Such is the story of corruption since time immemorial.

This surveillance capitalism dictionary is a work in progress! Check back for further updates!

  • algorithm: A set of instructions that programmers give to computers to run software and make decisions.
  • artificial intelligence (AI)
  • Bayes' Theorem
  • bioinformatics: A technical and computational subfield of genetics, concerned with the information and data encoded by our genes and genetic codes.
  • child machine: Alan Turing's concept for developing an "adult brain" by creating a child brain and giving it an education.
  • CHINOOK: Checkers program that in 1994 became the first AI to win an official world championship in a game of skill.
  • click-wrap
  • collateral behavioral data
  • common carrier: A hybrid public interest served by a corporate promise to meet a high bar of neutrality; a historical precedent set by the early Bell system monopoly, and an issue of public-private strife today with the advent of the internet.
  • contracts of adhesion
  • cookies: Small packets of data deposited by the vast majority of websites you visit, stored in the browser as a way to extract intelligence about their users and visitors.
  • corpus: In Natural Language Processing, a compendium of words used to "train" the AI to understand patterns in new texts.
  • decision trees
  • Deep Blue: Chess program that beat world chess champion Garry Kasparov in 1997.
  • deep learning
  • evolutionary algorithms
  • Facebook
  • facial recognition
  • Flash Crash of 2010: Sudden drop of over $1 trillion in the E-Mini S&P 500 futures contract market via a runaway feedback loop within a set of algorithmic traders.
  • FLOPS: Floating-point operations per second.
  • Free Basics: Facebook's plan, via Internet.org, to provide limited free internet services in rural India (and elsewhere in the developing world). Controversy centers on the "limited" nature of the offering, which gives Facebook the power to select or reject individual websites and resources for inclusion.
  • genetic algorithms
  • GOFAI: "Good Old-Fashioned Artificial Intelligence."
  • HLMI: Human-level machine intelligence, defined as being able to carry out most human professions at least as well as a typical human.
  • interoperability
  • Kolmogorov complexity
  • language translation
  • linear regression
  • machine learning
  • Markov chains
  • monopoly
  • NAFTA
  • natural language processing (NLP): A technology for processing and analyzing words.
  • neofeudalism
  • net neutrality: Legal and regulatory concept maintaining that Internet Service Providers must act as common carriers, allowing businesses and citizens to interoperate with the physical infrastructure of the communications network equally, without being subject to biased or exclusionary activities on the part of the network.
  • neural networks
  • netizens
  • "Online Eraser" law (CA)
  • patrimonial capitalism
  • Pegasus
  • phonemes
  • predatory lending
  • predictive analytics
  • privacy
  • private eminent domain
  • probability
  • prosody
  • qualia
  • r > g: Piketty's insight that the rate of return on capital tends to outpace economic growth, concentrating wealth over time.
  • randomness
  • random walk
  • recommender systems
  • recursion
  • recursive learning
  • right to be forgotten: When it became EU law in 2014, this groundbreaking legislation gave citizens the power to demand that search engines remove pointers to content about them. It was the growing data rights movement in Europe that later led to GDPR.
  • SciKit
  • simulation
  • smart speakers
  • speech recognition
  • spyware
  • statistical modeling
  • strong vs. weak AI: "Weak AI" refers to algorithms designed to master a specific narrow domain of knowledge or problem-solving, vs. achieving a more general intelligence (strong AI).
  • supermajority
  • supervised learning
  • surplus data
  • TensorFlow
  • Tianhe-2: The world's fastest supercomputer, developed in China, until it was surpassed in June 2016 by another Chinese system, the Sunway TaihuLight.
  • Terms of Service
  • Twitter
  • unsupervised learning
  • Watson: IBM AI that defeated the two all-time greatest human Jeopardy! champions in 2011.
  • WhatsApp
  • WTO
  • Zuccotti Park
Read more

speak, sistah!

see also: Shoshana Zuboff (who wrote the seminal work on surveillance capitalism), Don Norman, Dystopia vs. Utopia Book List: A Fight to the Finish, surveillance capitalism dictionary

Some takeaways:

  • surveillance won’t be obvious and overt like in Orwell’s classic totalitarian novel 1984 — it’ll be covert and subtle (“more like a spider’s web”)
  • social networks use persuasion architecture — the same cloying design aesthetic that puts gum at the eye level of children in the grocery aisle

Example:

AI modeling of potential Las Vegas ticket buyers

The machine learning algorithms can classify people into two buckets, “likely to buy tickets to Vegas” and “unlikely to,” based on exposure to lots and lots of data patterns. Problem being, it’s a black box, and no one — not even the computer scientists — knows how it works or what it’s doing exactly.

So the AI may have discovered that bipolar individuals just about to go into mania are more susceptible to buying tickets to Vegas — and that is the segment of the population they are targeting: a vulnerable set of people prone to overspending and gambling addictions. The ethical implications of unleashing this on the world — and routinely using and optimizing it relentlessly — are staggering.
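Here is a minimal sketch of that kind of black-box propensity model, built with scikit-learn on purely synthetic data (the behavioral features, labels, and threshold are invented; no real platform’s data or feature set is implied):

    # Black-box propensity model on synthetic data: easy to train and score,
    # hard to explain, which is exactly the problem described above.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical behavioral signals: late-night activity, gambling-content
    # engagement, spending volatility.
    X = rng.normal(size=(n, 3))
    # Synthetic "bought tickets to Vegas" labels: a noisy mix of those signals.
    y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.5 * X[:, 2]
         + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    # The model happily sorts people into the two buckets...
    print(model.predict_proba(X[:5])[:, 1].round(2))
    # ...but nothing in the fitted object says *why* any one person scored high,
    # or whether the signals it keyed on amount to targeting the vulnerable.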

Profiting from extremism

“You’re never hardcore enough for YouTube” — YouTube gives you content recommendations that are increasingly polarized and polarizing, because it turns out that preying on your reptilian brain makes you keep clicking around in the YouTube hamster wheel.
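Here is a toy simulation of that dynamic, with every number invented for illustration: if engagement peaks slightly past wherever the viewer currently is, a recommender that maximizes expected watch time will keep nudging them toward the extreme end of the catalog.

    # "Never hardcore enough": an engagement-maximizing recommender drifts
    # the viewer toward ever more extreme content. Purely illustrative numbers.
    def expected_watch_time(user_pos, item_extremeness):
        # Toy assumption: engagement peaks slightly beyond the viewer's position.
        return 1.0 - abs(user_pos + 0.15 - item_extremeness)

    catalog = [i / 20 for i in range(21)]    # items from mild (0.0) to extreme (1.0)
    user_pos = 0.2                           # viewer starts near the mainstream

    for step in range(10):
        pick = max(catalog, key=lambda e: expected_watch_time(user_pos, e))
        user_pos = 0.7 * user_pos + 0.3 * pick   # watching shifts the viewer's taste
        print(f"step {step}: recommended {pick:.2f}, viewer now at {user_pos:.2f}")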

The amorality of AI — “algorithms don’t care if they’re selling shoes, or politics.” Our social, political, and cultural flows are being organized by these persuasion architectures — organized for profit; not for the collective good, not for public interests, and no longer subject to our political will. These powerful surveillance capitalism tools run mostly unchecked, with little oversight and few people minding the ethical store of what is essentially a cadre of Silicon Valley billionaires.

Intent doesn’t matter — good intentions aren’t enough; it’s the structure and business models that matter. Facebook isn’t a half-trillion-dollar con: its value lies in its highly effective persuasion power, which is deeply troubling in a supposedly democratic society. Mark Zuckerberg may even ultimately mean well (…debatable), but that doesn’t excuse the railroading over numerous obviously negative externalities resulting from the unchecked power of Facebook, not only in the U.S. but in countries around the world, including highly volatile regions.

Extremism benefits demagogues — Oppressive regimes both come to power by and benefit from political extremism; from whipping up citizens into a frenzy, often against each other as much as against perceived external or internal enemies. Our data and attention are now for sale to the highest bidding authoritarians and demagogues around the world — enabling them to use AI against us in election after election and PR campaign after PR campaign. We gave foreign dictators even greater powers to influence and persuade us in ways that benefit them at the expense of our own self-interest.

Read more

When usability pioneers have All the Feels about the nature of our creeping technological dystopia, how we got here, and what we might need to do to right the ship, it’s wise to pay attention. Don Norman’s preaching resonated with my choir, and they’ve asked me to sing a summary song of our people in bulleted list format:

  • What seemed like a virtuous thing at the time — building the internet with an ethos of trust and openness — has led to a travesty via lack of security, because no one took bad actors into account.
  • Google, Facebook, et al didn’t have the advertising business model in mind a priori, but sort of stumbled into it and got carried away giving advertisers what they wanted — more information about users — without really taking into consideration the boundary violations of appropriating people’s information. (see Shoshana Zuboff’s definitive new book on Surveillance Capitalism for a lot more on this topic)
  • Tech companies have mined the psychological sciences for techniques that — especially at scale — border on mass manipulation of fundamental human drives to be informed and to belong. Beyond the creepy Orwellian slant of information appropriation and emotional manipulation, the loss of productivity and mental focus from years of constant interruptions takes a toll on society at large.
  • We sign an interminable series of EULAs, ToS’s and other lengthy legalese-ridden agreements just to access the now basic utilities that enable our lives. Experts refer to these as “contracts of adhesion” or “click-wrap,” as a way of connoting the “obvious lack of meaningful consent.” (Zuboff)
  • The “bubble effect” — the internet allows one to surround oneself completely with like-minded opinions and avoid ever being exposed to alternative points of view. This has existential implications for being able to inhabit a shared reality, as well as a deleterious effect on public discourse, civility, and the democratic process itself.
  • The extreme commercialization of almost all of our information sources is problematic, especially in the age of the “Milton Friedman-ification” of the economic world and the skewing of values away from communities and individuals, towards a myopic view of shareholder value and all the attendant perverse incentives that accompany this philosophical business shift over the past 50 years. He notes that the original public-spiritedness of new communication technologies has historically been co-opted by corporate lobbyists via regulatory capture — a subject Tim Wu explores in-depth in his excellent 2011 book, “The Master Switch: The Rise and Fall of Information Empires.”

Is it all bleak, Don?! His answer is clear: “yes, maybe, no.” He demurs on positing a definitive answer to all of these issues, but he doesn’t really mince words about a “hunch” that it may in fact involve burning it all down and starting over again.

Pointing to evolution, Norman notes that we cannot eke radical innovation out of incremental changes — and that when radical change does happen it is often imposed unexpectedly from the outside in the form of catastrophic events. Perhaps if we can’t manage to Marie Kondo our way to a more joyful internet, we’ll have to pray for Armageddon soon…?! 😱

https://www.youtube.com/watch?v=uCEeAn6_QJo
Read more

Anger is the defining emotion of the internet.

It’s designed to whip you up into a frenzy in order to foment cheap pageviews. Its interest is in turning you into a histrionic attention whore, so that your platform of choice can suck in as much clandestinely stolen user data as possible. Turns out, conflict gets attention.

Anger is also notably the “loophole” emotion — the invisible one men get to have, while claiming for generations upon generations that “women are too emotional to be entrusted” with leadership or anything meaningful, really. Meanwhile male anger and aggression have killed hundreds of millions and wreaked destruction upon the earth many times over, as fragile masculinity is repeatedly and predictably triggered over any little old thing.

A neat trick.

A story.

A lie.

Read more

Peter Thiel at Isengard looking into the Palantir

Peter Thiel and Palmer Luckey are a particularly toxic breed of billionaire welfare queen: men who outwardly revile government at every chance they get, despite having both sucked at its teat to make their fortunes, and who currently make a luxe living on taxpayer largesse.

Thiel’s PayPal and Facebook-induced riches rode the coattails of the DARPA-created internet, while Luckey had his exit to internet giant Facebook. Now Thiel helms creepy-AF data mining company Palantir, whose tentacles are wrapped all the way around the intelligence community’s various agencies, while Luckey’s Thiel-funded startup Anduril is bidding for lucrative defense contracts to build Trump’s border wall. It’s the stuff of full-on right-wing neocon wet dreams for both men.

They follow in a long line of right-wing denialism in which Austrian School econ acolytes (and trickle down aficionados) have claimed to be self-made men while reaping untold rewards from lucrative military contracts and other sources of government funding or R&D windfall. Barry Goldwater once famously invoked the mythology of the independent cowboy to describe his successful rise (as would union man Ronald Reagan years later) — when in reality he inherited the family department store business that itself became viable only due to the public money pouring in to nearby military installations springing up in Arizona since as far back as the Civil War.

Even without the American government as their businesses’ largest client, the Libertarian ideal of disproportionately enjoying the fruits of public goods while viciously fighting against the taxation required to pay for them puts the lie to these men’s claims of Ayn Randian moral supremacy. The ritual flogging of so-called “Great Man Theory” animates all sorts of dangerous social projects, such as the world’s richest man purchasing the de facto town square and turning it into a right-wing plaything.

If we’re lucky, Luckey will create some sort of VR seasteading community that sucks the Silicon Valley Supremacists right in and traps them in a sort of Libertarian Matrix forever.

More on Peter Thiel and his right-wing political network:

  • Buddies with right-wing Silicon Valley venture capitalist David Sacks
  • Member of the PayPal Mafia
  • Funded JD Vance’s successful Senate campaign in Ohio
  • Funded loser Blake Masters’ Senatorial campaign in Arizona in 2022
Read more

While multiple formal investigations against the Trump family and administration continue to unfold, and Drumpf supporters weirdly deny the probable cause for concern, Putin’s troll army continues to operate out in the open on Twitter, Facebook, Medium, and other social media networks. The sheer scale of this operation started to become clear to me in the months leading up to Election 2016: having spent over a decade on social media both professionally and personally, as well as a hefty amount of time on political investigation during this presidential cycle, I could see that bots on Twitter had taken over.

Whatever your thoughts on the #RussiaGate corruption scandal may be, it should concern any citizen that an enormous group of bad actors is working together to infiltrate American social media, with a specific intent to sway politics. Media literacy is one part of the answer, but we’re going to need new tools to help us identify accounts that are only present in bad faith to political discourse: they are not who they claim to be, and their real goals are kept carefully opaque.

Cold War 2.0

We should consider our nation embroiled in a large international game of psychological warfare, or PsyOps as it is referred to in intelligence circles. The goal is to sow disinformation as widely as possible, such that it becomes very difficult to discern what separates truth from propaganda. A secondary goal is to sow dissent among the citizenry, particularly to rile up the extremist factions within America’s two dominant political parties in an attempt to pull the political sphere apart from the center. 

We didn’t really need much help in that department as it is, with deep partisan fault lines having been open as gaping wounds on the American political landscape for some decades now — so the dramatically escalated troll army operation has acted as an intense catalyst, further igniting the powder kegs stored up between conservatives and progressives in this country.

Luckily there are some ways to help blunt the opposition’s ability to distract and spread disinfo, by identifying the signatures given off by suspicious accounts. I’ve developed a few ways to evaluate whether a given account may be a participant in paid propaganda, or at least is likely to be misrepresenting who they say they are and what their agenda is.

Sometimes it’s fun to get embroiled in a heated “tweetoff,” but I’ve noticed how easy it is to feel “triggered” by something someone says online and how the opposition is effectively “hacking” that tendency to drag well-meaning people into pointless back-and-forths designed not to defend a point of view, but simply to waste an activist’s time, demoralize them, and occupy the focus — a focus that could be better spent elsewhere on Real Politics with real citizens who in some way care about their country and their lives.

Bots on Twitter have “Tells”

1) Hyper-patriotism

– Conspicuously hyper-patriotic bio (and often, name)
– Posts predominantly anti-Democrat, anti-liberal/libtard, anti-Clinton, anti-Sanders, anti-antifa etc. memes:


2) Hyper-Christianity

– Conspicuously hyper-Christian in bio and/or name:


3) Abnormally high tweet volume

Seems to tweet and/or RT constantly without breaks — evidence of, at minimum, a scheduler tool, with some accounts displaying obviously automated responses. The above account, for example, was started less than 2 years ago and has tweeted 15,000 more times than I have in over 10 years of frequent use (28K). Most normal people don’t schedule their tweets — but marketers and PR people do.


4) Posts only about politics and one other thing (usually a sport)

– Posts exclusively about politics and potentially one other primary “normie” topic, which is often a sport
– May proclaim to be staunchly not “politically correct”:


5) Hates Twitter Lists

– Bots on Twitter have a strange aversion to being added to Lists, or making Lists of their own:


6) Overuse of hashtags 

– Uses hashtags more than normal, non-marketing people usually do:


7) Pushes a one-dimensional message

– Seems ultimately too one-dimensional and predictable to reflect a real personality, and/or too vaguely similar to the formula:


8) Redundant tweets

– Most obviously of all, it retweets the same thing over and over again:


9) Rehashes a familiar set of memes

– Tweets predominantly about a predictable set of memes:

Mismatched location and time zone is another “tell” — and although you can’t get the second piece of data from the public profile, it is available from the Twitter API. If you know Python and/or feel adventurous, I’m sharing an earlier version of the above tool on GitHub (and need to get around to pushing the latest version…) — and if you know of any other “tells” please share by commenting or tweeting at me. A rough sketch of a few of these checks follows the list below. Next bits I want to work on include:

  • Examining follower & followed networks against a matchlist of usual suspect accounts
  • Looking at percentage of Cyrillic characters in use
  • Graphing tweet volume over time to identify “bot” and “cyborg” periods
  • Looking at “burst velocity” of opposition tweets as bot networks are engaged to boost messages
  • Digging deeper into the overlap between the far-right and far-left as similar memes are implanted and travel through both “sides” of the networks
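Here is that rough sketch in Python, assuming account data has already been pulled from the Twitter API (the thresholds are illustrative guesses, not calibrated values, and this is not the exact tool linked above):

    # A few bot "tells" expressed as heuristics over Twitter API user data.
    import re
    from datetime import datetime, timezone

    CYRILLIC = re.compile(r"[\u0400-\u04FF]")

    def tweets_per_day(statuses_count, created_at):
        age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
        return statuses_count / age_days

    def suspicion_flags(profile, tweets):
        flags = []
        if tweets_per_day(profile["statuses_count"], profile["created_at"]) > 72:
            flags.append("abnormally high tweet volume")        # tell #3
        if profile.get("listed_count", 0) == 0 and profile["statuses_count"] > 10_000:
            flags.append("high volume but absent from Lists")   # tell #5
        if sum(t.count("#") > 2 for t in tweets) > 0.5 * len(tweets):
            flags.append("hashtag overuse")                     # tell #6
        text = "".join(tweets)
        if text and len(CYRILLIC.findall(text)) / len(text) > 0.05:
            flags.append("unexpectedly high share of Cyrillic characters")
        return flags

    # Example with made-up numbers:
    account = {"statuses_count": 43_000,
               "created_at": datetime(2016, 1, 1, tzinfo=timezone.utc),
               "listed_count": 0}
    print(suspicion_flags(account, ["#MAGA #tcot #WakeUpAmerica"] * 50))

None of these tells is damning on its own; the point is that several weak signals stacked together separate marketing-grade automation from normal human use.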
Read more

I still see a lot of denialism on this point about the DNC email hacks from the far-left (or the alt-left, depending on your favored terminology), which is a bit devastating to see as it essentially parrots the pro-Russian ideology of the far-right (both the alt-right and the neo-libertarian flavors). Green Party candidate Jill Stein is an especially pernicious promoter of this myth that Vladimir Putin is a poor, innocent, peaceful world leader who is being bullied by NATO (when in fact, Russia has been the aggressor since its annexation of Crimea in 2014).

DNC email hacks forensic evidence

Two separate Russian-affiliated adversaries were behind the attacks, according to a post-mortem by cyber-security firm CrowdStrike when news of the intrusion first broke in early June 2016. This has since been confirmed by other independent security firms including Fidelis, Mandiant, SecureWorks, and ThreatConnect, as well as corroborated by analysis from Ars Technica and Edward Snowden.

At this point the US intelligence community is confident enough to formally accuse Russia of involvement in the hacks, and is currently investigating other breaches of voter registration databases in Arizona and Illinois, as well as in Florida, the key battleground state from the 2000 election that handed GWB an unfortunate victory. Elsewhere, there is ample evidence of Putin’s extensive disinformation campaign being waged online (including several instances I have witnessed myself), which continues a long through line of propaganda wielded as a tool by the former KGB agent.

Related:

  • A timeline of recent Russian aggression
  • A RussiaGate Dictionary: Lexicon for the New Cold War
  • A RussiaGate Bestiary: Principal actors and related extras in the 2016 election scandal
  • The Russian Mafia State: How the former USSR has become a sclerotic kleptocracy under the rule of former KGB agent Vladimir Putin, who vowed revenge on the West after his station in Dresden, East Germany was besieged by angry citizens in the weeks surrounding the fall of the Berlin Wall.
Read more