
Cyberbullying involves the use of digital technologies, like social media, texting, and websites, to harass, intimidate, or embarrass individuals. Unlike traditional bullying, its digital nature allows for anonymity and a wider audience. Cyberbullies employ various tactics such as sending threatening messages, spreading rumors online, posting sensitive or derogatory information, or impersonating someone to damage their reputation — on up to more sinister and dangerous actions like doxxing.

Geopolitical impact of cyberbullying

In recent years, cyberbullying has transcended personal boundaries and infiltrated the realm of geopolitics. Nation-states or politically motivated groups have started using cyberbullying tactics to intimidate dissidents, manipulate public opinion, or disrupt political processes in other countries. Examples include spreading disinformation, launching smear campaigns against political figures, or using bots to amplify divisive content. This form of cyberbullying can have significant consequences, destabilizing societies and influencing elections.

Recognizing cyberbullying

Identifying cyberbullying involves looking for signs of digital harassment. This can include receiving repeated, unsolicited, and aggressive communications, noticing fake profiles spreading misinformation about an individual, or observing coordinated attacks against a person or group. In geopolitics, recognizing cyberbullying might involve identifying patterns of disinformation, noting unusual social media activity around sensitive political topics, or detecting state-sponsored troll accounts.

Responding to cyberbullying

The response to cyberbullying varies based on the context and severity. For individuals, it involves:

  1. Documentation: Keep records of all bullying messages or posts.
  2. Non-engagement: Avoid responding to the bully, as engagement often escalates the situation.
  3. Reporting: Report the behavior to the platform where it occurred and, if necessary, to law enforcement.
  4. Seeking Support: Reach out to friends, family, or professionals for emotional support.

For geopolitical cyberbullying, responses are more complex and involve:

  1. Public Awareness: Educate the public about the signs of state-sponsored cyberbullying and disinformation.
  2. Policy and Diplomacy: Governments can implement policies to counteract foreign cyberbullying and engage in diplomatic efforts to address these issues internationally.
  3. Cybersecurity Measures: Strengthening cybersecurity infrastructures to prevent and respond to cyberbullying at a state level.

Cyberbullying, in its personal and geopolitical forms, represents a significant challenge in the digital age. Understanding its nature, recognizing its signs, and knowing how to respond are crucial steps in mitigating its impact. For individuals, it means being vigilant online and knowing when to seek help. In the geopolitical arena, it requires a coordinated effort from governments, tech companies, and the public to defend against these insidious forms of digital aggression. By taking these steps, societies can work towards a safer, more respectful digital world.


Shitposting, a term that has seeped into the mainstream of internet culture, is often characterized by the act of posting deliberately provocative, off-topic, or nonsensical content in online communities and on social media. The somewhat vulgar term encapsulates a spectrum of online behavior ranging from harmless, humorous banter to malicious, divisive content.

Typically, a shit-post is defined by its lack of substantive content; its primary goal is to elicit attention and reactions — whether amusement, confusion, or irritation — from its intended audience. Closely related to trolling, shitposting is one entry in a broader pantheon of bad faith behavior online.

Shit-poster motivations

The demographic engaging in shit-posting is diverse, cutting across various age groups, social strata, and political affiliations. However, it’s particularly prevalent among younger internet users who are well-versed in meme culture and online vernacular. The motivations for shit-posting can be as varied as its practitioners.

Some engage in it for humor and entertainment, seeing it as a form of digital performance art. Others may use it as a tool for social commentary or satire, while a more nefarious subset might employ it to spread disinformation and misinformation, sow discord, and/or harass individuals or groups.

Online trolls shitposting on the internet, by Midjourney

Context in US politics

In the realm of U.S. politics, shit-posting has assumed a significant role in recent elections, especially on platforms like Twitter / X, Reddit, and Facebook. Politicians, activists, and politically engaged individuals often use this tactic to galvanize supporters, mock opponents, or shape public perception. It’s not uncommon to see political shit-posts that are laden with irony, exaggeration, or out-of-context information, designed to inflame passions or reinforce existing biases — or exploit them.

Recognition and response

Recognizing shit-posting involves a discerning eye. Key indicators include the use of hyperbole, irony, non-sequiturs, and content that seems outlandishly out of place or context. The tone is often mocking or sarcastic. Visual cues, such as memes or exaggerated images, are common.

Responding to shit-posting is a nuanced affair. Engaging with it can sometimes amplify the message, which might be the poster’s intention. A measured approach is to assess the intent behind the post. If it’s harmless humor, it might warrant a light-hearted response or none at all.

For posts that are disinformation or border on misinformation or toxicity, countering with factual information, reporting the content, or choosing not to engage are viable strategies. The key is not to feed into the cycle of provocation and reaction that shit-posting often seeks to perpetuate.

Shitposting troll farms lurk in the shadows, beaming disinformation across the land — by Midjourney

Fighting back

Shit-posting, in its many forms, is a complex phenomenon in the digital age. It straddles the line between being a form of modern-day satire and a tool for misinformation, propaganda, and/or cyberbullying. As digital communication continues to evolve, understanding the nuances of shit-posting – its forms, motivations, and impacts – becomes crucial, particularly in politically charged environments. Navigating this landscape requires a balanced approach, blending awareness, discernment, and thoughtful engagement.

This overview provides a basic understanding of shit-posting, but the landscape is ever-changing, with new forms and norms continually emerging. The ongoing evolution of online communication norms, including phenomena like shit-posting, is particularly fascinating and significant in the broader context of digital culture and political discourse.


Science denialism has a complex and multifaceted history, notably marked by a significant event in 1953 that set a precedent for the tactics of disinformation widely observed in various spheres today, including politics.

The 1953 meeting and the birth of the disinformation playbook

The origins of modern science denial can be traced back to a pivotal meeting in December 1953, involving the heads of the four largest American tobacco companies. This meeting was a response to emerging scientific research linking smoking to lung cancer — a serious existential threat to their business model.

Concerned about the potential impact on their business, these industry leaders collaborated with a public relations firm, Hill & Knowlton, to craft a strategy. This strategy was designed not only to dispute the growing evidence about the health risks of smoking, but also to manipulate public perception by creating doubt about the science itself. They created the Tobacco Industry Research Committee (TIRC) as an organization to cast doubt on the established science, and prevent the public from knowing about the lethal dangers of smoking.

And it worked — for over 40 years. The public did not reach a consensus on the lethality and addictiveness of nicotine until well into the 1990s, when the jig was finally up: under the Tobacco Master Settlement Agreement (MSA) of 1998, Big Tobacco agreed to pay a record-breaking $200 billion settlement for its four decades of mercilessly lying to the American people.

smoking and the disinformation campaign of Big Tobacco leading to science denialism, by Midjourney

Strategies of the disinformation playbook

This approach laid the groundwork for what is often referred to as the “disinformation playbook.” The key elements of this playbook include creating doubt about scientific consensus, funding research that could contradict or cloud scientific understanding, using think tanks or other organizations to promote these alternative narratives, and influencing media and public opinion to maintain policy and regulatory environments favorable to their interests — whether profit, power, or both.

Over the next 7 decades — up to the present day — this disinformation playbook has been used by powerful special interests to cast doubt, despite scientific consensus, on acid rain, depletion of the ozone layer, the viability of Ronald Reagan’s Strategic Defense Initiative (SDI), and perhaps most notably: the man-made causes of climate change.

Adoption and adaptation in various industries

The tobacco industry’s tactics were alarmingly successful for decades, delaying effective regulation and public awareness of smoking’s health risks. These strategies were later adopted and adapted by various industries and groups facing similar scientific challenges to their products or ideologies. For instance, the fossil fuel industry used similar tactics to cast doubt on global warming — leading to the phenomenon of climate change denialism. Chemical manufacturers have disputed science on the harmful effects of certain chemicals like DDT and BPA.

What began as a PR exercise by Big Tobacco to preserve their fantastic profits once science discovered the deleterious health effects of smoking eventually evolved into a strategy of fomenting science denialism more broadly. Why discredit one single finding of the scientific community when you could cast doubt on the entire process of science itself — as a way of future-proofing any government regulation that might curtail your business interests?

Science denial in modern politics

In recent years, the tactics of science denial have become increasingly prevalent in politics. Political actors, often influenced by corporate interests or ideological agendas, have employed these strategies to challenge scientific findings that are politically inconvenient — despite strong and often overwhelming evidence. This is evident in manufactured “debates” on climate change, vaccine safety, and COVID-19, where scientific consensus is often contested not based on new scientific evidence but through disinformation strategies aimed at sowing doubt and confusion.

The role of digital media and politicization

The rise of social media has accelerated the spread of science denial. The digital landscape allows for rapid dissemination of misinformation and the formation of echo chambers, where groups can reinforce shared beliefs or skepticism, often insulated from corrective or opposing information. Additionally, the politicization of science, where scientific findings are viewed through the lens of political allegiance rather than objective evidence, has further entrenched science denial in modern discourse — as just one aspect of the seeming politicization of absolutely everything in modern life and culture.

Strategies for combatting science denial

The ongoing impact of science denial is profound. It undermines public understanding of science, hampers informed decision-making, and delays action on critical issues like climate change, public health, and environmental protection. The spread of misinformation about vaccines, for instance, has led to a decrease in vaccination rates and a resurgence of diseases like measles.

scientific literacy, by Midjourney

To combat science denial, experts suggest several strategies. Promoting scientific literacy and critical thinking skills among the general public is crucial. This involves not just understanding scientific facts, but also developing an understanding of the scientific method and how scientific knowledge is developed and validated. Engaging in open, transparent communication about science, including the discussion of uncertainties and limitations of current knowledge, can also help build public trust in science.

Science denial, rooted in the strategies developed by the tobacco industry in the 1950s, has evolved into a significant challenge in contemporary society, impacting not just public health and environmental policy but also the very nature of public discourse and trust in science. Addressing this issue requires a multifaceted approach, including education, transparent communication, and collaborative efforts to uphold the integrity of scientific information.


Sockpuppets are fake social media accounts used by trolls for deceptive and covert actions, avoiding culpability for abuse, aggression, death threats, doxxing, and other criminal acts against targets.

In the digital age, the battleground for political influence has extended beyond traditional media to the vast, interconnected realm of social media. Central to this new frontier are “sockpuppet” accounts – fake online personas created for deceptive purposes. These shadowy figures have become tools in the hands of authoritarian regimes, perhaps most notably Russia, to manipulate public opinion and infiltrate the political systems of countries like the UK, Ukraine, and the US.

What are sockpuppet accounts?

A sockpuppet account is a fake online identity used for purposes of deception. Unlike simple trolls or spam accounts, sockpuppets are more sophisticated. They mimic real users, often stealing photos and personal data to appear authentic. These accounts engage in activities ranging from posting comments to spreading disinformation, all designed to manipulate public opinion.

The strategic use of sockpuppets

Sockpuppet accounts are a cog in the larger machinery of cyber warfare. They play a critical role in shaping narratives and influencing public discourse. In countries like Russia, where the state exerts considerable control over media, these accounts are often state-sponsored or affiliated with groups that align with government interests.

Case studies: Russia’s global reach

  1. The United Kingdom: Investigations have revealed Russian interference in the Brexit referendum. Sockpuppet accounts spread divisive content to influence public opinion and exacerbate social tensions. Their goal was to weaken the European Union by supporting the UK’s departure.
  2. Ukraine: Russia’s geopolitical interests in Ukraine have been furthered through a barrage of sockpuppet accounts. These accounts disseminate pro-Russian propaganda and misinformation to destabilize Ukraine’s political landscape, particularly during times of crisis, elections, or — most notably — Russia’s current war of aggression against its neighbor.
  3. The United States: The 2016 US Presidential elections saw an unprecedented level of interference. Russian sockpuppets spread divisive content, fake news, and even organized real-life events, creating an environment of distrust and chaos. Their goal was to sow discord and undermine the democratic process.

Vladimir Putin with his sheep, by Midjourney

How sockpuppets operate

Sockpuppets often work in networks, creating an echo chamber effect. They amplify messages, create false trends, and give the illusion of widespread support for a particular viewpoint. Advanced tactics include deepfakes and AI-generated text, making it increasingly difficult to distinguish between real and fake content.

Detection and countermeasures

Detecting sockpuppets is challenging due to their evolving sophistication. Social media platforms are employing AI-based algorithms to identify and remove these accounts. However, the arms race between detection methods and evasion techniques continues. Governments and independent watchdogs also play a crucial role in exposing such operations.
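One flavor of the detection heuristics described above is flagging pairs of accounts whose posted content overlaps far more than organic behavior would predict — a crude proxy for coordination. Here is a minimal, purely illustrative sketch (the account names, links, and threshold are all invented; real platform detection systems are vastly more sophisticated):

```python
# Toy coordination-detection heuristic: flag account pairs whose sets of
# shared links overlap heavily (Jaccard similarity). Illustrative only.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of posts: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(accounts: dict, threshold: float = 0.6):
    """Return pairs of accounts whose post sets overlap above threshold."""
    names = sorted(accounts)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if jaccard(accounts[x], accounts[y]) >= threshold
    ]

# Hypothetical accounts and the links they have posted.
accounts = {
    "patriot_eagle_1776": {"link_a", "link_b", "link_c", "link_d"},
    "freedom_lover_44":   {"link_a", "link_b", "link_c", "link_e"},
    "cat_photos_daily":   {"cat_1", "cat_2", "link_a"},
}

print(flag_coordinated(accounts))
# The first two accounts share 3 of 5 distinct links (Jaccard 0.6), so
# that pair is flagged; the cat account overlaps too little to match.
```

Real systems combine many such weak signals — posting-time synchronization, shared infrastructure, account-creation batches — precisely because any single heuristic is easy for operators to evade.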

Implications for democracy

The use of sockpuppet accounts by authoritarian regimes like Russia poses a significant threat to democratic processes. By influencing public opinion and political outcomes in other countries, they undermine the very essence of democracy – the informed consent of the governed. This digital interference erodes trust in democratic institutions and fuels political polarization.

As we continue to navigate the complex landscape of digital information, the challenge posed by sockpuppet accounts remains significant. Awareness and vigilance are key. Social media platforms, governments, and individuals must collaborate to safeguard the integrity of our political systems. As citizens, staying informed and critically evaluating online information is our first line of defense against this invisible but potent threat.


Deep fakes, a term derived from “deep learning” (a subset of AI) and “fake,” refer to highly realistic, AI-generated digital forgeries of real human beings. These sophisticated imitations can be videos, images, or audio clips where the person appears to say or do things they never actually did.

The core technology behind deep fakes is a machine learning architecture known as a generative adversarial network (GAN). Two competing AI systems are trained in tandem: a generator that produces the fake content, and a discriminator that attempts to detect the forgeries. Over time, as the discriminator identifies flaws, the generator learns from these mistakes, leading to increasingly convincing fakes.
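The adversarial loop described above can be sketched in miniature. The toy below (all numbers and the one-dimensional "data" are made up for illustration) pits a one-parameter generator against a logistic-regression discriminator; production deep-fake systems use deep neural networks over images or audio, but the tug-of-war is the same:

```python
# Minimal 1-D sketch of adversarial (GAN-style) training. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 4.0        # "real" data: samples from N(4, 0.5)
mu = 0.0               # generator parameter: a fake is z + mu, z ~ N(0, 0.5)
w, b = 0.0, 0.0        # discriminator: D(x) = sigmoid(w*x + b)
lr, BATCH = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, BATCH)
    fake = rng.normal(0.0, 0.5, BATCH) + mu

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient descent on the standard cross-entropy GAN loss).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake).mean()
    b += lr * ((1 - d_real) - d_fake).mean()

    # Generator step: nudge mu so the discriminator scores fakes as real.
    d_fake = sigmoid(w * fake + b)
    mu += lr * ((1 - d_fake) * w).mean()

print(f"generator mean after training: {mu:.2f} (real data mean is {REAL_MEAN})")
```

Run long enough, the generator's parameter drifts toward the real data's mean as each side exploits the other's mistakes — the same dynamic that makes each generation of deep fakes harder to detect than the last.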

Deep fakes in politics

As the technology has become more accessible, it’s been used for various purposes, not all of them benign. In the political realm, deep fakes have the potential for significant impact. They’ve been used to create false narratives or manipulate existing footage, making it appear as though a public figure has said or done something controversial or scandalous. This can be particularly damaging in democratic societies, where public opinion heavily influences political outcomes. Conversely, in autocracies, deep fakes can be a tool for propaganda or to discredit opposition figures.

How to identify deep fakes

Identifying deep fakes can be challenging, but there are signs to look out for:

  1. Facial discrepancies: Imperfections in the face-swapping process can result in blurred or fuzzy areas, especially where the face meets the neck and hair. Look for any anomalies in facial expressions or movements that don’t seem natural.
  2. Inconsistent lighting and shadows: AI can struggle to replicate the way light interacts with physical objects. If the lighting or shadows on the face don’t match the surroundings, it could be a sign of manipulation.
  3. Audiovisual mismatches: Often, the audio does not perfectly sync with the video in a deep fake. Watch for delays or mismatches between spoken words and lip movements.
  4. Unusual blinking and breathing patterns: AI can struggle to accurately mimic natural blinking and breathing, leading to unnatural patterns.
  5. Contextual anomalies: Sometimes, the content of the video or the actions of the person can be a giveaway. If it seems out of character or contextually odd, it could be fake.

In democratic societies, the misuse of deep fakes can erode public trust in media, manipulate electoral processes, and increase political polarization. Fake videos can quickly spread disinformation and misinformation, influencing public opinion and voting behavior. Moreover, they can be used to discredit political opponents with false accusations or fabricated scandals.

In autocracies, deep fakes can be a potent tool for state propaganda. Governments can use them to create a false image of stability, prosperity, or unity, or conversely, to produce disinformation campaigns against perceived enemies, both foreign and domestic. This can lead to the suppression of dissent and the manipulation of public perception to support the regime.

Deep fakes with Donald Trump, by Midjourney

Response to deep fakes

The response to the threat posed by deep fakes has been multifaceted. Social media platforms and news organizations are increasingly using AI-based tools to detect and flag deep fakes. There’s also a growing emphasis on digital literacy, teaching the public to critically evaluate the content they consume.

Legal frameworks are evolving to address the malicious use of deep fakes. Some countries are considering legislation that would criminalize the creation and distribution of harmful deep fakes, especially those targeting individuals or designed to interfere in elections.

While deep fakes represent a remarkable technological advancement, they also pose a significant threat to the integrity of information and democratic processes. As this technology evolves, so must our ability to detect and respond to these forgeries. It’s crucial for both individuals and institutions to stay informed and vigilant against the potential abuses of deep fakes, particularly in the political domain. As we continue to navigate the digital age, the balance between leveraging AI for innovation and safeguarding against its misuse remains a key challenge.


ParadoxBot is an adorable chatbot who will cheerfully inform you about the Dark Arts

Sure, you could use the site search. Or, you could have a bot — try having a conversation with my blog via the following AI chatbot, ParadoxBot.

Ask it about conspiracy theories, or narcissism, or cults, or authoritarianism, or fascism, or disinformation — to name a few. You can also ask it about things like dark money, economics, history, and many topics at the intersection of political psychology.

It doesn’t index what’s on Foundations (yet), but it has ingested this site and you can essentially chat with the site itself via the ChatGPT-like interface below. Enjoy! And if you love it or hate it, find me on BlueSky (as @doctorparadox) or Mastodon and let me know your thoughts.

Tips for using ParadoxBot

  • Follow general good practice regarding prompt engineering.
  • If you don’t get an answer right away, try rephrasing your question. Even omitting or adding one word sometimes produces good results.
  • Try both broad and specific types of queries.
  • Dig deeper into any areas the bot turns up that sound interesting.

These days the GOP is just 3 cults in a trenchcoat — nevertheless, it’s helpful to understand some of the ideologies and extremist beliefs that folks on the right engage with. Understanding the psychology can help us make predictions about actions, reactions, and other developments in the political landscape.

What is an ideology?

An ideology is a comprehensive set of beliefs, ideas, and values that shape the way individuals or groups perceive the world and interact within it. It serves as a lens through which people interpret social, political, and economic phenomena, guiding their actions and decisions. Ideologies can be as broad as political doctrines like liberalism, conservatism, or socialism, or as specific as belief systems within a particular culture or organization.

Ideologies often manifest in various forms, such as political platforms, religious doctrines, or social movements. They can be explicit, where the principles are clearly outlined, or implicit, subtly influencing behavior without overt expression. Ideologies are not static; they evolve over time, adapting to new information, social changes, or shifts in power dynamics.

In the realm of politics and governance, ideologies play a crucial role. They inform policy decisions, shape public opinion, and influence the behavior of political actors. They can also be divisive, leading to conflict and exclusion of those who do not conform. In the media, ideologies can affect the framing of news and the dissemination of information, subtly shaping public perception.

Right-wing ideologies


phobia indoctrination, illustrated

Phobia indoctrination is one of the principal ways a charismatic leader lulls potential followers into his thrall: by putting them into a state of perpetual fear and anxiety. Such leaders know, either instinctively or through training (or both), that people can easily be induced into a prolonged state of confusion, and that many people in states of confusion act quite irrationally. Abusers, cult leaders, and other controllers use demagoguery and other tricks to hide in plain sight and continue to accrue power while passing themselves off as harmless or extremely patriotic.

These chaos agents use emotional manipulation and other tactics of emotional predators as a tool of control. They whip followers up into a fear frenzy frequently enough to instill a set of phobia-like instinctual reactions to chosen stimuli. In addition to stoking fears of the enemies at the gates, they also inculcate irrational fears of the consequences of questioning their authority — invoking authoritarianism. Any doubts expressed about the leadership or its doctrine are subject to terrifying negative results. Cults use this formula to wield undue influence over followers, and prevent them from questioning or leaving the group.

Phobia indoctrination is a tool of cults

As part of a larger overall program of brainwashing or mind control, cults and destructive organizations use imaginary extremes (going to hell, being possessed by demons, failing miserably at life, race war, Leftist apocalypse, etc.) to shock followers into refusing to examine any evidence whatsoever. A form of unethical hypnosis, phobia indoctrination can now be carried out on a mass scale thanks to the internet and our massive media apparatus. Be sure to be on the lookout for any cult warning signs in groups and messaging all around you.

Sociopaths and other emotional predators are taking ample advantage of their head start in time and distance over the slow pace of justice. The wielding of fear as a cudgel in American politics has reached a fever pitch: anti-Critical Race Theory hysteria, anti-vaxxers, anti-government types, anti-science crusaders, and Lost Cause-revival zombie MAGA footsoldiers screeching about the “freedom!!!” they wish the government to provide them for persecuting their enemies — and these social horrors are merely the tip of the climate-changing iceberg.


Phobia indoctrination tactics

Strategies of phobia indoctrination include Repetition and Conditioning, where fears are built through constant exposure; Misinformation and Propaganda, using false information to paint something as dangerous; Utilizing Existing Fears, exaggerating known fears or anxieties; and Social Pressure and Group Dynamics, leveraging social influences to convince others that irrational fears are common.

Other tactics include Authority and Expert Manipulation, where false credentials are used to lend legitimacy; Emotional Manipulation, appealing directly to emotions; Isolation and Control, where a person’s environment is manipulated; and Media Manipulation, using media to provoke fear.


Related to phobia indoctrination:

Cult Dictionary ↗

We had better get familiar with the lexicon and vocabulary of the coming era, so we can fight the creeping scourge of thought control roiling the land.

Jim Jones toasting his cult members with a cup of cyanide, by Midjourney

Disinformation Dictionary ↗

Disinformation is meant to confuse, throw off, distract, polarize, and otherwise create conflict within and between target populations.

Disinformation, by Midjourney

Cult Warning Signs: How to recognize cultish groups ↗

Recognizing cult warning signs can be vital in identifying and understanding the risk before getting involved with a group who may not have your best interests in mind.

cult warning signs, by Midjourney

A legal statute requiring persons lobbying on behalf of a foreign government or other foreign entity to register with the U.S. government.

Folks like Mike Flynn and Jared Kushner ran afoul of this law during their time in the US government.

History of FARA

The Foreign Agents Registration Act, often abbreviated as FARA, is a United States law passed in 1938. The purpose of FARA is to ensure that the U.S. government and the people of the United States are informed about the source of information (propaganda) and the identity of people trying to influence U.S. public opinion, policy, and laws on behalf of foreign principals.

The Act requires every person who acts as an agent of foreign principals in a political or quasi-political capacity to make periodic public disclosure of their relationship with the foreign principal. This includes activities, receipts, and disbursements in support of those activities. Disclosure of the required information facilitates evaluation by the government and the American people of the statements and activities of such persons.

The Act is administered and enforced by the FARA Unit of the National Security Division (NSD) of the United States Department of Justice.

FARA does not restrict publishing of materials or viewpoints; rather, it requires agents representing the interests of foreign powers to disclose their relationship with the foreign government and information about related activities and finances.

FARA was passed in 1938 in response to concerns about German propaganda agents in the United States in the years leading up to World War II, but its usage has evolved over time. The Act has been amended several times, most significantly in 1966, when its scope was narrowed to focus more specifically on agents working in a political context.

Non-compliance with FARA has become a more prominent issue in recent times, with several high-profile investigations and prosecutions related to the Act. The Act received significant media attention during and after the 2016 U.S. Presidential election, when it was invoked in investigations related to foreign interference in the election — particularly Russian election interference.

More on FARA

Learn more about FARA from the Department of Justice.


Some of us have been boning up on this topic for about 6 years already, while others are just tuning in now based on the horrors of recent events. It can be overwhelming to come in cold, so here — don’t go it alone! Take this:

Putin’s war against the west

President Biden “declassified” an intelligence analysis many of us had arrived at some time ago: Russian president Vladimir Putin is a cruel revanchist leader who will stop at nothing to claw out a larger legacy before he dies. His goal is nothing less than reconstituting the former Soviet Union and restoring the “glory” of the Russian empire of yesteryear. And for some reason he thinks the world community is going to let him get away with his delusional fever dreams of conquest — as if Mongol-style domination were still de rigueur.

The attacks on the 2016 election and on the American Capitol in 2021 are related — both are Russian hybrid warfare operations. Russia also is the cold beating heart of the right-wing authoritarianism movement around the world, via financial, political, psychological, economic, and other means of government and regulatory capture.

Putin has hated democracy for a long time — since before the Berlin Wall fell, news he took hard as a young KGB agent stationed in Dresden. Now, he has many fifth column confederates aiding and abetting him from within the United States — a number of them brazenly and openly. It is getting harder and harder for those treasonous types to “hide out” in the folds of disinformation, misinformation, and plausible deniability. The play is being called — and everyone will need to decide whether they’re for democracy or authoritarianism.

Further reading:

Media Resources

Twitter Lists


How to detect fake from real

It is going to become increasingly difficult to discern fact from fiction in a world that seems to have flipped, almost overnight, from The Enlightenment to dark disinformation. From artificial intelligence to vast propaganda machines, from deep fakes to fake lives — it’s going to require more of us to detect what’s real.

Already we can’t rely on old cues, signposts, and tropes anymore. We’re less credulous about credentials, and trust isn’t automatic based on caste, title, or familiar status markers.

Go slow and look for mimics

Here’s one key to more accurate reality detection: take more time to spot the fake. Don’t judge too quickly, because it can take time to weed out the fakesters and the hucksters — some are decent mimics and can fool people who are in a hurry, not paying much attention, or drawn to some irrelevant quality of the ersatz knockoff, forming an affinity based on something else entirely. Some drink the Kool-Aid for various reasons.

Clues of fraud

Those who cling absurdly to abstract symbols are often fakes, and in general, anyone who seems to be trying just a little too hard might be one. Then, of course, there are the full-on zealots and religious nutbags. These theocrats are faux compassionate Jesus-lovers. What better cloak than the robes of a religious man (or, less frequently, woman)? It’s the perfect disguise.

No wonder so many child abusers hide out in churches of all kinds — famously the Catholic Church, and more recently (though not surprisingly) the Southern Baptist Convention. No one will ever suspect them, or want to confront them if they do. Plus, they can absurdly try to pin the blame on Democrats, again and again, without a shred of evidence.


The concept of the Goldilocks Zone reminds us that there is typically a viable range, above or below which things fail. This stands in contrast to the idea of unbounded growth, in which one or more key performance indicators are expected to grow forever, without bounds. Think: up and to the right.

Commonly used as a metaphor, the Goldilocks Zone has its origins in planetary science, where it denotes the habitable zone of a star system — the band in which a planet is neither too hot nor too cold to sustain liquid water. Without liquid water, life on the only living planet we know — ours — would not exist. The habitable zone is therefore a good place to look for potential life on other planets, and “Goldilocks Zone” has also come to mean “the perfect conditions” for some ideal state or goal.

“Going viral” isn’t always desirable

We crave it in our social media feeds, but avoid it like the plague when it is the plague — viral contagion both giveth and taketh away. In America, as of this writing, we have recently had both.

Whereas the Goldilocks Zone presupposes limits at both ends, unbounded growth expects no limits to ever be encountered from the start. In a finite world inside a finite universe, it is simply unlikely to be true with much regularity.

You could say that Goldilocks Zones know a lot about establishing boundaries, while infinite-growth regimes tend toward extremism. Beyond the pandemic, cancer is another infamous illustration of the dangers of growth without bounds. Arguably, hypercapitalism belongs on that list.

The Goldilocks Zone is a moderate

Goldilocks Zones are akin to the center of the Bell curve; the boundaries of the margin of error; the middle path. James Madison would have been a fan of the Goldilocks Zone — it would have smelled to him like his own concept of the moderating force of many factions preventing too much extremism from taking root in governance, and reminded him of the insights of the Marquis de Condorcet.

“Moderation in all things” was made famous first by the Greeks and later the Romans. It is a kind of ancient wisdom that turns out to have very old roots indeed — reaching back even to the early days of the universe.


Surveillance Capitalism Dictionary

They were inspired by hippies, but Orwell would fear them. The giants of Silicon Valley started out trying to outsmart The Man, and in the process became him. And so surveillance capitalism was born. Such is the story of corruption since time immemorial.

This dictionary of surveillance capitalism is a work in progress! Check back for further updates!

algorithm: A set of instructions that programmers give to computers to run software and make decisions.
artificial intelligence (AI)
Bayes' Theorem
bioinformatics: A technical and computational subfield of genetics, concerned with the information and data encoded by our genes and genetic codes.
child machine: Alan Turing's concept for developing an "adult brain" by creating a child brain and giving it an education.
CHINOOK: Checkers program that in 1994 became the first AI to win an official world championship in a game of skill.
click-wrap
collateral behavioral data
common carrier: A hybrid public-interest standard served by a corporate promise to meet a high bar of neutrality — a historical precedent set by the early Bell system monopoly, and an issue of public-private strife today with the advent of the internet.
contracts of adhesion
cookies: Small packets of data deposited by the vast majority of websites you visit, stored in the browser as a way to extract intelligence about users and visitors.
corpus: In natural language processing, a compendium of texts used to "train" an AI to recognize patterns in new texts.
decision trees
Deep Blue: Chess program that beat world chess champion Garry Kasparov in 1997.
deep learning
evolutionary algorithms
Facebook
facial recognition
Flash Crash of 2010: Sudden drop of over $1 trillion in the E-Mini S&P 500 futures contract market via a runaway feedback loop within a set of algorithmic traders.
FLOPS: Floating-point operations per second.
Free Basics: Facebook's plan, via Internet.org, to provide limited free internet services in rural India (and elsewhere in the developing world). Controversy centers on the "limited" nature of the offering, which gives Facebook the power to select or reject individual websites and resources for inclusion.
genetic algorithms
GOFAI: "Good Old-Fashioned Artificial Intelligence."
HLMI: Human-level machine intelligence, defined as the ability to carry out most human professions at least as well as a typical human.
interoperability
Kolmogorov complexity
language translation
linear regression
machine learning
Markov chains
monopoly
NAFTA
natural language processing (NLP): A technology for processing and analyzing human language.
neofeudalism
net neutrality: Legal and regulatory concept holding that internet service providers must act as common carriers, allowing businesses and citizens to interoperate with the physical infrastructure of the communications network equally, without being subject to biased or exclusionary activities on the part of the network.
neural networks
netizens
"Online Eraser" law (CA)
patrimonial capitalism
Pegasus
phonemes
predatory lending
predictive analytics
privacy
private eminent domain
probability
prosody
qualia
r > g: Piketty's insight that the rate of return on capital (r) tends to exceed the rate of economic growth (g), concentrating wealth over time.
randomness
random walk
recommender systems
recursion
recursive learning
right to be forgotten: When it became EU law in 2014, this groundbreaking legislation gave citizens the power to demand that search engines remove pointers to content about them. It was part of the growing data-rights movement in Europe that later led to GDPR.
SciKit
simulation
smart speakers
speech recognition
spyware
statistical modeling
strong vs. weak AI: "Weak AI" refers to algorithms designed to master a specific narrow domain of knowledge or problem-solving, as opposed to achieving a more general intelligence ("strong AI").
supermajority
supervised learning
surplus data
TensorFlow
Tianhe-2: Chinese supercomputer that was the world's fastest until it was surpassed in June 2016 by the also-Chinese Sunway TaihuLight.
Terms of Service
Twitter
unsupervised learning
Watson: IBM AI that defeated the two all-time greatest human Jeopardy! champions in 2011.
WhatsApp
WTO
Zuccotti Park

There are many things in life you don’t want to rush through, many experiences you wish to linger over. The American cult of efficiency is a kind of over-optimization, an over-fitting of a line that delusionally demands up and to the right every single day, every single quarter, every single time.

The benefits of stopping to smell the flowers have been extolled by sages and philosophers throughout the ages. In all of recorded human history lies some form of the mantra, “haste unto death” — for it is true. We rush headlong off the cliff after all the lemmings ahead of us. We can’t help ourselves — eternal moths to eternal flames.

The slow life

From cuisine to jurisprudence, from behavioral economics to psychological well-being, moving more slowly has numerous well-established benefits. Efficiency should never be the only goal, in any domain or at all times. James Madison would have strongly agreed that “moderation in all things” is the optimal way to approach life, justice, and governing. Influenced by the Marquis de Condorcet, the invention of statistics, and a distaste for extremism in all forms, the Founders were prescient regarding the later theory of the wisdom of crowds. They sought to temper the passions of the crowd via checks and balances in our system of governance.
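That nod to Condorcet is concrete mathematics, not just metaphor. His jury theorem says that if each voter independently reaches the right answer with probability greater than 1/2, the majority verdict grows more reliable as the group grows. A minimal Python check (the 60% individual accuracy figure is purely illustrative):

```python
from math import comb

def majority_accuracy(n, p):
    """Exact probability that a majority of n independent voters,
    each correct with probability p, reaches the right verdict."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Individually mediocre voters (60% accurate) become a reliable
# crowd as the group grows: Madison's moderating factions, in math.
for n in (1, 11, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))
```

The same arithmetic also cuts the other way: if each voter is worse than a coin flip, the crowd converges on the wrong answer, which is one reason tempering the passions of the crowd mattered to the Founders.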

“The arc of the moral universe is long, but it bends toward justice,” said Martin Luther King, Jr. That the provenance of the quote remains unsettled is itself unsettling, like strange fruit swinging in the southern breeze. Yet the barbaric “quick justice” efficiency of slavery, the Confederacy, Jim Crow, superpredator panics, and mowing down unarmed Black men over traffic violations — to name a few — are no examples of fairness. Faster isn’t always better, especially when it comes to justice. It takes time to gather facts, talk to witnesses, piece together the crimes, and document them in an airtight way, brooking no doubt in the mind of a single juror.

More efficiency topics

Areas I’ll be further exploring:

  • Slow thinking — Daniel Kahneman’s behavioral economics and cognition theory about slow and fast thinking systems in the brain, how they physiologically arose, and their implications for bias, decision making, geopolitics, and more.
  • Journey vs. Destination — It’s not just about getting to the same restaurant and eating the same thing. The end doesn’t always justify the means. Traveler vs. Tourist. Go with the flow. Roll with it, baby.
  • An ounce of caution — A stitch of time. He who makes haste makes waste. Don’t count your chickens before they hatch. Be careful!
  • Self-reflection — Thoughtfulness. Rumination. Mindfulness. Presence.
  • Busyness — being too busy speeds up time, not necessarily in a good way. It leads to the unexamined life, a Stoic no-no. Socrates would not approve, dude.
  • Enoughness — Sustainability. Patience. Non-violence. Whole-heartedness.
  • Hierarchy vs. Fairness — Consensus takes a lot longer. Dictators and monarchs are nothing if not efficient.
  • The appeal of fascism — History and ideology of the Nazis and their obsession with efficiency.
  • PR — soundbites. Simple narratives. Tropes, slogans, repetition.
  • Entertainment — intellectual empty calories. Neil Postman. McLuhan.
  • Automation — AI, bots, robotics, threats to labor
  • Walking vs. Transportation
  • The slow food movement
  • Speed reading
  • Speed runs — video games

speak, sistah!

see also: Shoshana Zuboff (author of the seminal work on surveillance capitalism), Don Norman, Dystopia vs. Utopia Book List: A Fight to the Finish, surveillance capitalism dictionary

Some takeaways:

  • surveillance won’t be obvious and overt like in Orwell’s classic totalitarian novel 1984 — it’ll be covert and subtle (“more like a spider’s web”)
  • social networks use persuasion architecture — the same cloying design aesthetic that puts gum at the eye level of children in the grocery aisle

Example:

AI modeling of potential Las Vegas ticket buyers

The machine learning algorithms can classify people into two buckets — “likely to buy tickets to Vegas” and “unlikely to” — based on exposure to lots and lots of data patterns. The problem is that the result is a black box: no one, not even the computer scientists, knows how it works or what exactly it is doing.

So the AI may have discovered that bipolar individuals on the cusp of a manic episode are more susceptible to buying tickets to Vegas — and that is the segment of the population it targets: a vulnerable set of people prone to overspending and gambling addiction. The ethical implications of unleashing this on the world — and of routinely, relentlessly optimizing it — are staggering.
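The two-bucket model described above can be sketched in a few lines of Python. Everything here is invented for illustration: the features, the data, and the tiny logistic-regression model are stand-ins, not anything an ad platform has disclosed. The point is that a classifier learns a "likely/unlikely" split from behavioral signals without ever encoding why those signals predict buying:

```python
import math

# Invented behavioral features per user: (late_night_sessions, gambling_clicks).
# Labels: 1 = bought a Vegas ticket, 0 = did not. All data is made up.
TRAIN = [
    ((8.0, 9.0), 1), ((7.0, 8.0), 1), ((9.0, 7.0), 1), ((6.0, 9.0), 1),
    ((1.0, 2.0), 0), ((2.0, 1.0), 0), ((0.0, 3.0), 0), ((3.0, 0.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression bucket model by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            err = sigmoid(w[0] * x1 + w[1] * x2 + b) - y  # gradient of the log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def bucket(w, b, x1, x2):
    p = sigmoid(w[0] * x1 + w[1] * x2 + b)
    return "likely to buy tickets to Vegas" if p >= 0.5 else "unlikely to"

w, b = train(TRAIN)
print(bucket(w, b, 8.5, 8.0))  # a high-activity user
print(bucket(w, b, 1.0, 1.0))  # a low-activity user
```

Nothing in the learned weights says what late-night activity or gambling clicks are a proxy for; the model happily exploits whatever correlation it finds, which is exactly the unsettling part.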

Profiting from extremism

“You’re never hardcore enough for YouTube” — YouTube serves recommendations that are increasingly polarized and polarizing, because it turns out that preying on your reptilian brain keeps you clicking around in the YouTube hamster wheel.

The amorality of AI — “algorithms don’t care if they’re selling shoes, or politics.” Our social, political, and cultural flows are being organized by these persuasion architectures — organized for profit, not for the collective good, not for the public interest, and no longer subject to our political will. These powerful surveillance capitalism tools run mostly unchecked, with little oversight and few people minding the ethical store beyond, essentially, a cadre of Silicon Valley billionaires.

Intent doesn’t matter — good intentions aren’t enough; structure and business models are what matter. Facebook isn’t a half-trillion-dollar con: its value lies in its highly effective persuasion power, which is deeply troubling in a supposedly democratic society. Mark Zuckerberg may even ultimately mean well (…debatable), but that doesn’t excuse railroading over the numerous, obviously negative externalities of Facebook’s unchecked power — not only in the U.S., but in countries around the world, including highly volatile regions.

Extremism benefits demagogues — oppressive regimes both come to power through and benefit from political extremism, whipping citizens into a frenzy — often against each other as much as against perceived external or internal enemies. Our data and attention are now for sale to the highest-bidding authoritarians and demagogues around the world, enabling them to use AI against us in election after election and PR campaign after PR campaign. We have handed foreign dictators even greater power to influence and persuade us in ways that benefit them at the expense of our own self-interest.
