Effective Altruism and Longtermism are relatively recent (since the late 2000s) twin philosophical movements claiming that we, as a species, ought to prioritize the long-term future of humanity — hundreds, thousands, or millions of years from now — over and above any concern for the actual humans alive today. Largely inspired by utilitarianism, they favor questionable metrics like “lives saved per dollar” in their quest to not just do good, but “do the most good.”
Longtermism is an outgrowth of Effective Altruism (EA), a social movement developed by philosophers Peter Singer and William MacAskill. It emphasizes the moral importance of trying to shape the far future, and adherents argue that the long-term consequences of our actions far outweigh their short-term effects because of the potential of vast numbers of future lives. In other words, future people will outnumber us at such a scale that, by comparison to this imaginary future universe, our current-day lives are not very important at all.
It has numerous and powerful adherents among the Silicon Valley elite, including Trump bromance Elon Musk, tech billionaire Peter Thiel (who spoke at the RNC in 2016), indicted and disgraced crypto trader Sam Bankman-Fried, Twitter and Square founder Jack Dorsey (who is good friends with Elon), OpenAI CEO Sam Altman, Ethereum founder (and Thiel fellow) Vitalik Buterin, Asana co-founder Dustin Moskovitz, and others.
Why longtermism resonates with tech oligarchs
The tech-industrial complex is steeped in the idea of longtermism in part because it aligns so well with so many of the industry’s values:
- technological optimism / techno-utopianism — the belief that technology is the solution to all of humanity’s greatest challenges
- risk-taking mindset — venture capital is famous for its high-risk, high-reward mentality
- Greatness Thinking — unwavering devotion to an Ayn Randian worldview in which only two groups exist: a small group of otherworldly titans, and everyone else
- atomized world — social groups and historical context don’t matter much, because one’s personal individualized contributions are what make real impact on the world
The dubious ethics of effective altruism
Although it positions itself high, high above the heady clouds of moral superiority, EA is yet another in a long line of elaborate excuses for ignoring the urgent problems we actually face, in favor of “reallocating resources” toward some distant, predicted “better” class of people who do not currently exist and will not exist for thousands, millions, or even billions of years. It’s an excuse framework for “billionaires behaving badly” — men who claim to be akin to saints or even gods doing the difficult work of “saving humanity,” but who in reality are navel-gazing into vanity projects and stroking each other’s raging narcissism while completely ignoring large, looming actual dangers in the here and now: climate change, systemic inequality, and geopolitical instability, to name a few.
Meanwhile, the entire ideology rests on the idea that these tech elites have somehow divined a highly accurate picture of the universe billions or even trillions of years from now. We are not good — in fact we are achingly terrible — at making predictions about the future even 5 years out; even 1 year out! Elon Musk famously predicted we’d have unmanned missions to Mars by 2022 and a human crew landing there in 2024. Yet these folks are supremely confident in their ability to calculate extremely distant futures with a high measure of precision. It doesn’t add up. The math doesn’t math.
Variation on Mudsill Theory
As well, one of the net effects of the EA “philosophy” is a conclusion stunningly similar to James Henry Hammond’s Mudsill Theory of 1858: “It is better for the Great Good if I am the one who decides how to spend your money.” In other words, some people are better than others — and those are the people who should make all the decisions and allocate all the funding. Moreover, those Better People are entirely justified in deceiving others out of their money (or, in the case of Hammond’s southern planter class of white supremacists, out of their ability to own property and, of course, their basic human rights) — all under the guise of supposedly benefiting humanity.
Both are variations on “the ends justify the means”: excuse frameworks for behaving aggressively with absurd moral authority over others, borrowing heavily from lifeboat ethics, in which extreme actions are justified by an invented scenario of bare survival.
In this new version of Mudsill Theory, it benefits future humans enormously for Sam Bankman-Fried to live high on the hog in a mansion in Bermuda, and hoover up billions in cash from hoodwinked investors so that he can invest it “better.” He’s just a smarter gambler, you see, and the plebes of today are worthless anyhow compared to the magnificent Future People who will definitely exist (according to the brilliant gamblers’ wisdom of SBF). Those Future People would absolutely want SBF to be pampered today so he can make better decisions with the money that will seed their own pampered existences in a few million years, out there in the stars tending the “light of consciousness.”
Obnoxious grandiosity
The “license to grandiosity” this ethical framework gives to the already grandiose tech oligarchy is made all the more obnoxious by their extreme denial of the obvious problems pressing down upon humanity right now — problems that threaten to extinguish the light of consciousness. Unfortunately, in their calculation, if 6 or 7 billion of us die off because of climate change, it could still be worth it to fuel the rise of the AI system(s) that can take us off planet to colonize other worlds.
Don’t be surprised if we’re about to ignore climate change even harder now.