Newsflash: Technology can't take the politics out of politics

It's part of a broader misconception on the part of the techno-utopian set that the purpose of tech innovation is to replace human beings rather than augment us. And we are being replaced now at a greater rate than ever before, as a result of the rise of machine learning.

Underlying that assumption is the dark, Orwellian belief that humanity is, at its core, hopeless and irrational, and must be controlled by some outside force: typically a small cadre of wise (and very rich!) men (...usually white) who are unshakably convinced they know more about humanity than the rest of us. Or perhaps a single, highly authoritarian entity... not like anyone we know who is currently running for el presidente...

To wit, another presidential candidate you probably haven't heard of: Zoltan Istvan of the Transhumanist Party. I'mma let him speak for himself:


But the Anti-Enlightenment period was the greatest philosophical era OF ALL TIME!:

This has been a fantasy since the Enlightenment, but despite centuries of philosophical thought and technical progress, is there any sense that we are actually getting closer to this point rather than further away? Post-modernism pretty much destroyed the idea of objectivity. Decisions may be “rational,” but they are unlikely ever to be unbiased, even when made by the machines we create. Almost tautologically, we cannot create an algorithm that escapes human fallibility, because we are the ones creating it. And though machine learning relies less (or not at all) on hand-written rules, it still must be trained on a corpus that comes from somewhere, and “somewhere” always has a point of view that skews the data. There is bias implicit even in what we choose to measure, and in the basic epistemology of what we consider information. As a philosopher might put it, there is bias “all the way down,” and technology is unlikely to simply escape it.
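To make the corpus point concrete, here is a deliberately trivial sketch (my illustration, with invented numbers, not anything Istvan proposed). The “model” below just memorizes the majority label of whatever sample it is shown, so any skew in how the corpus was collected comes straight back out as the model's answer. Real learners are vastly more sophisticated, but the dynamic is the same.

```python
import random

random.seed(0)

# True population: an even 50/50 split between two viewpoints.
population = ["A"] * 500 + ["B"] * 500

# The corpus we actually collect over-samples viewpoint "A" --
# say, because it was scraped from a source that favors "A".
# "B" items only survive collection 25% of the time.
corpus = [x for x in population if x == "A" or random.random() < 0.25]

# "Training": the model simply learns the majority label it saw.
learned_answer = max(set(corpus), key=corpus.count)

print(f"corpus: {corpus.count('A')} A vs. {corpus.count('B')} B")
print(f"model's answer: {learned_answer}")  # "A" -- an artifact of sampling, not of the world
```

No amount of cleverness downstream of that `corpus` variable can recover the 50/50 reality the collection process threw away.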

Likewise, the classical Utilitarian view that our goal should be “maximizing the greater good” has been shown to be problematic in a number of ways (most notably, when pushed to its logical extreme it can justify genocide, or hand disproportionate power to Utility Monsters who “get more enjoyment” out of resources than everyone else).
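The Utility Monster arithmetic is worth spelling out, because it is so simple (again, a toy sketch of mine, with made-up payoffs): once one agent converts resources into utility far more efficiently than everyone else, a strict “maximize total utility” rule allocates everything to that agent.

```python
RESOURCES = 10

def total_utility(monster_share: int) -> int:
    # Made-up payoffs: the monster gets 100 utils per unit of resource;
    # the rest of society combined gets 1 util per unit.
    return 100 * monster_share + 1 * (RESOURCES - monster_share)

# The strictly utilitarian "optimum": give the monster everything.
best_share = max(range(RESOURCES + 1), key=total_utility)
print(best_share)  # 10 -- everyone else gets nothing, yet total utility is maximized
```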

But it’s really the idea that “every decision we make” will be outsourced to machine learning that is a bridge too far. Human beings like making decisions. In some sense, it’s a good deal of what makes us human. I think Mr. Istvan may be overestimating both how willing human society is to hand over free will and how quickly it would do so. Of course, that says nothing about whether some powerful autocratic force (or an intelligent agent itself) will amass the power to unilaterally force humanity to submit to complete control of its decision-making (in fact, this would be a simple matter once we’ve ceded that authority). Though I wouldn’t necessarily consider that a fun future.

