Document compiled from salon notes, interpretation & additional theory.
The 0x Salon is an experiment in reformulating the salon context as a venue for post-disciplinary exchange.
The purpose of this text (and others to follow) is twofold: to provide some written documentation for the recent 0x Salon discussions on Algorithmic Realism and as an opportunity to further develop concepts arising in conversations at and around salon topics. We aim to capture the spirit of early-stage salon ideation whilst bridging the gap from speculative & generalistic discourse to substantive analysis, critique, synthesis and extension of the concepts in source materials and salon discussions.
This is a public draft / living document - we welcome your comments and critiques!
This is the fifth in a series of articles on Algorithmic Realism. Access the entire collection here.
0xSalon003 (a) & (b) ::: Algorithmic Realism
a) Tuesday 14th April 2020 ::: Praxis Branch ::: Virtual
b) Thursday 27th August 2020 ::: Theory Branch ::: Berlin
Massive, dynamically updated and persistent data corpora, such as those used by platforms in AI/ML models, can be thought of as hyperobjects both in the original information-systems sense and in the wider sense popularised by Morton. The systems that leverage and implement them in some sense transcend the traditional boundaries of algorithmic logic and agency, in addition to spatio-temporality. Could they also transcend the mundane and assume mystical characteristics?
“Captured data forms an ambivalent and opaque ‘hyperobject’ that out‐scales and outlasts contributors who nevertheless have to work with it in a pragmatic, aspirational fashion, because it is translated into dynamic and potentially consequential reputational information—only part of which is made public on a platform’s front end. As such, these data‐derived assets constitute a volatile measure of human (“über‐”)capital gig workers cannot afford not to cultivate.”
van Doorn & Badger, Antipode 2020 (edited)
“A galactic algorithm is one that outperforms any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in (conventional / terrestrial) practice.”
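The definition can be made concrete with a toy cost model (all numbers here are invented for illustration): even when an algorithm's asymptotic cost is strictly better, a sufficiently large constant factor pushes the crossover point past any input size that will ever occur in practice. A minimal Python sketch:

```python
# Toy illustration (invented numbers): an algorithm with better
# asymptotic cost but a huge constant factor never wins in practice.

C = 10 ** 50  # hypothetical constant factor of the "galactic" algorithm

def standard_cost(n):
    # Conventional algorithm: roughly n^2 basic operations.
    return n * n

def galactic_cost(n):
    # Asymptotically superior (linear) algorithm, but with an
    # astronomical constant factor.
    return C * n

def crossover(cost_a, cost_b, hi=10 ** 200):
    # Binary search for the smallest n at which cost_b beats cost_a,
    # assuming cost_b eventually stays below cost_a.
    lo = 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cost_b(mid) < cost_a(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# The linear algorithm only wins once n exceeds C itself - an input
# size no terrestrial dataset will ever reach.
print(crossover(standard_cost, galactic_cost) > 10 ** 50)  # True
```

Python's arbitrary-precision integers make the comparison exact even at these scales; the point is simply that "sufficiently large" can lie beyond any physically realisable input.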
Taking the field of data visualization as an example, designers and computer scientists work with these algorithms to visualize complex datasets. It is often tempting to regard the corpus algorithmi as an entity in itself, rather than as an abstraction of something captured by a particular logic. The formalist temptation is then to regard the data as epistemic bedrock from which further models, analysis and insight can be gleaned.
“All models are wrong, but some are useful.”
George Box (possibly apocryphal)
The idea that you can have an absolute, stable “truth” seems rather twee today. Perhaps you can get close to the truth - have a truth value - but all you can really achieve is to approach the truth asymptotically. The epistemic idealist’s quest is made all the more challenging because one does not always know whether progress is being made: a non-axiomatic truth cannot be known a priori. Many epistemologists hold that there are no confirmatory tests, only disconfirmatory ones.
“If you asked me to describe the rising philosophy of the day, I’d say it is data-ism. We now have the ability to gather huge amounts of data. This ability seems to carry with it certain cultural assumptions — that everything that can be measured should be measured; that data is a transparent and reliable lens that allows us to filter out emotionalism and ideology; that data will help us do remarkable things — like foretell the future.”
David Brooks, “The Philosophy of Data,” New York Times, 2013.
Brooks’ op-ed has become something of a punching bag for countless response articles. The most salient of those critiques perhaps comes from Byung-Chul Han in Psychopolitics.
“As Brooks describes them, data afford a ‘transparent and reliable lens’. The imperative of the second Enlightenment declares: everything must become data and information. The soul of the second Enlightenment is data totalitarianism, or data fetishism. Although it announces that it is taking leave of all ideology, dataism itself is an ideology. It is leading to digital totalitarianism. Therefore, a third Enlightenment is called for – in order to shine a light on how digital enlightenment has transformed into a new kind of servitude.”
Byung-Chul Han. Psychopolitics.
“Digital positivism resonates with historical views of data and knowledge, promising that with more data comes more truth. … The digital positivist stance is pushed to its extreme in the famous piece by Anderson (2008), arguing that the ‘scientific method is obsolete’ and that hypotheses themselves will soon be formulated by data that ‘speak for themselves’.”
“All we have is data, but we can’t access the truth” and “there’s reality, but we can only work with the data”: these positions resonate with Žižek’s point that ideology is the mask, yet the mask cannot be removed. There is only a mask, concealing the fact that there is nothing beyond it.
"The mask is not simply hiding the real state of things; the ideological distortion is written into its very essence."
Slavoj Žižek, The Sublime Object of Ideology.
For Žižek (and for thinkers before him who emphasized a similar insight: Marx, Freud, Sohn-Rethel, etc.), there is no secret material grounding (labour-behind-value, latent dream content) waiting to be unearthed through analysis. Analysis of “the secret of the form” itself, rather than the secret contents behind the form, can demystify the fetish character (of the commodity or the dream). Is the same critique applicable to algorithms? It is easy to recognize symptoms of the fetishization of data: dataism; the quest to liberate data from private companies and give users ownership of their own data; the belief that algorithms possess alien- or God-like capabilities. A materialist critique of algorithms would likewise address the algorithmic form rather than the inner workings of the algorithm and the composition of its data.
What about self-fulfilling, or even self-perpetuating, algorithmic prophecies? Increasingly, these more advanced algorithms create realities of their own: the technologically-mediated and partially abstracted realities of the network cultures we all inhabit, but potentially also beyond them, insofar as algorithms can mimic or surpass the structure and function of legacy institutions. Perhaps we can frame these “reality-producing” algorithmic institutions in the context of spiritual institutions such as organised religious movements and canonised texts. Can Google’s search algorithm be equated with, say, The Bible in terms of its impact on pervasive knowledge production and morphic resonance across human society? A quick Google search says “yes,” and members of the ruling class agree. “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” which argues that “whoever controls the algorithm is master of the universe,” was spotted on the bookshelf of President Xi Jinping.
Alex Karp of Palantir built a “systematic” conceptual framework reinterpreting Theodor Adorno’s Jargon of Authenticity (1973), most fully characterised in his 2002 dissertation at J. W. Goethe University of Frankfurt, entitled “Aggression in the Life-World”, which views all social interaction as a form of aggression and examines how the resulting power dynamics influence social integration and cohesion. Fully embracing this perspective requires applying the formalist hyperobject lens to the social fabric itself (treating human and semantic information as syntactic data?) - at once naïve, utopian and incredibly dangerous, particularly given Palantir’s position principally as an analytical intelligence contractor to nation-state defense administrations.
“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.
Lanier & Weyl, AI is an Ideology, not a Technology, 2020
“In its received (sometimes called its “classical”) form, computationalism is the view that not just human minds are computers but that mind itself must be a computer— that our notion of intellect is, at bottom, identical with abstract computation, and that in discovering the principles of algorithmic computation via the Turing Machine human beings have, in fact, discovered the essence not just of human thought in practice but all thought in principle (see especially the work of Kurzweil).”
David Golumbia, The Cultural Logic of Computation (edited)
“...an admixture between calculation and religion continues into algorithmic computation and statistical mathematics. These tools were developed by thinkers within the Abrahamic religious tradition: algebraic formulation in Islam and a theistic element that seems indissociable with probability theory. The article emphasizes theological traces in the vantage over large numbers that exceed enumeration in probability theory, which further suggests collateral secularizations of predestination and theodicy as it optimizes into Bayesian algorithms and machine learning.”
Virgil W. Brower, “Genealogy of Algorithms: Datafication as Transvaluation” (edited), Le foucaldien 6, no. 1 (2020): 11, 1–43.