The 0x Salon is an experiment in reformulating the salon context as a venue for post-disciplinary exchange.
The purpose of this text (and others to follow) is twofold: to provide written documentation of the recent 0x Salon discussions on Algorithmic Realism, and to further develop concepts arising in conversations at and around salon topics. We aim to capture the spirit of early-stage salon ideation whilst bridging the gap from speculative and generalist discourse to substantive analysis, critique, synthesis and extension of the concepts in source materials and salon discussions.
This is a public draft / living document - we welcome your comments and critiques!
This is the fourth in a series of articles on Algorithmic Realism. Access the entire collection here.
0xSalon003 (a) & (b) ::: Algorithmic Realism
a) Tuesday 14th April 2020 ::: Praxis Branch ::: Virtual
b) Thursday 27th August 2020 ::: Theory Branch ::: Berlin
There’s something brutally disempowering about having one’s fate rest upon the fickle judgment of a government agency, an insurance company, a bank—any circumstance where we’re cut out from the decision-making process yet destined to live with its final and irrefutable decision. This feeling of powerlessness, which strikes us when we submit a job application, a grant proposal, a visa application, and nags us when bouncers reject us from exclusive clubs, or when content moderation algorithms censor our online posts, is only accentuated when the evaluation process is convoluted or opaque. Everybody knows the system is rigged—it’s racist, sexist, nationalist, classist, and chauvinistic in every way imaginable. From the perspective of the one being judged, it matters little whether the institution operates through algorithmic machinery or bureaucratic bodies. Are the bureaucrats, some of whom are barely human, really so different from machines? Both are vehicles for the smooth operation of regimes of power.
“Much of [Western] governance has been algorithmic, even before the current species of [mechanised-computerised] algorithms that we have today. You only have to go to the Bürgeramt or read Kafka to know the sort of dehumanising feeling that interaction with bureaucracy has always brought with it, and continues to do so.”
Patrick Urs Riechert, remarks at 0xSalon003
How can we conceive of the relationship between bureaucratic and algorithmic governance? Are algorithms merely a highly optimised form of bureaucracy, monumentalising its successful entrenchment into hard code? Or is contemporary algorithmic governance a hard break from 20th-century bureaucratic organisation? Depending on whether we emphasise the continuity or the split between bureaucracy and algorithmic governance, two schools of thought emerge. On the side that emphasises the break, we find that the sensational claims made by Silicon Valley entrepreneurs and the critiques levelled by post-structuralist theorists of power sit strangely comfortably alongside each other. On the side of continuity, we confront our everyday frustrations with the rigidity of digitised bureaucratic systems. And what image of history emerges when we attempt to think the two dialectically, grasping their continuity without erasing difference?
Algorithms can be thought of as policies, and this framing acts as a stepping stone for bridging computational and bureaucratic logics, and understanding their subjective, political and performative elements. The concepts of algorithmic realism and algorithmic formalism emerge from the idea that “algorithmic interventions operate in a manner akin to legal ones.” To say that both of these types of policy interventions operate in an analogous manner is to recognise a continuity between algorithms and law. Legal decisions seem to be analogue algorithms, and algorithms seem to be automated legal decisions. By painting both with the same brushstroke, law appears to have always been algorithmic. But this is only one way of conceptualising law among many possible organisational models. Bureaucratic organisation implements the social conditions under which algorithms can operate efficiently. Bureaucracy’s dream of an ideal society is realised in the form of full automation. Does this dream belong to a capitalist or to a worker? A worker’s dream of ideal bureaucracy would be one in which the worker doesn’t need to work to earn a living. Coincidentally this might look like full automation.
Put differently, the continuity between bureaucracy and automated decision-making is one history among many potential histories. The workers' rights movement provides inspiration for another. In Automating Inequality, Virginia Eubanks makes this same point, which she develops from a case study of ‘the digital poorhouse.’ Activists were able to re-orient the surveillance infrastructure that had been implemented to prevent poor people from receiving welfare into a system for working-class advocacy. The government agents who previously surveilled the poor and formulated cases for blocking welfare instead became advocates for the poor people to whom they were assigned. This is a reformist strategy. A more radical strategy would do away with the technology altogether: rather than choosing between a vengeful or a benevolent arbiter of who does and does not receive welfare, a radical reorganisation could provide universal welfare services. Both the reformist and radical approaches could be useful for activists.
Robert Ellis Smith, Chart 4, in: U.S. House of Representatives. 98th Congress, 1st and 2nd Session. Hearing on Civil Liberties and the National Security State (Serial No. 103). Washington: Government Printing Office, 1984.
Automated decision-making could be framed as an algorithmic crystallisation of bureaucratic logic. Prior to the widespread deployment of these algorithms, this robotic logic was cloaked behind a human face. Machinic and computational metaphors are common in descriptions of bureaucracy [apparatus, cogs in the machine, process, proforma, arbitrary IDs...]. The inversion of this perspective is rarely stated as bluntly. When algorithms are personified, they usually morph into individuals, not a collection of government agents. In the popular imagination, AI is pictured as a human-like android or a human worker. Your female-voiced assistant Siri or Alexa jokes with you. Your (least) favourite social media influencer is a 3D-rendered character. The online support chatbot for your German bank is named Nikolaos. Counter to today’s dominant narrative, which imagines bureaucracy to have always been inflexible and impersonal, it can be argued that (arithmetic) calculation has always been procedural, but institutional administration has only recently become largely formalised.
The struggle to present bureaucracy as an inflexible machine goes back over a century, at least to the scientific charity movement, but it has always been an open political question which other forms of organisation might be made possible by retooling the technological infrastructure of bureaucracy. The totalising logic of algorithmic computation in the platform era is the latest development in “naturalising” the logic of bureaucracy and algorithms as identical. Computerised algorithms, supposedly grounded in irrefutable scientific empiricism, are presented as the driver of what is in actuality a political programme [more discussion around political questions to follow in future 0x003 outputs].
“[W]hite economic elites responded to the growing militancy of poor and working-class people by attacking welfare. They asked: How can legitimate need be tested in a communal lodging house? How can one enforce work and provide free soup at the same time? In response, a new kind of social reform—the scientific charity movement—began an all-out attack on public poor relief.
Scientific charity argued for more rigorous, data-driven methods to separate the deserving poor from the undeserving. In-depth investigation was a mechanism of moral classification and social control. Each poor family became a “case” to be solved; in its early years, the Charity Organization Society even used city police officers to investigate applications for relief. Casework was born.”
Virginia Eubanks, Automating Inequality, St Martin's Press
Foucault famously traces the development of power from sovereignty to disciplinary to biopower. In recent decades, many have critiqued Foucault’s formulation by citing social changes brought about by technologies unavailable during Foucault’s lifetime.
Building on Michel Foucault’s analyses, Gilles Deleuze announced in the 1990s “the progressive and dispersed installation of a new system of domination” and claimed that “societies of control … are in the process of replacing the disciplinary societies.” This control, he argued, is exercised by “machines of a third type, computers.” In fact:
“If motorized machines constituted the second age of the technical machine, cybernetic and informational machines form a third age that reconstructs a generalized regime of subjection: recurrent and reversible ‘humans-machines systems’ replace the old nonrecurrent and nonreversible relations of subjection between the two elements.” (from Deleuze & Guattari, A Thousand Plateaus)
Anna Longo, Love in the Age of Algorithms, Glass Bead
Regardless of whether automated decision-making is characteristic of biopolitical societies, of control societies, or of a subtler and more contemporary form of power within this series of developments, the framework illustrates continuity between bureaucracy and automated decision-making. In the most wretched cases, talking to algorithmically disempowered humans, unable to exercise their judgement and discretion in bureaucratic contexts, is functionally equivalent to talking to a robot anyway.
“[T]he progressive and dispersed installation of a new system of domination … ultra-rapid forms of free-floating control that replaced the old disciplines operating in the time frame of a closed system ... what is important is no longer either a signature or a number, but a code: the code is a password. The numerical language of control is made of codes that mark access to information, or reject it. We no longer find ourselves dealing with the mass/individual pair. Individuals have become ‘dividuals’, and masses, samples, data, markets, or ‘banks.’”
Gilles Deleuze, Postscript on the Societies of Control
There are still many administrative contexts where a human can take meaningful actions to influence machine-determined outcomes. Here, a bureaucrat can serve as organic lubricant to smooth the unsympathetic gears of deterministic logic. Without the human element of administration, won’t these systems be that much less accommodating and flexible? Perhaps we should not unduly valorise the bureaucratic homunculus: some forms of discrimination experienced under a human-run system are quite different from the inflexibility observed in an automated one, though others, the usual suspects of fairness, bias and transparency, may be quite similar.
“Before citizens were datafied and algorithms deployed to inform or misinform microtargets, the political ambition of probability was to minimize biased passions and misinformed opinion. Laplace endeavors to apply Bayes' theorem to "decisions of assemblies," which depend not only on "the plurality of votes [but] the impartiality of the" voter (secularising the absolute indifference of a Bayesian god).”
Virgil W. Brower, Genealogy of Algorithms: Datafication as Transvaluation, Le foucaldien 6, no. 1 (2020): 11, 1–43 (edited). DOI
“What is today either heralded as a new techno-utopian mode of algorithmic governance or conversely as an utterly dystopian kind of computational empire is what we are calling the Algorithmic Agartha: an altogether esoteric, over-human (übermenschlich), and calculatively mathè-mètic matrix that has taken the reins of power in our current techno-cultural dronological surveillance societies. Algorithms, of course, reduce or transduce human expressions and actions to machine-readable form, and in this respect the human being finds itself at once both post-humanized and machinified, as well as pre-humanized and animalized, proceeding and being processed by-way-of and in-tandem-with programs that shepherd it through a matrix with regard to which it is in general misinformed if not monumentally moronic.
The rise of an algorithmically-governed planetary regime ‘manages’ and ‘makes use of’ humans (as well as animals, objects and what-have-you) as conduits for machine evolution, machinic intellection, and the proliferation of overhuman orchestrations that occur and recur under the cover of computational power supposedly instrumentalized by human beings. It does not dispense with humans altogether, but rather lures humans into a predatory economy of tantalising prostheses that promise to extend, expand and enlarge the dominion (never mind the desires) of what in fact is an ever-waning species – a species on its way out.”
Mellamphy & Mellamphy, The Electrocene, CultureMachine 16, 2015 (edited)