
0x003 Report I ::: Limits of the Algorithmic Concept

Document compiled from salon notes, interpretation & additional theory.

Published on Dec 09, 2020

The 0x Salon is an experiment in reformulating the salon context as a venue for post-disciplinary exchange.

The purpose of this text (and others to follow) is twofold: to provide written documentation of the recent 0x Salon discussions on Algorithmic Realism, and to further develop concepts arising in conversations at and around salon topics. We aim to capture the spirit of early-stage salon ideation whilst bridging the gap from speculative & generalist discourse to substantive analysis, critique, synthesis and extension of the concepts in source materials and salon discussions.

This is a public draft / living document - we welcome your comments and critiques!

This is the first in a series of articles on Algorithmic Realism. Access the entire collection here.

0x003 Event Information

0xSalon003 (a) & (b) ::: Algorithmic Realism
a) Tuesday 14th April 2020 ::: Praxis Branch ::: Virtual
b) Thursday 27th August 2020 ::: Theory Branch ::: Berlin

Limits of the Algorithmic Concept

At different points during the 0x Salon’s discussions of Algorithmic Realism, we characterised algorithms in a variety of often contradictory ways, and we mentioned a range of concrete applications of algorithmic technology.

Former UK government advisor Dominic Cummings and protests over A-Level results in summer 2020.

A number of concrete examples came up in discussion, and we additionally described many qualities of algorithms: 

  • they recognise patterns; 

  • they sort data;

  • they take never-ending data streams as input; 

  • they’re computationally resource-intensive; 

  • they become more accurate or effective the more data they ingest. 

How can we make sense of the diverse ways that algorithms are being put to use and account for the effects that algorithms have on society? (To give a tongue-in-cheek response, perhaps a neural network, given enough data, could process our list and spit out a few categories and associations that help us understand what makes an algorithm…) 
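
Taking that joke half-seriously, here is a minimal sketch of the idea using simple text clustering rather than a neural network: it groups the qualities listed above with TF-IDF features and k-means. This is our own illustration, not a salon artefact; the phrase list and cluster count are assumptions made purely for demonstration.

```python
# A tongue-in-cheek sketch: let an algorithm categorise our descriptions
# of algorithms. Assumes scikit-learn is installed; the phrases and the
# number of clusters are illustrative choices, not salon conclusions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

qualities = [
    "they recognise patterns",
    "they sort data",
    "they take never-ending data streams as input",
    "they are computationally resource-intensive",
    "they become more accurate the more data they ingest",
]

# Represent each phrase as a TF-IDF vector, then group the vectors.
vectors = TfidfVectorizer().fit_transform(qualities)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for phrase, label in zip(qualities, labels):
    print(f"cluster {label}: {phrase}")
```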

In all of the above cases, we’ve presupposed that there’s some degree of efficacy to the way that algorithms affect the world. Deploying an algorithm has consequences, intentional or otherwise. Governments, scientists, healthcare providers, and corporations appear to treat algorithms as a magic solution capable of superhuman feats, and their enthusiastic portrayal in the media further reinforces the impression that algorithms are widely used and play a major role in shaping our lives. Regardless of the extent to which this efficacy is an illusion, putting faith in the power of algorithms centers them as a field of political contestation and entrenches a particular set of beliefs (i.e. that algorithms are agents of social change, that they are capable of producing models and predictions that are ‘accurate enough,’ that algorithms are technically complex and mysterious, etc.). The enormous impact that algorithms have today seems to be beyond comprehension. Through the 0x Salon programming on Algorithmic Realism, we’re attempting to better understand the ways we engage with algorithms, and the way algorithms engage with us.

The current “data revolution” is haunted by many of the same epistemic issues that troubled the first data revolution in the 1800s. A full examination of the history of data science and algorithms is beyond the scope of this blog post; see Matteo Pasquinelli, Rodrigo Ochigame, Dan McQuillan, and many others for more on that. The language used in discussions of ‘cutting edge’ machine learning algorithms cloaks them in a pervasive sense of newness, which has the surface-level effect of insulating their use from criticism. Terminology and buzzwords aside, the concerns over machine learning today echo the paranoia around big data and surveillance a decade ago. Many of the same institutions that researched and funded statistics and computer science have pivoted their language from “big data” towards “AI,” “machine learning,” or “deep learning,” and rebranded their efforts as “data science.” 

Abraham Bosse, Les Perspecteurs. Print from Maniere universelle de M. Desargues pour traiter la perspective, 1648. Bibliothèque nationale de France.

In parsing out the multiplicity of ideas that are flattened into the word ‘algorithm’ in discourse, commentators often point out the technical difference between AI and other algorithms. In a sense, recent advances in neural network technology indicate a break from earlier research in statistics and algorithmic techniques. And in fact neural networks do operate in a manner that had been dismissed as prohibitively inefficient until around 2006, when researchers published the seminal paper “A fast learning algorithm for deep belief nets”. But more often, this break between AI and other algorithms is invoked by machine learning proponents and unwitting media coverage to obscure the real limitations and particular uses of machine learning, creating the illusion that the technology is more impressive than it would otherwise appear. By centering the locus of this rupture within the algorithm’s technical substrate rather than its practical social effects, we run the risk of misidentifying some of the politically contentious problems with machine learning as technical problems, effectively foreclosing the issue from public debate and delegating the search for a solution to data scientists. Neither the technical nor the practical understanding of algorithms is universally superior, and understanding the limitations of both frameworks is key to applying them to specific issues. Additional insights can be gained from invoking the algorithmic lens to study evolved natural phenomena such as collective coordination in animal migration.

Woodcut of Albrecht Dürer (ca. 1525) demonstrating the first known use of the ray tracing approach.

There are well-established algorithm types such as A* or ray tracing that are not defined in a procedural, stepwise manner but rather by a particular view of the system under consideration. For example, A* defines the space of possibilities as a directed graph of actions, ray tracing defines a material by the way it reflects light, and even older algorithms can be characterised in similar ways. One of the most interesting aspects of algorithms from a user’s perspective is that they allow you to view the world as potential input material. Perhaps perversely, certain algorithms give us a glimpse into a world where materiality is more significant than it previously seemed to be.
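
To make this concrete, here is a minimal A* sketch in Python. The node names, edge costs, and heuristic values are hypothetical, chosen only to illustrate that the algorithm is specified by its view of the problem, a weighted graph of actions plus a cost estimate, rather than by any domain-specific steps.

```python
# A minimal A* sketch: the algorithm is defined by a view of the problem
# as a weighted directed graph plus an admissible heuristic, not by a
# domain-specific procedure. The graph and heuristic here are illustrative.
import heapq

def a_star(graph, heuristic, start, goal):
    # Priority queue of (estimated total cost, cost so far, node, path).
    frontier = [(heuristic[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbour, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                estimate = new_cost + heuristic[neighbour]
                heapq.heappush(frontier, (estimate, new_cost, neighbour, path + [neighbour]))
    return None  # goal unreachable

# Hypothetical action graph: node -> [(neighbour, edge cost), ...]
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}  # admissible estimates of distance to D
print(a_star(graph, heuristic, "A", "D"))  # -> (4, ['A', 'B', 'C', 'D'])
```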

Returning to a trans-computational frame of reference (for example the legal perspective presented in the paper), protocols in the most general sense can be thought of as control structures built from components such as algorithms, or from other methods of enacting policies and making decisions. As such, protocols may map quite closely onto a legal specification that can provide strict (formalist-oriented, if not fully formal) specifications for how resources and agents interact in a system.
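
As a loose illustration of this framing (our own sketch, not drawn from the paper), a protocol can be modelled in code as a control structure that composes small policy components, each a decision procedure, into a strict specification of how agents may act on a resource. All names and rules below are hypothetical.

```python
# A loose sketch of 'protocol as control structure': policy components
# (small decision procedures) are composed into a strict specification
# of how agents may interact with a resource. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Request:
    agent: str
    action: str
    amount: int

# A policy is any rule that inspects a request and approves or rejects it.
Policy = Callable[[Request], bool]

def only_registered(registry: set) -> Policy:
    return lambda req: req.agent in registry

def within_limit(limit: int) -> Policy:
    return lambda req: req.amount <= limit

def protocol(policies: List[Policy]) -> Policy:
    # The control structure: a request is valid only if every component
    # policy, algorithmic or otherwise, permits it.
    return lambda req: all(policy(req) for policy in policies)

transfer_protocol = protocol([only_registered({"alice", "bob"}), within_limit(100)])
print(transfer_protocol(Request("alice", "transfer", 50)))    # True
print(transfer_protocol(Request("mallory", "transfer", 50)))  # False
```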
