AI Yesterday is a digital zine and transmedia forum that critically engages with AI histories up to and including yesterday. 

Issue 02 / AI FAILS

November 2021
Announcing Issue 02: AI FAILS.

AI Fails is a large and unwieldy subject, one that receives outsize media attention and academic focus.

Its looming presence over the AI stack obscures the nuances, trivialities, and complexities of AI fails that are less visible than the big media stories.

What do these fails mean? Do they cause harm in any conventional sense? Or are they simply technologies that don’t work?

We don’t ascribe values to the fails we cover; rather, we illuminate their very existence, giving agency to the mundane.

Fails can also be fun. And funny, something that often gets lost in the uber-serious way AI is framed. Funny, un-funny, or both, we hope you like it.

ACM Creativity & Cognition Workshop

Making AI 

Advancing creative approaches to the design of AI systems through the craft of making them

The objective of this workshop is to rethink ‘making’ AI through a focus on the physical materials involved in designing, producing, and running artificially intelligent systems. The recent manufacturing chip shortage illuminates digital technologies’ physicality and the fragility of the commodity and production networks that underpin the AI systems our cities, governments, and workplaces have come to rely on.

Those who are conventionally considered to ‘make’ AI through the design of AI systems are largely divorced from AI’s materiality and the craft of making AI. Corresponding research on AI and creativity focuses primarily on the digital artefacts, potentials, and imaginaries AI creates, and less so on the social and material artefacts embedded in its ability to create.

We hope to push participants beyond a theoretical knowing of AI materiality to a tactile knowing, through a practice-based approach to ‘making’ AI. Reorienting the focus of AI to materials and the supply chain as sites of creative intervention could leverage the potential of sensory, tactile experiences to spur re-imaginations of AI technologies and infrastructures. Ultimately, the aim is to advance creative approaches to the design of AI systems through the craft of making them.

This workshop is hosted by AI Yesterday.

Through the workshop we aim to build meaningful collaboration among people who might not otherwise work together, seeing how collectively they may characterize, visualize, and describe AI. As we encourage participation from outside the academy, we hope to draw on both the workshop’s location in Venice, with its long history of global trade and craft, and the 2022 Biennale as sites of situated community collaboration.

The findings from this workshop will support Issue 03: AI Materials, and relevant methodological insights will inform research methods for future journal publications.

Interested participants must register for the ACM conference here

Please submit any queries to:


Of Imaginaries and Uncertainty  

March 28, 2022

Our workshop asked participants to outline a definition of AI. There are multiple ways to do this: defining what AI does, what it is made of, and how we experience it; what we see, and how AI appears to us in tangible and intangible forms; what AI can help us with, and how AI scares us. Participants also grappled with another question: how will AI change the way I live, far into the future?

Researchers have long argued against deterministic thinking around technologies, instead understanding technological possibility with a sense of “agency and contingency” (Jasanoff and Kim, 2015, pg. 3). There is a process of meaning making that happens as individuals, and societies, interact with new technologies. But before we interact, we are also told, and shown, things about the potential of technology.


In 2019, the Barbican in London held a massive exhibit entitled AI: More than Human. A festival-style exhibit on the evolution of Artificial Intelligence (AI) and its potential to “revolutionise our lives” (Barbican, n.d.), it opens with a timeline, much like the one illustrated by workshop participants, that traces the inception of AI far back in time. While our participants imagined AI in relation to (imagined) historical moments like the creation of man, and man’s creation of tools, the exhibit pinpoints the year 1843, when Ada Lovelace developed what some consider to be the first example of an algorithm.

Elsewhere in the exhibit, visitors were invited to play with a robotic dog that responded to touch, to build a smart city using Lego, and to enjoy a drink mixed by Makr Shakr, a robot bartender. Each large, dimly lit room showcased one carefully selected use of AI under a bright spotlight. Moving through the dimly lit rooms, touching, seeing, and hearing about our lives with AI, we are invited to imagine (and experience) a vision of the world: what Sheila Jasanoff and Sang-Hyun Kim term a sociotechnical imaginary:

‘Collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’                                  

In our workshop, participants worked collaboratively to discuss, contest, challenge, and ultimately cobble together a world with AI, presented here in this zine. In the museum, robot prototypes, lights, and video create a futuristic experience that draws visitors into a place filled with AI. In both cases, multimedia affordances, from the Miro board to the museum space, allow us to express futurity through AI in visual ways.

This can be exciting and worrying, and we can express all sorts of hopes and fears in the imaginaries we produce. Harder to include are uncertainty and chance. Visualising uncertainty in data visualisations and infographics is its own field of study. There are many reasons why uncertainty is not visualised: a study by Hullman (2019) highlights concerns about visualisations becoming too complex or undermining the credibility of the research. Yet there are important reasons to visualise it. Data visualisations have often been understood by audiences as objective representations of neutral data (D’Ignazio and Bhargava, 2020). This is not the case. And, as STS scholars have described, it isn’t the case with technology either.

 - Nancy Salem