The Fourth Memory
by Grégory Chatonsky & Yves Citton
TRANSIT vol. 15, no. 1

So-called “artificial intelligence” (AI) brings forth a new category of memory which recursively generates texts, speeches, images and sounds that can resemble “authentic” documents.1 In the automation of expression and representation using the massive data accumulated on the Web, the question of the arts, far from being anecdotal, has become consubstantial with the effects of current statistical models. What emerges is a new realism that destabilizes both past archives and future perspectives. The counterfactual nature of this alien realism unsettles the foundation upon which we previously established our relation to truth. The political consequences of this evolution have been successfully exploited by fascist forces. Progressive movements need a better understanding of counterfactual realism in order to stop losing battles.
Retentions, Protentions, Distentions
According to Bernard Stiegler, we experience our world through three types of retention. Primary retentions are immediate perceptions, such as listening to a musical note. Secondary retentions correspond to what we usually call “memories,” the inner recording of past perceptions, which allows for a temporalization that compares, anticipates and recalls different events, such as notes that follow one another to form a musical melody. Tertiary retentions inscribe sensory events on a material support (media), enabling their technological repetition, for example on a disc where the melody is recorded.2
Technological developments of recent decades have enabled the rise of what can be considered quaternary retentions. The truly unprecedented nature of digitalization is not to be situated in the multiplication of tertiary retentions (data), but in the technical capacity to record, in the form of metadata, the attentional gestures and habits that underpin our primary retentions (what I click on or swipe, the correlations between my digital gestures and those of other users, the time I spend on such and such content, etc.). Quaternary retentions result from the ability to harvest and process these meta-attentional data (data on human attention) in very large numbers through computational processes based on “statistical induction,” i.e. on a dynamic, bidirectional interaction between processing instructions that structure (from above) a set of data and observed proximity attractions, quantified (from below), between data aggregates. Since these quaternary retentions record data about attention paid to preexisting tertiary retentions (recordings), and since they become, through profiling, the basis for the production and circulation of new (AI-generated) tertiary retentions, it soon becomes empirically impossible to differentiate tertiary from quaternary retentions—though the conceptual distinction remains relevant.
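As an illustration only, here is a minimal Python sketch of what such meta-attentional records might look like and how they can be aggregated “from below” into a profile that structures “from above” what gets produced and circulated next. The field names and values are hypothetical, not an actual platform schema.

```python
from collections import defaultdict

# Hypothetical quaternary retentions: metadata about attention paid to
# preexisting recordings (tertiary retentions), not the recordings themselves.
attention_log = [
    {"user": "u1", "content": "song_A", "dwell_seconds": 41.0, "gesture": "replay"},
    {"user": "u2", "content": "song_A", "dwell_seconds": 12.5, "gesture": "swipe_away"},
    {"user": "u1", "content": "clip_B", "dwell_seconds": 87.0, "gesture": "share"},
    {"user": "u3", "content": "song_A", "dwell_seconds": 55.0, "gesture": "replay"},
]

# "From below": quantify observed proximities by aggregating attention per item.
dwell_by_content = defaultdict(list)
for event in attention_log:
    dwell_by_content[event["content"]].append(event["dwell_seconds"])

# "From above": a processing instruction (here, the mean) structures the aggregates
# into a profile that can steer what is produced and circulated next.
profile = {content: sum(times) / len(times) for content, times in dwell_by_content.items()}
print(profile)  # e.g. {'song_A': 36.166..., 'clip_B': 87.0}
```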
This entanglement and proliferation of quaternary and tertiary retentions form a fourth memory. The processing performed by so-called “Artificial Intelligence” no longer aims for an identical recall of what was. This fourth memory feeds on the past of retentions to make them possible in its latent space and to regenerate them: these are not the same indexical retentions that come back again and again, but similar retentions, because tertiary retentions no longer exist as discrete, separate units, but rather as clusters of patterns. It is resemblance itself, understood as mimetic representation, that is automated, marking a new stage in the complex process of industrialization.
In fact, AI is fed by large datasets that enable it to calculate a latent space on which probabilities are defined according to Bayesian inference, and which is structured by observable proximities between the objects in it. It is important to recall that, if the images generated with the help of neural networks are credible or realistic, it is not only because they reflect information mined from past images that have accumulated in a mere thirty years on the Web; it is also that they potentially contain all images yet to come.3 And this is why they can be different and realistic at the same time. Realism then becomes the credible anticipation in an inductive space of a possible image.
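A minimal sketch, assuming NumPy and using toy linear stand-ins for the learned encoder and decoder of an actual generative model, of why a continuous latent space yields outputs that are “different and realistic at the same time”: every point between two encoded examples decodes into something that resembles both without being identical to either.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a learned encoder/decoder: a fixed linear projection into a
# low-dimensional "latent space" and its transpose back out. Real generative
# models (VAEs, diffusion models) learn these mappings from massive datasets.
IMAGE_DIM, LATENT_DIM = 1024, 16
W = rng.normal(size=(LATENT_DIM, IMAGE_DIM)) / np.sqrt(IMAGE_DIM)

def encode(image: np.ndarray) -> np.ndarray:
    return W @ image          # image -> latent vector

def decode(latent: np.ndarray) -> np.ndarray:
    return W.T @ latent       # latent vector -> image-like array

# Two "past images" from the dataset (here, random vectors).
image_a = rng.normal(size=IMAGE_DIM)
image_b = rng.normal(size=IMAGE_DIM)
z_a, z_b = encode(image_a), encode(image_b)

# Because the latent space is continuous, every point on the path between z_a
# and z_b decodes into a plausible intermediate: not a recall of either source,
# but a statistically possible image situated "between" them.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    generated = decode((1 - alpha) * z_a + alpha * z_b)
    sim_a = np.corrcoef(generated, decode(z_a))[0, 1]
    sim_b = np.corrcoef(generated, decode(z_b))[0, 1]
    print(f"alpha={alpha:.2f}  resembles A: {sim_a:.2f}  resembles B: {sim_b:.2f}")
```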
Can we still speak of memory, retention, archive, history? Secondary and tertiary retention, as the memorization of perceptions and the act of recording what should be disseminated, are not obsolete, of course, but they are overtaken by the dynamics of quaternary retention. These quaternary retentions capture, spawn and shape what is to come by automatically looping the future onto the presence of the past—in combinations that could be described as pretentions (to be understood as leapfrogging from retentions to protentions). In many cases, a significant inversion takes place in such pretentions: what is put into circulation seems to precede what is retained, because this circulation depends on outsourcing to datacenters, whose commercial and infrastructural foundations determine what is retained and how retentions are constituted. These infrastructures are “a priori.” By providing social networks with the material of which, and for which, they are made, we collectively produce our memories by individually adapting them.
This pretentional dynamic, which structures our behavior through an instantaneous to-and-fro between retentions (what we keep from the past) and protentions (what projects us into a pre-formed future), is undergoing a major, epochal transformation. We propose the notion of “quaternary distention” to designate the epoch now opening up with the possibility of pretentions processed by AI systems. Distention refers not only to increasing the volume or surface area of a body by subjecting it to extreme tension, but also to loosening the ties that bind a whole or unite several things. Quaternary retentions extend and distend our worldly sensibility (Mark B. N. Hansen), because, starting from tertiary retentions, they further multiply the number of documents by creating retentions of retentions, attention to attention, memory of memories—in short, an inflation of metadata about data.4 Paradoxically, this distension leads to a narrowing: since all media are statistically vectorized, they produce a diffuse “sense of reality” in which everything tends to look the same.
The change of name from Google to Alphabet and from Facebook to Meta can be read as a symptom of this meta-ization catalyzed by quaternary pretentions. Distention is a genetically (and generatively) recursive retention. It is not, like classical Stieglerian secondary and tertiary retentions, the repetition of an event. It is the repetition of pretentions piggybacking upon themselves, the automation of their self-producing loops. This singular repetition enables us to understand how mimetic resemblance is repeated and automated as such.
We therefore suggest defining quaternary distensions as media content (images, sounds, texts) generated by a statistical possibilization of previous content through the automation and industrialization of resemblance permitted by generative AI. Databases collect tertiary retentions, which become quaternary retentions once they are put to work in the tracking of human attention intrinsically linked to the automation of resemblance. Unlike “retentions” (from retinere), which consist in retaining present or past affections, “distensions” (from tendere) are characterized by the dual fact that a) they tend, through resemblance, toward potentially disrealistic possible representations (as human imagination does) and that b) this tension is operated by automatic apparatuses on an industrial scale.
Surveying the Disfactuality of Latent Space
It would be naïve to announce the definitive end of (classical) retentions. They continue to operate, but the emergence of a fourth memory that is distended, that metabolizes past retentions and that, taking itself as its own object, becomes exponential, changes the orientation of retentions, as well as the very definition of memory as experience. The fourth memory thus alters the entire structure of the other three, as well as their relationship to each other, with a view to configuring a world.
Here is an example: in August 2023, Nancy-based producer Lnkhey published a remix on SoundCloud and YouTube,5 in which Angèle’s cloned voice, thanks to the free software Retrieval-based-Voice-Conversion, sings a song she had never performed.6 Several million people listened to it. Angèle reacted on TikTok: “I don’t know what to think about artificial intelligence, I think it’s wild, but at the same time I’m afraid for my job mdrrrrr.”7 In the video, she sings the remix in playback, then pouts playfully, as if dizzied by the resemblance to this voice that is not her own.
Another example: a film by the Lumière brothers upscaled to a resolution of 3840×2160 pixels and 60 fps. The film has not been “restored,” because it is not a question of returning the film to its original state, but of “establishing” it: elements originally absent have been added. The effect is striking: the film no longer possesses the realism of 1895, but the grain of a video shot in 1971 with a Sony Portapak. Completion—the act of adding to a historical document in order to repair it—brings with it an anachronistic realism that changes the nature of the archive, which is no longer determined by an origin. Completion invents a historicity that did not originally exist, because it has been fed by images taken between 1895 and the present day. The result does not emanate from the most faithful possible retention of the data to be captured in 1895, but from a modulable mix of the given and the statistically probable, determined by the latent space of the dataset.
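A minimal sketch, assuming OpenCV and a hypothetical source file lumiere.mp4, of the two operations such a “completion” chains together: spatial upscaling and temporal interpolation. Both are done naively here (cubic resampling, frame blending); actual upscalings of the Lumière films rely on learned super-resolution and motion-compensated interpolation, which is precisely where pixels and frames absent from 1895 are invented.

```python
import cv2

SRC, DST = "lumiere.mp4", "lumiere_4k.mp4"     # hypothetical file names
TARGET_W, TARGET_H = 3840, 2160                # 2160p, as in the upscaled film

cap = cv2.VideoCapture(SRC)
src_fps = cap.get(cv2.CAP_PROP_FPS) or 16.0    # early films ran at roughly 16 fps
out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"),
                      src_fps * 2, (TARGET_W, TARGET_H))

prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Spatial "completion": cubic resampling invents pixel values that were never
    # on the nitrate strip (learned models hallucinate far more detail than this).
    big = cv2.resize(frame, (TARGET_W, TARGET_H), interpolation=cv2.INTER_CUBIC)
    if prev is not None:
        # Temporal "completion": an in-between frame that was never shot,
        # here a crude 50/50 blend of its two neighbours.
        out.write(cv2.addWeighted(prev, 0.5, big, 0.5, 0))
    out.write(big)
    prev = big

cap.release()
out.release()
```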
Realism changes its nature and becomes disfactual, the prefix “dis-” here meaning separation, difference, cessation or defect within the factual, i.e. the facts. This disfactuality ultimately concerns facticity, i.e. the plastic contingency of the correlation between thought and the facts it targets. Images are factitious, but this facticity affects reality as a whole, and this is why it is disfactual: it dislocates something from within. In so doing, it corrodes the factuality on which our confidence in our power to exercise a certain mastery over the world is based.
With artificial intelligence, what we retain, process and metabolize are all past forms of tertiary retention, once they have been massively digitized in binary form and thereby made intercompatible, processable and translatable. This period can be associated with Big Data, as a project to digitize culture, and with Web 2.0, as everyone’s participation in this memorization (i.e., in this harvesting of primary, secondary and tertiary retentions steered towards their reprocessing through quaternary retentions). This period was in fact no more than a preparatory act for statistical induction: we may have thought we were designing our media for human beings, but they were actually intended for machine learning. In fact, there were so many of them that they exceeded our perceptual capabilities, and only machines were still capable of perceiving them. We are perhaps witnessing the emergence of a new form of realism, a realism of realism, which would allow us to understand the multiplication of alternative and counterfactual truths better than the simple promise of a demarcation between truth and fiction, a promise increasingly difficult to keep.
This new metabolization can be analyzed in six stages: 1) tertiary retentions (recordings) and quaternary retentions (which record our attentional gestures, our interpretative reactions, our creative re-elaborations) are 2) accumulated in enormous databases, to be 3) sorted, calculated, associated, approximated by an unprecedented power of computation based on statistical induction, from which 4) latent spaces emerge, out of which 5) a fourth memory generates 6) pretentions in the form of aesthetic objects (a new Angèle song) that are neither “real” recordings, nor “real” creations, but unprecedented entities, quaternary distensions, which we struggle to qualify (models, proxies, products of induction, cognitive syntheses, deep fakes, proofs of concept?), but which resemble something that could exist, even if they do not exist. The question of realism is addressed here through the concept of possibility.
The Alienation of Credibility
Latent space is our new cultural space, whose products are disfactual. Angèle’s unsung song existed before it actually existed. It existed as a statistic or, as the case may be, a possible. It had to be born into reality through this reprise of reprise. This is the ontological significance of the aforementioned post: “POV: when Angèle’s AI on Saiyan finally becomes a reality,” where the possessive ‘s that separates and links the AI to Angèle expresses this preterition of the cultural latent. Everything exists before it exists. In this strange disfactual anticipation, there’s a new complicit pact with the public. It’s Kaaris having fun with his own AI, and this has nothing to do with a phantasmatic replacement: it is a distance from oneself, a strangeness familiar to modernity, a shift in our apparatus.8 Thanks to the accumulation of the past by the material supports of tertiary memories, we are producing something that has never happened before, but which bears an uncanny resemblance to anything that could happen: Kaaris singing Inspector Gadget or a Disney cartoon. This possibility already has its form of reality, but all the cultural intelligence of our age lies in the amusement shared between the singers and their audiences, in this new rehearsal in which we interpret this as-yet-unrealized possibility that has already taken place at such and such a point in latent space.
If, until now, our culture and its sharing have been determined by tertiary memories, the fruit of the industrial period—following an interim phase in which culture comprised everything from Warhol to 1990s post-cinema—we are certainly entering a new era with quaternary distensions, one in which the aesthetic contract could be one of alienation: we are driven to resemble machinic syntheses that produce media that resemble us. Latent space becomes a space of the possible, containing the past, but also, no doubt, part of the future and the (as-of-yet) incomputable. We could, for example, take a photograph with a camera of some kind and send it to an AI to check that it already exists, and then find it again. It’s no longer just a question of digitization, which renders images discrete (in the form of 0s and 1s, by slicing them up through sampling) and generates variations that can be recombined at will (as synthetic products). Now we are talking about statistical pretentions, which distort our protentions by informing what does not yet exist, according to the thirst for commercial profit of platforms exploiting their privileged access to our attention, or to local, open-source software that requires graphics processing power beyond the reach of most people.
This very particular—indeed: disfactual—realism emerges from a fourth memory that belongs to the viewer’s past and, at the same time, to the future of the images created. It is not simply another result of causality: it grafts a possible onto a given, while at the same time haunting the latter through the former, undermining the foundations of our indexical belief (if I hear Angèle’s voice, it is because Angèle must have sung). It is therefore essential to place what might appear to be a simple technological innovation—with its litany of novelties, from GANs, CLIP, Disco Diffusion and Zoetrope to DALL-E 2, Imagen, Parti, etc.—in the general context of uncertainty about factuality, where, according to some polls, 40% of 18-24 year-olds in the USA say they think the Earth is flat. This latency should, of course, be linked to conspiracies, fake news, this strange expressive democratization of opinion where everything thinkable seems to have to be thought by someone, and where everyone seems to think only to react to what they think the other thinks in a bottomless Bayesian anticipation.
We still have to find our bearings in this culture of latent space, and in the paradoxical emotion that grips us as we listen, and listen again, to Angèle’s voice, then return to the AI voice, moving back and forth between the two, unable to decide on our emotion and the world that thus passes through us. It is a new realism and a new historicity, whose structures are emerging—alienating not so much our identities as the very credibility of our cultural world. This realism of the possible undermines the modern relationship between the past and the future.
We can thus define more precisely this emerging fourth memory as a mnemonic regime in which collective memory becomes recursive and self-referential, due to the accumulation of quaternary distensions, destined to occupy an increasingly significant place in the human imagination, as generative AIs are deployed. This fourth memory is populated by possibilities rather than memories. (Vilém Flusser’s amazing insights had described our current situation as early as the 1980s.9)
From Disfactual Possible to Counterfactual Realism
Faced with the omnipresent, suffocating refrain of AI “replacing” humans, Angèle and her audience play a different game from that of appalled lament. Feelings are mixed. There is undoubtedly a little fear, a little astonishment. But above all, there is an amusement in the infinite game of simulacra and resemblances—another name for culture—that neither technocritical pastors nor humanist priests will ever understand. AI is not thought out in advance, as if it were enough to think it through properly to determine how it should be reformed, framed, put into legislation or into a pipe, with an entrance and an exit, a whole logistics reduced to logos, always bound to be trailing a step behind. AIs—to be read here as our Alienated Intelligences—cannot be “comprehended,” i.e., contained within the reassuring limits of our logos. They are experimented with: we alienate them and they alienate us, and that is why artistic practice is a privileged mode of knowledge.10 In this case, they have learned to sing like Angèle, and she has responded by covering “their” common song (thus panicking the high priests of copyright). We were the secret witnesses to this seismic echo. We can be the explorers and (more or less secret) agents of experimental alienations.
After the apogee of the hypermnesic accumulation of memory supports through their digitization and recording in data centers—the ultimate stage of Benjaminian reproducibility—our era is industrializing resemblance itself through the possible. This is undoubtedly why AIs raise questions that cut across and upend so many areas of human activity, and why these questions have been so frequently addressed in the media and by the general public through “the question of Art.” The latter symbolically concentrates in modernity the very “essence of humanity,” as well as the mystery of its interiority, which, as we know, accompanied a certain construction of Western subjectivity, going as far as the will to power and nihilism.
In another TikTok post, we read: “We’ve come full circle.” It is not just a matter of teaching AIs to create images, texts and sounds that resemble us: it is a matter of resembling them. In contrast to reactionary rhetoric, we desire nothing more than to actively alienate ourselves. We do not believe in making AI readable through the transparency of code, nor in cutting and separating ourselves from these flows, nor in regaining an imaginary autonomy or sovereignty. We want to experience that what we believe to exist is also a product of technology and its paradoxical reproduction. We are its reprise: its re-tention. By metabolizing the entire history of our memory media, AIs (understood as “our” Alienated Intelligences) are in the process of constituting a new, fourth memory, where past and future are no longer chronological, but seem to respond to each other by swapping roles—the highly problematic “we” involved in “our” Alienated Intelligences forcing “us” to divide humans into dominant and dominated, white and non-white, exploiters and exploited.11
We need to remobilize Brian Massumi’s prophetic ruminations on the differences between the possible, the probable, the virtual and the potential in order to measure what is happening to us.12 Statistical induction plays with a possible (partially) controlled by probabilities, and liberated by the continuity of latent space (there is always a path from one vector to another). The song unsung by Angèle mobilizes the probable to realize other possibles. Its disfactuality, however, fits perfectly into the protentions of the dominant aesthetic (if not into the anticipated profits of a musical trade paying hypocritical homage to the sacrosanct “copyright”). Only experimentation—always vertiginous—with our alienations can hope to extract from the merely disfactual the transformative potential inherent in counterfactuality. The challenge for images, sounds and texts in the age of generative AI is not so much to be “original,” “new,” “real,” “innovative” or “beautiful” (so many terms and values that have taken a nasty beating in just a few years, after having been undermined by a century of modern art). Rather, it is a matter of being counterfactual: of (craftily) experimenting with latent spaces in order to (automatically) manifest credible retentions (in fact: distensions) of realities that have not happened because they run counter to the facts of the dominations in place.
Embodying latent aspirations towards facts that contradict the protentions of the reigning order: isn’t this what we used to call “revolution” in the last century? To take up a distinction that Pierre-Damien Huyghe insists on, the experimental, non-instrumental, “artistic” uses of generative AIs are not so much a matter of political action (prattein) as of artistic craftsmanship (poïein).13 Not so much “doing the revolution” (in the sense of acting to bring about a revolution) as fabricating or manufacturing counterfactual objects that make us see, hear and think, with the force of realism, some of the counter-worlds our societies carry (and repress). The fact that deep fakes make us fear a world of “post-truth”—in a fantasy which is the political counterpart of the economic replacement of the human by the machine—certainly reflects a very real problem: it is indeed essential to be able to preserve a certain social relationship of trust with regard to our access to factuality. No information is possible without credibility. But our anxieties and anti-conspiracy crusades are just as much a reflection of the inability of progressive forces to understand the political potential of realist forms of counterfactuality—at a time when fascist forces are shamelessly exploiting the wellspring of disfactual unrealism and illiberalism. Although often idle, our debates about AI will not have been in vain if they help us identify—and engage in—the (not so) new terrain of struggle of counterfactual realism.
Bibliography
Flusser, Vilém. Into the Universe of Technical Images. University of Minnesota Press, 2011. First published 1985.
Hansen, Mark B. N. Feed Forward: On the Future of 21st Century Media. University of Chicago Press, 2014.
Huyghe, Pierre-Damien. “Qu’est-ce que faire dans l’urgence ?” Recordings of the 2022-2024 seminar. http://pierredamienhuyghe.fr/recherches.html#urgence.
Massumi, Brian. “On the Superiority of the Analogue.” In Parables for the Virtual. Duke University Press, 2002.
Quessada, Dominique. Parasite. Essai sur le bruit digital. PUF, 2023.
Stiegler, Bernard. Technics and Time, vol. 1-3. Stanford University Press, 1998-2010.
Wynter, Sylvia. “Unparalleled Catastrophe for our Species, an interview.” In Sylvia Wynter: On Being Human as Praxis, edited by Katherine McKittrick. Duke University Press, 2015.
1 This article was published first under the title “Quatrième mémoire” in the French journal Multitudes, 96, Fall 2024, p. 189-197. It has been revised for publication in TRANSIT. This research has been supported by the EUR ArTeC financed by the French National Agency for Research (ANR) through the PIA ANR-17-EURE-0008.
2 See Bernard Stiegler, Technics and Time, vol. 1-3 (Stanford University Press, 1998-2010).
3 For more details see for instance Anna Longo, Le jeu de l’induction. Automatisation de la connaissance et réflexion philosophique (Mimesis, 2022), as well as Antonio Somaini, “A Theory of Latent Spaces,” in The World through AI (Jean Boîte Edition, 2025). More on this can be found at https://chatonsky.net/futur-image/.
4 Mark B. N. Hansen, Feed Forward: On the Future of 21st Century Media (University of Chicago Press, 2014).
5 https://www.youtube.com/watch?v=EiV1YxtbfcE.
6 https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/fr/README.fr.md.
7 https://www.tiktok.com/@angele_vl/video/7265090543191936288.
8 https://x.com/booska_p/status/1689304038393683968?s=20.
9 Vilém Flusser, Into the Universe of Technical Images (1985; University of Minnesota Press, 2011).
10 See Dominique Quessada, Parasite. Essai sur le bruit digital (PUF, 2023).
11 See Sylvia Wynter, “Unparalleled Catastrophe for our Species, an interview,” in Sylvia Wynter: On Being Human as Praxis, ed. Katherine McKittrick (Duke University Press, 2015).
12 Brian Massumi, “On the Superiority of the Analogue,” in Parables for the Virtual (Duke University Press, 2002).
13 Pierre-Damien Huyghe, “Qu’est-ce que faire dans l’urgence ?,” recordings of the 2022-2024 seminar, http://pierredamienhuyghe.fr/recherches.html#urgence.