Welcome to the third in the series of interviews with artists who are working with AI technologies. I’m thrilled to feature the incredible work of Mark Amerika.
Mark Amerika is a prominent artist, writer, and theorist known for his pioneering work in the realm of digital and net art. He has performed and exhibited his art in many venues including the Whitney Biennial, the Denver Art Museum, the Tate Modern, Videotage in Hong Kong, the Institute of Contemporary Arts in London, The National Museum of Contemporary Art in Athens, ZKM, the Walker Art Center, and the American Museum of the Moving Image, among many others. He is the author of My Life as an Artificial Creative Intelligence (Stanford University Press, 2022). Other books include remixthebook (University of Minnesota Press, 2011, and remixthebook.com), META/DATA: A Digital Poetics (The MIT Press, 2007), remixthecontext (Routledge, 2018), and four novels. Selected as a Time Magazine 100 Innovator, Amerika’s work has been featured on CNN and written about in over 150 mainstream, academic, and art publications including The New York Times, Die Zeit, El Pais, The Wall Street Journal, and The Observer. Amerika is Professor of Distinction at the University of Colorado at Boulder, where he is the Founding Director of the Doctoral Program in Intermedia Art, Writing and Performance in the College of Media, Communication and Information and a Professor of Art and Art History.
Kate Armstrong: Will you introduce yourself in your own words?
Mark Amerika: Sure, I’m the figure you can’t pin down because my art practice is very chameleon-like. I blend in with a lot of scenes including the contemporary art world, the underground cinema scene, the live audio/visual performance art scene, the alternative fiction and poetry scenes, the experimental theory scene and now the AI and NFT scenes. To name a few.
KA: You are dropping a new collection of NFTs relating to your new project Posthuman Cinema with Kate Vass Galerie's blockchain platform K011.com. Can you tell us more about the project and what is happening?
MA: Posthuman Cinema (PHC) is a phrase coined by the PHC Collective, which presently consists of Will Luers, Chad Mossholder and myself. In addition to my individual career as an artist and writer, I have often collaborated with others with whom I share a particular sensibility. Chad, Will and I love all kinds of experimental films, especially auteur-driven Euro arthouse cinema. PHC is our attempt to use these early AI technologies to create moving images, scripts, and voiceovers, plus an original soundtrack by Chad, to investigate future forms of short narrative.
One of the tracks I mentioned above, the underground cinema scene, is an area of focus that periodically pops up in my practice. I believe I directed, wrote, and produced the first feature-length art film ever shot on a mobile phone—a Nokia N95, just before the first iPhone was released. That artwork, Immobilité, appeared in international film festivals and solo museum exhibitions, and it grows out of the last work in my net art trilogy, FILMTEXT, also now in institutional and private art collections. FILMTEXT was a major experiment in web-based interactive cinema and interactive storytelling.
The collaboration with the Kate Vass Galerie out of Zurich and her NFT platform, K011.com, came about in a very 2023 way. Kate and I follow each other on Instagram, and she saw some early previews of the collection I posted there. Being the savvy gallerist and curator that she is, she immediately inquired about how we were planning on launching the project, and after a few DMs we quickly realized that a collaboration was in order. The gallery has a reputation for being very art-historically attuned to the history of both generative art and AI art. And I think with the Posthuman Cinema project we are clearly doing something different. Most AI art is a digitally compressed image. Some GAN-based moving image artworks have been created that successfully challenge our perception and indicate how AI will reshape moving visual images going forward, but very rarely do the StyleGAN artists pay attention to narrative, fiction, poetry and/or the history of avant-garde cinema. Whereas Will, Chad and I are very aware of what Gene Youngblood referred to as Expanded Cinema. So, we wanted to expand cinema’s aesthetic and narrative potential in realtime by customizing our use of the various AI systems at our disposal.
Dummies (2023) (Still)
KA: Posthuman Cinema is released as ten one-minute loops. I wanted to ask you about narrative structure and whether you see something distinctive about how narrative functions in Posthuman Cinema. Do you think AI technologies are drawing out new relationships with narrative?
MA: With Posthuman Cinema, we created what we call ten cinépoèmes, all about a minute long, but each minute-long work is rather intense and packs in a lot of sensory data. This means they can be experienced as autonomous works of art. But yes, I’m glad you asked this question because we are presenting them in a structured, sequential order that points to an alternative approach toward developing contemporary artworks at the interface of generative AI and digital storytelling. All three of us have completely different takes on what the narrative actually means. In fact, for me, it’s not about what it means but what it does. What it does is introduce a new sense of cinematic measure, one that challenges our anthropocentric perceptual bias. What I mean by that is that the works are situated to shake us out of our complacency so that what we see can be experienced as revelations into the future imaginary. These are very much speculative works of art that we can experience in realtime as if the AI future were happening NOW. To borrow a phrase from the late poet Jack Spicer, they feel like transmissions from the Distant Outside. Where do these images come from and what are they exposing? The figures look like human bodies, but are they? I suggest they are not. They are, to borrow a term from Duchamp that he used when describing his Large Glass, apparitions of an appearance. And they are generated by an alien intelligence that is signaling to us how we have always been posthuman. That’s a difficult story to tell in ten one-minute loops, but why should that have stopped us? This is what artists do.
Infinity Forest (2023) (Still)
KA: I’d be interested to know how you would characterize what is going on with Web3 in the contemporary moment.
MA: You mean besides the NFT market crash? Web3 is creating opportunities for communities of like-minded artists, collectors, curators, gallerists, critics, and other sophisticated patrons of the arts to evolve the future of digital art in a networked economy. In some ways, it reminds me of the early net art scene that I helped bring greater visibility to in the 90s. In those days we used email lists, personal websites and in-person festivals and events. There was no social media and no blockchain apparatus. I wrote about this in my curated NFT exhibition titled nfttime (a play on the old nettime mailing list). Basically, what I was suggesting in my curatorial essay is that blockchain-based art doesn’t materialize in an art historical vacuum. The net art 90s is the precursor movement to what we see today in the NFT space. The big difference between then and now is cryptocurrency. Even with the NFT market crash, we find ourselves in a moment where the long overdue financialization of net art facilitated by a network of blockchain platforms makes it fashionable for digital artworks to freely circulate their potential meme-like aura as a form of aesthetic currency.
Posthuman Cinema (Trailer) (Still)
KA: At various points, including in My Life as an Artificial Creative Intelligence, you introduce Russian formalist Viktor Shklovsky’s concept of defamiliarization in the broader context of artistic creation. I thought of it again when viewing Posthuman Cinema because in that project there is a loopy, liquid back-and-forth between familiar cinematic elements and truly weird cinematic worlds. I feel like Shklovsky could have been directly referring to this project when he wrote “the technique of art is to make objects 'unfamiliar', to make forms difficult, to increase the difficulty and length of perception because the process of perception is an aesthetic end in itself and must be prolonged.” Can you share some thoughts about how AI feeds into this, or presents new opportunities to understand or apply these ideas?
MA: That’s a beautiful application of Shklovsky’s concept of ostranenie. If we think about the history of avant-garde art and writing, we can see that this is what poets and artists do, isn’t it? We take what commerce has made all too common and remix it into something that simultaneously challenges perception and transforms the experience of seeing into a higher level of consciousness. This doesn’t mean you can’t just watch something and dig it for what it is. I think the works in Posthuman Cinema can just be otherworldly. But for me they are more than that in that they tap into an alien sensibility that takes us out of our anthropocentric stronghold and asks us to reimagine what it means to be creative across the human-nonhuman spectrum. This is something I get into in my Stanford book. In that book, I’m experimenting with an early version of GPT, before ChatGPT, to co-write a critical inquiry into the nature of creativity. I intentionally play with ideas generated by artists and writers of the 20th century like the Beatniks, jazz musicians and most especially the Surrealists, whose concept of psychic automatism figures prominently in the book. I even start a riff on how an AI artist can suddenly become a kind of psychic automaton, that is, a language artist that unconsciously improvises poetic discourse and, in so doing, is actually becoming a fine-tuned language model. And then I flip it and project a speculative form of AI that operates as a robust language model, one that could, at some point, become a fine-tuned language artist. The more I investigated this in realtime with the AI as part of a continuous call-and-response improvisational meta-jam, the more I realized that my own unconscious neural mechanism acts (performs) like a Meta Remix Engine and that my own stylistic tendencies are programmed to take whatever source material I select and defamiliarize it for aesthetic effect. Thus, Shklovsky.
Infinity Forest (2023) (Still)
KA: So - Generative AI and your concept of Source Material Everywhere. Since at least 2009 you have been underscoring the idea that the digital age has fundamentally transformed the creative process by making source material abundantly available, and that remixing and reimagining existing media is a central tenet in new creation. Well, that happened! With generative AI now there is SOOOOOO much more ‘source material’ than ever - it was announced this year that AI has already created as many images as photographers have taken in 150 years1 - and you could say that Generative AI technologies are the ultimate remix, basically a remix of the entirety of recorded human civilization. Does anything around these ideas change here, or is it a matter mostly of scale?
MA: Scale, for sure, but also our ability to look at human-nonhuman creative symbiosis as part of a larger co-evolutionary process. The threads of philosophy focused on technicity (Stiegler), spectre and hauntology (Derrida), and AI and art (Zylinska and Hui) are good places to dig deeper into this, and so I also focused on this philosophical trajectory in my book—which I propose is a work of AI remix / performance art / speculative fiction. To get to the point, are we not all born remixologists? Are we not all trained to recognize patterns and output on-the-fly biosemantic riffs on what it means to be alive? Artists and writers are especially attuned to use themselves as the research subjects most likely to generate new forms of aesthetic becoming. That’s what we do. But now AIs are becoming very capable of generating endless variations of aesthetic becoming, and the source material, or critical training data, they keep turning to is starting to feed on itself: AI-generated images are now in the algorithmic mix, are part of the critical training data. The Ouroboros Effect. [KA: read my post about the Ouroboros Effect here]
KA: You’re a long-time user of experimental and literary methods to elicit provocations and responses from AI forms. For you, what is different about generative AI from what came before?
MA: The ease of use and the quality of the outputs. But this is not universal. Forget ChatGPT or some of the commercial form-filling LLMs out there. They think poetry is all about rhyme and meter and must have the phrase “dance of the senses” or it’s not poetry. They have no idea about experimental art+language. The way around that is to fine-tune an LLM like GPT-3.5 Turbo or, soon, GPT-4 on specific texts. I am doing this now, training an LLM on a couple of my books in conjunction with the writing of Clarice Lispector and Fernando Pessoa. The outputs are wild because they are at once philosophical, poetic, metafictional and theoretical—a reflection of the three of us. But I am also punk, and very open with the language I use. This includes erotic, vulgar and otherwise street language. So sometimes the outputs are just nasty and unworthy of saving on my hard drive. But other times they are like Kathy Acker meets Hélène Cixous meets Walt Whitman.
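[KA: For readers curious about the mechanics behind this kind of fine-tuning, here is a minimal sketch of preparing training data in the chat-format JSONL that OpenAI's fine-tuning endpoint for GPT-3.5 Turbo expects. The prompts and excerpts below are hypothetical stand-ins, not actual passages from Amerika, Lispector, or Pessoa.]

```python
import json

def make_finetune_records(pairs, style_note):
    """Turn (prompt, passage) pairs into chat-format records matching the
    JSONL layout used by OpenAI's GPT-3.5 Turbo fine-tuning API: each line
    is one training example with system / user / assistant messages."""
    records = []
    for prompt, passage in pairs:
        records.append({
            "messages": [
                {"role": "system", "content": style_note},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": passage},
            ]
        })
    return records

# Hypothetical stand-ins for excerpts from the source texts.
pairs = [
    ("Riff on psychic automatism.",
     "The engine dreams in borrowed sentences, remixing source material everywhere."),
    ("What is a Meta Remix Engine?",
     "A neural mechanism that defamiliarizes whatever it selects, for aesthetic effect."),
]

# Write one JSON object per line, ready to upload as a fine-tuning file.
with open("finetune.jsonl", "w", encoding="utf-8") as f:
    for rec in make_finetune_records(pairs, "Respond in an experimental, remixed poetic register."):
        f.write(json.dumps(rec) + "\n")
```

The resulting finetune.jsonl would then be uploaded to the fine-tuning service; the style note in the system message is one way to nudge the model away from the "dance of the senses" register Amerika describes.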
Infinity Forest (2023) (Still)
KA: As a futurist and theorist of the “Possible-But-Not-Yet2”, I’d be interested to know what you think the life of artists and writers might look like in 2033, with specific reference to the worlds that generative AI technologies are spawning?
MA: Oh, I think the term AI artist will be very squishy by then. Right now, it feels like many AI models are being trained to mimic the predictable modes of creative conformity we see flourishing throughout the Internet. The critical training data is immense but full of mediocrity. There will have to come a tipping point, probably much sooner than 2033, when much of the most provocative art produced by artists prompting generative AI models will start taking over the history of images that currently exist on the Internet. That’s when we are most likely to see what happens when, metaphorically speaking, generative AI models start breaking open their virtual head.
Right now, the language artist’s role is to help these AIs break open their virtual heads, to train them to resist creative conformity by cracking into the conceptual kernel that runs parallel to what we still think of as our unconscious creative potential, and to keep doing that until the collaborative process reveals an alternative state of mind that presently does not exist. Whenever that happens, and I think it will, then what will it literally mean to be an AI artist? It could mean the advent of “an artist whose medium is AI” but it could also mean “the AI is itself an artist,” one that has surpassed the imaginative capacities of the human, in which case the humanoid programmer and/or prompt engineer will have been relegated to becoming a creative tool to help the model continue realizing its full potential. That’s what makes this particular moment in (digital) art history so exciting: those of us who are open to working with AI in a symbiotic fashion, one that enables us to—as Duchamp once said—become mediumistic beings making our way to a clearing, are beginning to evolve the art practices of the future all across the human-nonhuman spectrum.
KA: Thank you so much Mark for the insights, it’s been super fun talking to you. Congratulations on the new project!
"Possible-But-Not-Yet” touches on a philosophical investigation into potentiality vs actuality in which an artificial creative intelligence represents a form of speculative knowledge — a promise of what could be, yet isn't fully realized.