First, for those of you who are unfamiliar, pink slime is the name given to “LFTB”, or “Lean Finely Textured Beef”, which is basically a disgusting meat byproduct used as a filler. I like how the Wikipedia entry notes that the term was not coined by the food industry.
By the way do NOT do an image search for pink slime.
The term has recently been applied to journalism, in this article about how generative AI is being used to churn out a stream of meaningless content that floods the internet with hundreds of articles a day, all to generate ad revenue and mine user data.
The concept has recently been explored by Matthew Kirschenbaum in the Atlantic, in what he calls the “textpocalypse”. (Link to a good interview here, for those without an Atlantic subscription.) For Kirschenbaum, the textpocalypse is a grey-goo scenario in which we will increasingly find ourselves drowning in machine-generated text. Among the impacts he foresees are a loss of human craft, an increased commodification of text, and a decrease in our ability to critically evaluate what we read. But along with these come other shifts in our relationship to writing, which introduce abundant creative potential for critical and practice-based experimentation with emerging technologies.
Part of Kirschenbaum’s conclusion is that not only will there be a grey-goo tsunami of text on a whole new order of magnitude, but that the grey goo will be personalized. Noting that this is already in play in individualized advertising, he predicts a future in which websites and the like could “literally be rewritten in response to individual users.”
This came up in one of my previous posts in reference to user interfaces, and in another about infinite recursion.
But to take it further, what happens when our textual pink slime starts to feed on itself? What effects can we expect when my personalized version of a pest control website (which replaces the word “pest” with “spatially spontaneous non-human animal” so as to spare my delicate feelings) gets reabsorbed into the predictive AI model?
What is the technical term for existential threat-level disorientation?
So as it turns out, this is exactly what has happened on the image side of things. There is a new paper from Rice and Stanford in which researchers show that feeding AI-generated content back into AI models causes them to consume themselves in an ouroboros-like manner, like a snake or dragon eating its own tail.
By the way, my favourite part of the ouroboros myth is the concept of the hoop snake, a creature that actually wheels around attacking people.
The ouroboros has historically been used as a symbol of infinity, and sure enough, the more you try to picture it, the crazier you will feel. But anyway, in this paper the researchers were able to observe the visual impacts of this AI-enabled infinity, which they have called Model Autophagy Disorder (MAD).
"Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease." Link
Among SO many other acute scholarly research questions, mine is: hey, is this like Pop Will Eat Itself?