Today, a thought experiment about generative AI text-to-image capabilities in a superdisciplinary medium-term future.
Emphasis on thought experiment.
Today, on these platforms, you enter a text-based prompt and the AI produces an image.¹
But anyway, the text of these prompts is designed to elicit a certain result from the AI. This text-to-image capability creates a necessary, even total, link between text and image, at least at the level of the single instance, the specific interaction. Whether simple or incredibly detailed, the prompt induces a response from the AI that is output in image form.
So, in future, what if the texts in question are understood more broadly than a specific prompt? What if we start to treat any and every text as a prompt? What if a future outcome is that we, as a culture, connect these things as a matter of course, so that everything textual is also used to produce an image?
We would put a legal document into a text-to-image generator, and the resulting image would be as pertinent to the agreement as its terms.
We would live in a whole world of documents and texts that have a parallel existence in image form.
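To make the mechanics concrete, here is a minimal sketch of the "any document is a prompt" idea, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint. The checkpoint name and file name are illustrative, and a real version would hit an immediate snag: CLIP-based text encoders truncate prompts at 77 tokens, so most of a long document would be silently ignored unless it were chunked or summarized first.

```python
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline


def parallel_image(document_path: str, out_path: str = "parallel.png") -> None:
    """Render the 'parallel image' of an arbitrary text document."""
    text = Path(document_path).read_text()

    # Checkpoint name is illustrative; any text-to-image pipeline would do.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The whole document, treated as a prompt. Note: the CLIP text
    # encoder truncates at 77 tokens, so a long document is mostly
    # discarded; a real system would chunk or summarize it first.
    image = pipe(text).images[0]
    image.save(out_path)


parallel_image("lease_agreement.txt")  # hypothetical file
```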
Could you take a policy document and see what image(s) it produces? What if you could tell something was wrong with it because it produced a naff fantasy fairy kingdom?
What if, when designing a building, the plan isn’t right until the resulting images are too. We could walk into a crappy condo and know right away that its image equivalent has 15 fingers.
You write a social marketing campaign and the text is so specific it produces its own imagery.
You write a novel, and when the novel is used as a prompt, it produces an AI-generated video that completes the perfection of the novel.
Gesamtkunstwerk without the Nazi part?
(Of course we know that we are already way beyond the single-click image generation situation, and that learning how to prompt effectively is a focal point for any discussion about AI. For the purposes of this discussion, take that as a given.)