What is the relationship between time and perfection when it comes to how an AI improves, grows, adapts?
For a mega platform like Google Gemini, can training ever be complete?
But just conceptually, would it even be possible - given the complexity of the world, the data it spawns, the technologies themselves, and the multiple worlds made manifest by these models - for training to be complete?
If it were possible to complete the training of an AI, how would we know?
Would we consider the training complete when the AI stops getting it wrong?
What is the difference between never getting it wrong and always getting it right?
Will “getting it right” mean an output that spits out perfect realism, or some truthy layer cake that “we all” understand, or maybe a creation tool so sophisticated that it can output absolutely anything a creator asks for?
What references and vocabularies should we use to discuss the perpetual imperfection of something like a model? Maybe we turn to a principle like Wabi-Sabi, which “nurtures all that is authentic by acknowledging three simple realities: nothing lasts, nothing is finished, and nothing is perfect,”[8] and in which the principles of transience and imperfection are what create beauty in an object.
Will we look back on these early days of creating with Generative AI tools as a distinct moment in aesthetic history, maybe akin to glitch aesthetics?
Could we imagine a stream of images rolled back to a certain moment, like a wayback machine that shows what the technologies were doing - or able to do - at that time?
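To make that thought experiment slightly concrete: a minimal sketch of such an archive might look like the following, where each stored output is keyed by the date and model version that produced it. All of the names here (Snapshot, OutputArchive, the version labels) are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Snapshot:
    created: date        # when the image was generated
    model_version: str   # illustrative label, e.g. "model-v1"
    image_uri: str       # pointer to the stored output

@dataclass
class OutputArchive:
    snapshots: list[Snapshot] = field(default_factory=list)

    def record(self, snapshot: Snapshot) -> None:
        """Append one generated output to the running stream."""
        self.snapshots.append(snapshot)

    def roll_back_to(self, moment: date) -> list[Snapshot]:
        """Return everything generated up to a given moment: what the
        tools were doing, or able to do, at that time."""
        return [s for s in self.snapshots if s.created <= moment]

archive = OutputArchive()
archive.record(Snapshot(date(2023, 3, 1), "model-v1", "img/0001.png"))
archive.record(Snapshot(date(2024, 6, 1), "model-v2", "img/0002.png"))

# Rolling the stream back to the end of 2023 surfaces only what the
# earlier model could do.
print(archive.roll_back_to(date(2023, 12, 31)))
```

Rolling the stream back is then just a query by date; what we would read into the results is the harder question.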
How should we be thinking about this stream of outputs as something that exists in history?