So apparently it is now possible to create a really good replica of someone’s personality by having them converse with an AI for around 2 hours. New research from Stanford and Google DeepMind is focused on capturing a person’s uniqueness using qualitative interviews and teaching a language model to behave accordingly. This new you is called a simulation agent.
This is different from a chatbot or “tool-based agent”: simulation agents are more adaptable and complex, and can mimic human-like behaviour in nuanced interactions and situations.
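Roughly, as I understand it, the recipe is: interview transcript in, persona-conditioned model out. Here is a toy sketch of that shape in Python; the prompt wording and the invented mini-transcript are my own guesses at the general idea, not the actual Stanford / DeepMind pipeline.

```python
# Toy sketch of a "simulation agent": condition a language model on a
# person's interview transcript so it answers the way they would.
# Everything here (sample transcript, prompt wording) is illustrative only.

INTERVIEW = """\
Q: What do you do for fun?
A: Mostly argue with strangers about fonts, honestly.
Q: What would you never compromise on?
A: Oxford commas and lunch breaks.
"""


def build_persona_prompt(transcript: str) -> str:
    """Turn a qualitative interview into a system prompt for the agent."""
    return (
        "You are a simulation agent for the person interviewed below.\n"
        "Answer new questions the way they would, matching their stated\n"
        "values, preferences, and style of speaking.\n\n"
        f"--- INTERVIEW TRANSCRIPT ---\n{transcript}--- END TRANSCRIPT ---"
    )


if __name__ == "__main__":
    # In practice you would send this prompt, plus a new question, to
    # whatever chat model you use; here we just print the conditioning text.
    print(build_persona_prompt(INTERVIEW))
```

The real research presumably does far more than stuff a transcript into a prompt, but the basic move of standing up a second "you" from a couple of hours of conversation is the part that matters for everything below.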
I have dreamt of having a robot self for such a long time, mostly when doing mandatory DEI training, being in a breakout room, or doing anything with Microsoft 365.
But this goes way beyond that - this other self will know your preferences and personal style well enough to actually act like you.
I find it super interesting that one use case for this is to conduct social science research that would otherwise be unethical, i.e., once you have AI models that can behave like real people, you can do whatever you want to them, MK Ultra-wise, and no one is harmed.
Which is kind of crazy - it almost seems like a victimless crime, but in other ways it may be a beautiful new mechanism with which to train and refine psychopaths. (If you can see it you can be it?)
Questions:
What would this do to participatory design research?
Does it make sense to imagine more than one simulation agent trained on a single person?
If so, would 10 simulation agents respond the same way as each other?
At what point and in what settings would the activities of a replicated self represent a conflict of interest?
(If you thought research ethics boards were hamstrung by bureaucratic indecision and overanalysis before…)
Who is going to be the first artist to collaborate with their simulation agent and will they still need a project manager?