Previously, I wrote about Jakob Nielsen’s formulation of how AI introduces the first major shift in UI principles in 60 years. This is related to that, but just … more.
I just saw the demo for Rabbit, which is a new approach to AI-enabled actions paired with a new hardware device.
The idea is that app-based interfaces, now about 15 years old, are outdated, and that even with the massive potential of LLMs there is still something left to solve in how apps, including things like ChatGPT, interact with each other and translate commands into action. With Rabbit, founder Jesse Lyu posits what he calls a Large Action Model (LAM), as opposed to a Large Language Model (LLM). The idea: you can ask it to do things, and it can really, actually do those things for you, even complex, multi-platform, multi-step tasks.
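To make the LAM idea a little more concrete, here's a toy sketch of what "translating commands into action" could look like: a request comes in as natural language, gets turned into a plan of steps across several apps, and each step is then executed. To be clear, every app name, function, and the hard-coded plan below are my own hypothetical stand-ins, not anything Rabbit has described; a real action model would infer the plan rather than pattern-match a keyword.

```python
# Toy illustration of the "action model" idea: translate a natural-language
# request into a plan of concrete steps across several (stubbed) app APIs.
# All services and operations here are hypothetical -- this is NOT Rabbit's
# actual architecture, just a sketch of command -> plan -> execution.

from dataclasses import dataclass

@dataclass
class Action:
    app: str        # which app/service the step runs against
    operation: str  # what to do in that app
    args: dict      # parameters for the operation

def plan(request: str) -> list[Action]:
    """Stand-in for the model: map a request to a cross-app action plan."""
    # A real system would infer this; here it is hard-coded for one request.
    if "trip" in request.lower():
        return [
            Action("calendar", "find_free_weekend", {"month": "June"}),
            Action("flights", "book_cheapest", {"to": "Lisbon"}),
            Action("messages", "notify", {"who": "partner", "text": "Booked!"}),
        ]
    return []

def execute(actions: list[Action]) -> None:
    """Stand-in for the execution layer that drives each app on your behalf."""
    for step in actions:
        print(f"[{step.app}] {step.operation}({step.args})")

if __name__ == "__main__":
    execute(plan("Plan a weekend trip to Lisbon in June"))
```

The interesting (and hard) part is, of course, the planning step: doing that reliably across arbitrary apps is the whole pitch.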
So many things to say about this, but the first is that this will bring massive changes to user interface design. What will that even mean now? We've already been watching natural language start to dominate human-computer interaction, but this is next level.
Secondly, I'm so curious how this will/can translate into creative production. There has been a massive proliferation of AI-based tools that can do various (mind-boggling) things in image production, coding, generative video, and more, but if you can bundle them all together behind a natural language interface that is truly sophisticated and reliable, what might those possibilities look like?
This is amazing! Hope I live long enough to have one.