Hey, it’s Fiona!
We’ve spent the last couple of weeks exploring AI agents as your new audience. But there’s a second shift happening that changes not just who we design for, but what the interface itself even is.
I’m talking about generative UX/UI.
That phrase gets thrown around a lot, so let’s take a closer look at what this means. Generative UX/UI is the practice of designing digital interfaces that adapt or generate themselves in response to user context, behaviour, or AI agents.
At first it sounds futuristic, a bit sci-fi. But the reality is, you’ve probably already brushed up against it. And it’s going to be a big part of UX design over the next decade.
A Spotify moment
Think about Spotify’s AI DJ. It’s not just picking songs; it’s shaping your interface experience in real time. Instead of you scrolling through a static list of playlists, the “DJ” narrates, changes the sequence, and presents music differently depending on what you’ve listened to before.
Notion is another example. You start with a blank page, but the system suggests blocks, layouts, or even full templates because it has “read” the context of your writing. It’s a generative interface because it’s not serving you the same fixed set of choices; it’s actively shaping itself to what it thinks you need.
These may look like small tweaks, but they hint at something much bigger: the end of the interface as a fixed object.
Why this is such a shift
As designers, we’ve been trained to think of screens as something we lock down. We arrange buttons, choose the headline, map the journey. Even with responsive design, the structure stays fundamentally the same across devices.
Generative UX/UI breaks that assumption. Two users might see entirely different interfaces. A booking flow might reorder itself depending on whether you’re travelling with children or using loyalty points. A shopping experience might bring product comparisons forward if an AI agent is driving the search, while showing lifestyle imagery if a human is browsing casually.
It’s not about personalisation as we’ve known it — swapping a first name into an email. This is the interface itself behaving more like a living system.
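To make that concrete, here’s a rough sketch of what “the interface as rules” could look like in code. Everything in it (the BookingContext shape, the step names) is invented for illustration; it’s a thought experiment in TypeScript, not how any real booking product works.

```typescript
// Hypothetical context an adaptive booking flow might receive.
// None of these names come from a real product; they're for illustration.
type BookingContext = {
  travellingWithChildren: boolean;
  usingLoyaltyPoints: boolean;
};

// The default sequence of steps in the flow.
const baseSteps = ["dates", "passengers", "extras", "payment"];

// Instead of designing one fixed screen, we write rules that reshape the flow.
function orderSteps(ctx: BookingContext): string[] {
  const steps = [...baseSteps];
  if (ctx.travellingWithChildren) {
    // Surface family-relevant extras earlier in the journey.
    steps.splice(steps.indexOf("extras"), 1);
    steps.splice(1, 0, "extras");
  }
  if (ctx.usingLoyaltyPoints) {
    // Add a points step before payment so it isn't buried at checkout.
    steps.splice(steps.indexOf("payment"), 0, "loyalty");
  }
  return steps;
}

console.log(orderSteps({ travellingWithChildren: true, usingLoyaltyPoints: false }));
// -> [ 'dates', 'extras', 'passengers', 'payment' ]
console.log(orderSteps({ travellingWithChildren: false, usingLoyaltyPoints: true }));
// -> [ 'dates', 'passengers', 'extras', 'loyalty', 'payment' ]
```

Two users, two different flows, and nobody hand-placed either one. The design work moved from arranging the screen to writing the rules.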
Quick pulse check
I’m curious how you feel about this shift.
Click the one that fits you best. It’s anonymous, and I’ll share the results in next week’s issue so you can see where others stand.
The challenges nobody talks about
Of course, there are downsides. Humans crave predictability. If the checkout looks different today than it did yesterday, will people still trust it? What happens when the AI’s guess about what you need is wrong and the interface hides the very option you were looking for?
And then there’s bias. If generative interfaces build themselves based on past behaviour, they risk hard-coding inequalities into the product.
Loss of predictability: inconsistency undermines trust.
Hidden bias: adaptive interfaces can reinforce stereotypes.
Testing complexity: when there’s no final state, QA shifts from screens to rules (see the sketch below).
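To show what testing rules instead of screens might feel like, here’s a tiny TypeScript sketch. The generateVariants function is a made-up stand-in for whatever your generative system would be; the part that matters is the invariant check at the end.

```typescript
// A generated screen, reduced to the only thing we check here: its actions.
type Screen = { actions: string[] };

// Stand-in for whatever actually generates interface variants per context.
// In reality this is your generative system; it's stubbed here for illustration.
function generateVariants(contexts: { loyaltyMember: boolean }[]): Screen[] {
  return contexts.map((ctx) => ({
    actions: ctx.loyaltyMember
      ? ["Use points", "Pay now"]
      : ["Apply voucher", "Pay now"],
  }));
}

// The rule under test: whatever the system generates,
// "Pay now" must always be present.
function payNowAlwaysPresent(variants: Screen[]): boolean {
  return variants.every((v) => v.actions.includes("Pay now"));
}

const variants = generateVariants([
  { loyaltyMember: true },
  { loyaltyMember: false },
]);

console.log(payNowAlwaysPresent(variants)); // true: the invariant holds
```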
That’s the uncomfortable reality. But it’s also where the most important design work sits.
Stable vs flexible zones in generative UX
The temptation is to panic and assume everything has to change overnight. It doesn’t. What helps is to reframe your role. Instead of obsessing over pixels, think about principles.
Which parts of your product must remain stable for trust, safety and usability? Pricing pages, navigation, and core actions like “Pay now” shouldn’t reinvent themselves every five minutes. But there are other areas where flexibility could actually enhance the experience: content recommendations, learning paths, even onboarding flows.
If you start mapping your product in terms of “stable zones” and “flexible zones,” you create a way to experiment without losing the predictability that humans need.
That’s the mindset shift: design the rules of the system rather than the one perfect screen.
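A practical way to start is to write those zones down as plain data that your team (and eventually a generative system) can read. Here’s a minimal sketch, with zone names and rationales invented for illustration:

```typescript
// A hypothetical "zone map": which parts of the product may adapt, which may not.
// The areas and rationales are invented; yours would come from your own product.
type Zone = {
  area: string;
  mode: "stable" | "flexible";
  rationale: string;
};

const zoneMap: Zone[] = [
  { area: "pricing", mode: "stable", rationale: "Trust: prices must never be ambiguous" },
  { area: "navigation", mode: "stable", rationale: "Predictability: users rely on muscle memory" },
  { area: "checkout-core", mode: "stable", rationale: "Safety: 'Pay now' always looks the same" },
  { area: "recommendations", mode: "flexible", rationale: "Adaptation here adds value, not risk" },
  { area: "onboarding", mode: "flexible", rationale: "Can bend to the user's actual questions" },
];

// A generative system would only ever be allowed to touch the flexible zones.
const generatable = zoneMap
  .filter((zone) => zone.mode === "flexible")
  .map((zone) => zone.area);

console.log(generatable); // [ 'recommendations', 'onboarding' ]
```

The value isn’t the code itself; it’s that the rules are now explicit enough to debate, test, and hand to a machine.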
Why I’m excited anyway
This isn’t the first time we’ve had to rethink our craft. Remember the painful transition from desktop-first to mobile-first design? At first it was awkward. But once we embraced it, the creativity that followed was incredible. Entirely new interaction models appeared.
Generative UX/UI is that kind of shift. Yes, it feels messy and uncertain. But it also opens the door to experiences that are more responsive to real context than anything we could hand-craft.
Imagine an onboarding that bends to the exact questions a user has in the moment. Or a dashboard that reorganises itself around the decision you’re trying to make.
That’s the prize.
Something to try this week
Take a flow you’ve already designed (maybe your checkout, maybe your sign-up). Imagine you had to brief an AI system to generate variations of that flow for different contexts. What rules would you need to write down? What’s sacred, and what can flex?
That single thought experiment is your starter kit for generative UX/UI.
If you’re hungry for more, I’ve linked a few reads I rate:
Talk soon,
Fiona

Fiona Burns
Work with me
Alongside writing Beyond the Screen, I help founders and product teams design digital products their users (and AI agents) can’t ignore.
That might mean validating an early idea, shaping the first version of a marketplace, or redesigning a website so it’s easier for both people and machines to understand.
If you’re building something new and need UX/UI support, head over to my website to see how we could work together.