Hey – it’s Fiona!

Last week we looked at adaptive interfaces and the art of deciding what should bend and what should never break. But that naturally raises a deeper question: when technology adapts, who gets left out?

Bias isn’t just an academic issue. It seeps into everyday product design, often without us realising it. And once AI is in the mix, those small oversights don’t stay small. They get amplified across thousands of decisions, shaping who gets recommended, who disappears, and who feels excluded.

Where bias comes from

Most of the time, bias isn’t deliberate. It slips in through design choices, data gaps, or the way agents process information. Three common sources stand out:

  • Data bias: training sets skewed towards one group, leaving others poorly served (e.g. voice assistants that struggled with women’s voices, or dermatology AI misreading darker skin tones).

  • Design bias: adaptive logic that assumes too much about what’s “best,” hiding options and reducing user agency.

  • Agent bias: AI agents amplifying companies with the cleanest metadata, not necessarily the best quality.

Each one feels minor in isolation, but together they create products that look useful while quietly excluding whole groups of people.
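The first of these sources, data bias, is also the easiest to start measuring. Here's a minimal sketch of a representation audit: count each group's share of a dataset and flag anything below a chosen floor. The field names, the 10% floor, and the voice-clip data are all hypothetical stand-ins for your own schema.

```python
from collections import Counter

def representation_gaps(samples, group_key, floor=0.10):
    """Flag groups whose share of the dataset falls below `floor`.

    `samples` is any iterable of dicts; `group_key` names the
    attribute to audit (e.g. "accent" or "skin_tone"). Both are
    illustrative, not a prescribed schema.
    """
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < floor}

# Illustrative only: a voice dataset skewed towards one accent.
voice_clips = (
    [{"accent": "US"}] * 80
    + [{"accent": "UK"}] * 15
    + [{"accent": "Nigerian"}] * 5
)
print(representation_gaps(voice_clips, "accent"))  # {'Nigerian': 0.05}
```

A check this crude won't catch every gap, but it forces the question "who is missing?" into the build process rather than leaving it to post-launch complaints.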

Why this matters now

Humans can sometimes work around quirks, but when bias feels unfair, trust collapses. And with AI agents acting as the first filter between people and products, the margin for error is tiny.

A human might stumble across your product by chance. An AI agent won’t. If your site hides key facts in PDFs or uses vague labels, the agent moves on and the human never even sees you. That isn’t just about SEO. It’s about visibility, access, and trust.

Four principles for ethical design

One way I think about ethics is the same way I think about usability: not as a box to tick once, but as a constant lens. Four principles help keep bias in check:

  • Representation: audit who’s in your dataset and who’s missing, including edge cases.

  • Transparency: explain decisions clearly to humans and structure them for agents.

  • Choice: keep options visible or at least discoverable. Don’t lock people into the system’s guess.

  • Accountability: design reversibility, so users can recover quickly when the product is wrong.

These aren’t abstract values. They’re as practical as writing service details in plain text instead of hiding them in an image, or making sure schema markup matches what your site actually says.
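That last point, schema markup matching what your site actually says, can be spot-checked automatically. Here's a rough sketch that compares string values in a JSON-LD block against the page's visible text; the example markup and page copy are invented for illustration, and a real check would handle nested objects and normalisation.

```python
import json

def schema_matches_page(jsonld: str, visible_text: str) -> list:
    """Return schema values that never appear in the page's visible text.

    The idea: every claim in your structured data should also be
    stated in plain text a human can read.
    """
    data = json.loads(jsonld)
    missing = []
    for key, value in data.items():
        # Skip JSON-LD keywords like @type; only compare plain strings.
        if isinstance(value, str) and not key.startswith("@"):
            if value.lower() not in visible_text.lower():
                missing.append((key, value))
    return missing

markup = '{"@type": "LocalBusiness", "name": "Acme Repairs", "priceRange": "$$"}'
page = "Acme Repairs fixes bikes. Open weekdays."
print(schema_matches_page(markup, page))  # [('priceRange', '$$')]
```

If the structured data promises something the page never states, either the page is hiding information from humans or the markup is overselling to agents. Both are trust problems.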

Seeing bias in the wild

Think about an onboarding flow in a SaaS product that hides advanced features “for simplicity.” For new users, this may be fine, but experienced users might struggle to reach the tools they need. Or a marketplace that ranks providers on “responsiveness,” unintentionally penalising those with different working patterns.

In publishing, I’ve seen independent journalists vanish from AI-curated feeds because their articles lacked metadata, even though their reporting was credible. The bias wasn’t in the content, it was in the system’s rules.

These examples aren’t edge cases. They’re reminders that bias often hides in the smallest details of our design choices.

A quick exercise

Take one decision your product makes automatically (e.g. a “best match,” a recommendation, or an adaptive rule). Ask yourself: who benefits most? Who might be disadvantaged? And if an AI agent repeated this decision thousands of times, would it reinforce inequality?

If any of those questions gives you pause, you've spotted a bias risk worth fixing.
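The third question, what happens when a decision is repeated thousands of times, can even be sketched as a quick simulation. Here's a toy version of the earlier marketplace example: two providers of identical quality, where one group's measured "responsiveness" runs lower simply because they work different hours. Every number here is invented for illustration.

```python
import random

random.seed(0)  # deterministic for the sake of the example

def pick_provider(providers):
    # The automatic decision: always recommend the fastest responder.
    return max(providers, key=lambda p: p["response_score"])

def simulate(trials=10_000):
    """Repeat the 'best match' decision many times and tally who wins."""
    wins = {"A": 0, "B": 0}
    for _ in range(trials):
        providers = [
            # Group B's score is shifted down by working pattern,
            # not by service quality.
            {"group": "A", "response_score": random.gauss(0.8, 0.1)},
            {"group": "B", "response_score": random.gauss(0.6, 0.1)},
        ]
        wins[pick_provider(providers)["group"]] += 1
    return wins

print(simulate())
```

A modest per-decision gap turns into group B winning only a small fraction of recommendations, which is exactly how a tiny design choice compounds into systematic exclusion.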

Why I’m optimistic

It’s easy to see ethics as a brake pedal. But in practice, building for fairness often makes products better. Interfaces become clearer, structures more consistent, and trust grows deeper.

And in a world where most competitors are chasing speed or flashiness, trust is the differentiator that compounds over time. It’s what will make both humans and AI agents choose you.

Until next week, I’d love to hear your thoughts:

Where have you seen bias creep into products — obvious or subtle?

Hit reply and tell me.

Talk soon,
Fiona

Fiona Burns

Work with me

Alongside writing Beyond the Screen, I help founders and product teams design digital products their users (and AI agents) can’t ignore.

That might mean validating an early idea, shaping the first version of a marketplace, or redesigning a website so it’s easier for both people and machines to understand.

If you’re building something new and need UX/UI support, head over to my website to see how we could work together.
