A friend told me recently about trying to use a virtual try-on feature on a major retailer’s website. The idea was straightforward: upload a photo, see how a jacket looks on you. What the interface never considered was that she could not see the result. No alt text. No description. No way to make the experience meaningful. The feature existed. It simply did not exist for her.
That experience is exactly what Neha Puri and I set out to document in our paper, Designing AI for Real Users: Accessibility Gaps in Retail AI Front-Ends. And I am genuinely pleased to say that paper has been accepted into a workshop at CHI 2026.
Why CHI Matters
CHI is not a minor event. The ACM Conference on Human Factors in Computing Systems is the most prestigious academic conference in human-computer interaction. It has run since the early 1980s. Acceptance rates for full papers consistently sit around 20 percent. The conference attracts thousands of submissions from researchers, practitioners, and technologists across the world, and it has shaped how the computing industry thinks about the people who actually use the systems it builds.
Getting accepted to any part of CHI is a significant achievement. Getting a paper into a dedicated ethics workshop there, in front of an international, cross-disciplinary audience, means this work is being taken seriously at the highest level of the field.
The Workshop: Ethics at the Front-End
The workshop is titled Ethics at the Front-End: Responsible User-Facing Design for AI Systems. It will run on 15 April in Barcelona, with a keynote from Professor Ben Shneiderman, one of the most cited and respected voices in HCI.
The framing of the workshop is important. Most ethical scrutiny of AI focuses on what happens behind the scenes: the model architecture, training data, bias in decision-making, governance frameworks. That scrutiny is necessary. But it leaves an enormous gap. The ethics of how AI is actually experienced by real users, through real interfaces, has received far less attention. The workshop exists to address that gap.
What Our Paper Found
Neha and I examined three types of retail AI front-end: virtual assistants, virtual try-on systems, and hyper-personalised recommendation engines. These are not experimental technologies. They are live, customer-facing systems used by millions of people every day.
What we found is that, despite being framed as intuitive and inclusive, these systems are built around an assumed user: one who is sighted, hearing, neurotypical, and able to use a standard interface without modification. People with vision impairments, hearing differences, motor disabilities, cognitive differences, speech differences, and sensory differences are systematically marginalised, not by deliberate exclusion, but by default assumptions baked into the design.
The paper puts it plainly: these AI front-ends implicitly assume an ideal user body and mind.
While much of the AI ethics discourse has focused on back-end systems, the decisions made at the front-end, about how AI is experienced, are also values-laden decisions. They reflect choices about who the system is designed to serve.
From BBEB to Barcelona
I have been writing about digital accessibility on this blog for some time now. Posts about screen reader compatibility, colour contrast, accessible PDFs, WCAG standards, the commercial and regulatory case for inclusion. That writing has had a purpose beyond the posts themselves.
Within Intertek, it has helped build awareness. It has created conversations that would not otherwise have happened. It has contributed to an environment where accessibility is not an afterthought but a question asked early, including in how we think about new AI services and the experiences they create.
This paper is a direct product of that environment. Sustained, consistent advocacy for accessibility, in writing, in conversation, in the work itself, creates conditions for collaboration. That collaboration produced research that will now be presented at the most important conference in its field.
Takeaway
If your organisation is building or procuring AI tools, the question to ask is not only whether the model is fair. Ask whether the interface is accessible. Ask whether your virtual assistant works with a screen reader. Ask whether your personalisation engine adapts to users who cannot see, hear, or interact in the way the designer assumed. The ethics of AI do not stop at the model. They extend to every pixel of the experience you put in front of your users.
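To make that concrete, here is a minimal sketch, purely illustrative and not taken from the paper, of what "the interface is accessible" can mean for a virtual try-on result. The TryOnResult type, the renderTryOnResult function, and the description field are hypothetical names; the point is that the text alternative is a required input to the component, not an optional extra bolted on later.

```typescript
// Hypothetical sketch: rendering a virtual try-on result so that a screen
// reader user gets a meaningful description, not silence.

interface TryOnResult {
  imageUrl: string;
  // Human-readable description of the generated image, e.g.
  // "Navy waterproof jacket, hip length, shown on your uploaded photo."
  description: string;
}

function renderTryOnResult(container: HTMLElement, result: TryOnResult): void {
  const img = document.createElement("img");
  img.src = result.imageUrl;
  img.alt = result.description; // read aloud by screen readers

  // Announce the update to assistive technology without stealing focus.
  const status = document.createElement("p");
  status.setAttribute("role", "status");
  status.setAttribute("aria-live", "polite");
  status.textContent = `Try-on preview updated: ${result.description}`;

  container.replaceChildren(img, status);
}
```

The design choice worth noticing is that making the description a required field forces the question upstream: the AI pipeline has to produce a text alternative before the front-end can render anything at all, rather than leaving alt text as something a developer may or may not remember to add.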
Sources and further reading:
Neha Puri and Tim Dixon, Designing AI for Real Users: Accessibility Gaps in Retail AI Front-Ends — the full paper
Ethics at the Front-End: Responsible User-Facing Design for AI Systems — the CHI 2026 workshop page
ACM CHI 2026, Barcelona — conference homepage
Much ethics discourse revolves around the design of back-end systems, but the design of what users experience at the front-end entails many values-laden decisions too. (Ethics at the Front-End workshop, CHI 2026)