Summary: A response to Jakob Nielsen’s blog post declaring accessibility to have failed and proposing an artificial intelligence-based solution. I critique his claims and his solution, and identify what I perceive to be a fundamental misconception of what accessibility is.
I have observed that the more important you become, the more susceptible you are to promoting bad ideas. When you become so successful that you achieve Thought Leader status, you end up proclaiming that things are broken and that you have a novel solution. And like most novel solutions, it’s probably based on a basic misunderstanding of the problem you’ve identified.
Why does this happen? My hypothesis is that as you gain distance from the day-to-day practice of your field, you begin to lose track of the real-world constraints and consequences of your work. When you’re deep in the midst of doing the work, there’s definitely a risk of not being able to see the forest for the trees. But likewise, once you reach a high enough altitude the forest begins to blur into splotches of green that lose their nuance.
This is, of course, just a personal observation with an untested hypothesis. I would never be so presumptuous as to proclaim my musings as some kind of universally-applicable law.
This is the uncharitable lens through which I read Jakob Nielsen’s Accessibility Has Failed: Try Generative UI = Individualized UX, published yesterday on his Substack. Nielsen, of course, is one of the most prominent names in digital usability. His words are widely read and, more importantly, widely repeated. As the title of his blog post says, he has determined that the project of making websites accessible has failed and identifies two reasons:
- Accessibility is too expensive.
- Accessibility is doomed to create a substandard user experience.
The solution he proposes is providing people with experiences tailored to their needs, as generated by some kind of artificial intelligence process.
I’m going to quote Nielsen’s post as I criticize it, but I do want to encourage you to read the whole thing if you haven’t already. It’s a great example of well-intentioned but poor accessibility thinking coming from a legend of the industry.
The cost of accessibility
Nielsen writes:
Accessibility is too expensive for most companies to be able to afford everything that’s needed with the current, clumsy implementation. There are too many different types of disabilities to consider for most companies to be able to conduct usability testing with representative customers with every kind of disability. Most companies either ignore accessibility altogether because they know that they won’t be able to create a UX that’s good enough to attract sufficient business from disabled customers, or they spend the minimum necessary to pass simplistic checklists but never run the usability studies with disabled users to confirm or reject the usability of the resulting design.
There are a lot of claims made in this paragraph that are worth examining.
Testing with disabled users
Nielsen is correct about one thing: there are many types of disability!
I would also agree that companies should conduct usability testing with a representative sample of users that includes those with disabilities—something too many companies choose not to do. As an accessibility specialist, I absolutely never tell anyone “this is accessible” because it’s not in my power to make that true. A thing becomes accessible when the individual who tries to use it is successfully able to accomplish their task. Testing with a small group of disabled humans isn’t a guarantee either, but it gets you closer to the mark.
But it’s not necessary to routinely conduct usability testing with every kind of disability, and it’s silly to suggest that this is a barrier holding companies back. Different disabilities impact your interaction with a computer in different ways, and are not always relevant to an interaction pattern you might be testing. Likewise, disabilities may impact your interaction in similar ways, meaning testing with one subset of users may be sufficient to inform your design choices for a broader group.
And you don’t have to take my word for it. That advice is consistent with How to Conduct Usability Studies for Accessibility, a research methodology document written by Kara Pernice and a relatively unknown UX practitioner named Jakob Nielsen. Although it’s over a decade old, the guidelines provided by the authors are still a solid starting point for approaching research with disabled users.
All of that is to say, doing research with people with disabilities requires an understanding of the diversity of experiences associated with their disabilities. This understanding informs what you test, with whom you test, and what questions you ask. But that should be true of user research with all populations. If you don’t already know something about who you’re testing with, you’re probably not ready to put a prototype in front of them.
Heydon Pickering’s recent talk at axe-con feels relevant here too, but that leads to a bigger discussion than I’m ready for in this blog post.
“Simplistic checklists”
Nielsen also dismisses “simplistic checklists,” which I assume is a swipe at the W3C’s Web Content Accessibility Guidelines. On the other hand, I suspect many people wouldn’t call WCAG “simplistic,” so perhaps he was referring to something else? It’s not clear.
In any case, WCAG certainly does have deficiencies. It is not sufficient to ensure an accessible experience for all users. But it’s a well-trodden path of patterns that users are accustomed to, one that ensures some baseline consistency in how websites behave and how assistive technologies are supported.
Beyond that, when used by an experienced practitioner, WCAG is a tool for identifying things beyond just “letter of the law” conformance. WCAG provides a series of pass/fail tests, but as a sum of its parts it also describes a philosophical approach for ensuring accessible outcomes. An example: while success criteria 3.3.4 (Error Prevention: Legal, Financial, Data) and 3.3.6 (Error Prevention: All) have a relatively narrow scope related to user-submitted data, testing for and applying those rules prompts a broader understanding of techniques that preserve a user’s agency over their experience.
The business of accessibility
I also want to highlight a questionable assertion snuck in by Nielsen in this section:
Most companies either ignore accessibility altogether because they know that they won’t be able to create a UX that’s good enough to attract sufficient business from disabled customers
I suspect Nielsen is invited to more meetings with executives than I am, so perhaps he’s right. But companies do lots of technically and organizationally challenging things to capture more of the market. “We would be accessible but those disabled people are just too hard to please” seems unlikely.
If accessible experiences are too expensive for most companies to implement successfully—a claim that I don’t actually believe to be true—then what might be the cause of the trend he’s identifying? Perhaps it’s a broken product development process. When accessibility is an afterthought, flawed design and code decisions may never be addressed until they’re the subject of lawsuits and bad press. Accessibility practitioners talk about the need to “shift left” for exactly this reason. It’s a lot cheaper to fix a problem when it’s spotted on a whiteboard than when it’s spotted in production.
I’ll note that I linked to a “shift left” post from Deque that in turn references Nielsen’s own 10 Usability Heuristics. As my friend Crystal Tenan likes to say, accessibility and usability are best friends. The ideas around usability that (rightly!) made Jakob Nielsen such a powerful influence in our industry are very much compatible with the practices encouraged by accessibility specialists.
Substandard user experiences
Nielsen continues:
Accessibility is doomed to create a substandard user experience, no matter how much a company invests, particularly for blind users who are given a linear (one-dimensional) auditory user interface to represent the two-dimensional graphical user interface (GUI) designed for most users.
He expands on the one-dimensional versus two-dimensional dichotomy later in the post, while pitching his generative UI solution:
Traditionally, the computer made a single graphical user interface to represent the underlying features and data. A sighted user would simply use this GUI directly. A blind user would first employ a screen reader to linearize the GUI and transform it into words. This stream of words would then be spoken aloud for the user to listen to. This indirection clearly produces a terrible user experience: with 2D, the sighted user can visually scan the entire screen and pick out elements of interest. In contrast, the blind user is forced to listen through everything unless he or she employs a feature to skip over (and thus completely miss) some parts.
Nielsen’s complaint here is about the screen reader user experience, which he conflates with all of accessibility. But setting that aside, he’s highlighting an interesting difference in the sighted and non-sighted experience. I actually gave a training presentation around this difference once, and it’s something I like to bring up whenever I can. For a screen reader user interacting with an interface, the sequence in which information and actions are presented matters a lot more than for users with full vision.
Consider a form with important restrictions about when or how to submit entries. If a message that says “Read this before you submit” in a big red box comes immediately after the submit button, there’s a good chance that sighted users will see it, read it, and adjust their behavior accordingly. But screen reader users don’t get the visual cues that the box is there and don’t have any reason to expect critical information is provided after what would otherwise be the end of the workflow. They’re more likely to press the submit button and never know that message ever existed. An experience that’s designed for sighted users isn’t fairly represented to blind users.
In that example, there are lots of technical solutions that don’t rely on over-complicated AI tools—and they may be more or less successful for your users. You could make your big red box keyboard focusable and mess with the tab order using tabindex to ensure it gets announced before the submit button. You could programmatically associate your big red box with one of your form inputs using aria-describedby. You could do any number of other questionable things with ARIA.
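To make that aria-describedby option concrete, here’s a minimal sketch. The markup and copy are hypothetical, not taken from any real form:

```html
<form>
  <label for="entry">Your entry</label>
  <!-- aria-describedby points at the warning, so screen readers
       announce it as this input’s description when it gets focus -->
  <input id="entry" type="text" aria-describedby="submit-warning">

  <button type="submit">Submit</button>

  <!-- Still visually after the button, but now programmatically
       tied to the input above -->
  <p id="submit-warning" class="big-red-box">
    Read this before you submit: entries are final.
  </p>
</form>
```

It works, but the source order is still wrong; we’ve only patched over the symptom.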
But I also have a really wild idea, an outside-the-box totally bonkers approach that I suggest to the designers I work with: Before you design your interface, you could make a plain text outline. Then you can look at your outline and notice that a critical piece of content is in the wrong place in the user’s workflow. And then you could just… move it somewhere better.
Identifying and solving the real problem
Nielsen says “blind users… are given a linear (one-dimensional) auditory user interface to represent the two-dimensional graphical user interface (GUI),” but that’s not actually true! When screen reader software announces the contents of a webpage to blind users, it’s faithfully presenting a linear document—the HTML file that defines the contents of the page—as a linear stream of information. If the process of styling that document for sighted users results in a garbled, unintelligible document, then that’s the problem that should be solved.
And that’s not a problem that needs to be solved with AI. It can instead be solved with a more thoughtful design process:
- Open Microsoft Word or Google Docs before you open Figma or Sketch.
- Outline your page, presenting information and interactive elements in a hierarchy and a sequence order that makes the most sense for how your users will interact with them.
- Preserve that structure when you start your visual design work.
- If you have to deviate from how your outline is organized to achieve a desired visual experience, annotate your work so that the intended structure is still preserved when it’s coded.
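Here’s a minimal sketch of where that process lands for the earlier form example (hypothetical markup again). No ARIA patches required; the critical content simply lives where the outline says it belongs:

```html
<form>
  <label for="entry">Your entry</label>
  <input id="entry" type="text">

  <!-- The warning comes before the submit button in the source,
       so a screen reader announces it before the end of the
       workflow, and sighted users still see the big red box -->
  <p class="big-red-box">
    Read this before you submit: entries are final.
  </p>

  <button type="submit">Submit</button>
</form>
```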
This is the “shift left” ethos in action, and frankly just a good approach to UX design. Well-structured content benefits all users and is an easy practice to learn.
Accessibility is about a lot more than just ensuring screen reader users receive information in a sequence that supports task completion. But most accessibility challenges can be resolved through thoughtful design work. No after-the-fact robots are needed, just processes that incorporate accessibility into the product lifecycle from an early stage.
Generative UI and individualized UX
Suppose you wanted to use those robots anyway. Here’s what Nielsen proposes:
“Generative UI” is simply the application of artificial intelligence to automatically generate user interface designs, leveraging algorithms that can produce a variety of designs based on specified parameters or data inputs. Currently, this is usually done during the early stages of the UX design process, and a human designer further refines the AI-generated draft UI before it is baked into a traditional application. In this approach, all users see the same UI, and the UI is the same each time the app is accessed. The user experience may be individualized to a small extent, but the current workflow assumes that the UI is basically frozen at the time the human designer signs off on it. I suggest the term “first-generation generative UI” for frozen designs where the AI only modifies the UI before shipping the product.
I foresee a much more radical approach to generative UI to emerge shortly — maybe in 5 years or so. In this second-generation generative UI, the user interface is generated afresh every time the user accesses the app. Most important, this means that different users will get drastically different designs. This is how we genuinely help disabled users. But freshly generated UIs also mean that the experience will adapt to the user as he or she learns more about the system. For example, a more simplified experience can be shown to beginners, and advanced features surfaced for expert users.
Moving to second-generation generative UI will revolutionize the work of UX professionals. We will no longer be designing the exact user interface that our users will see, since the UI will be different for each user and generated at runtime. Instead, UX designers will specify the rules and heuristics the AI uses to generate the UI.
He continues:
With generative UI, an AI accesses the underlying data and features and transforms them into a user interface that’s optimized for the individual user. This will likely be a GUI for a sighted user, and for a blind user, this will be an auditory user interface. Sighted users may get similar-looking UIs to what they previously had, though the generative UI will be optimized for this user with respect to reading levels and other needs. For the blind user, the generative UI bypasses the representation of data and features in a 2-D layout that will never be optimal when presented linearly.
Besides creating optimized 1-D representations for blind users, generative UI can also optimize the user experience in other ways. Since it is slower to listen than to visually scan text, the version for blind users can be generated to be more concise. Furthermore, text can be adjusted to each user’s reading level, ensuring easy comprehension for everybody.
In other words, the future Nielsen imagines goes like this:
- User navigates to a website.
- Some kind of automated tool detects information about the user, including their disability status.
- The automated tool generates a user interface that’s most appropriate for the user. This includes rewriting and omitting content, or perhaps even removing entire interface elements.
- The user has a happy experience completing whatever user flow is supported by the interface provided.
Beyond the hand-waving that often happens when people talk about artificial intelligence, I don’t think this will work. I also don’t think it should work. You may sense my concerns from how I’ve restated Nielsen’s proposal, but I’ll make those concerns a little more explicit.
Challenges of detecting disability
We have a major technical hurdle right from the top: There is no reliable method by which an automated tool can detect a user’s disability status. Assistive technologies are not typically detectable by script, and not all disabled users are users of assistive technologies. Even users who often use assistive technologies may not use them all of the time, or may not use them as their only mechanism for interacting with the page.
Example: A low-vision user may use a mix of screen magnification, screen readers, and no assistive technology at all—potentially mixing or switching interaction methods within a single interface. Humans are complicated, and detecting their preferred mode of interaction seems tricky (let alone making automated choices for what interface is “right” for them).
Likewise, many disabled users do not use assistive technologies. Nielsen suggests that content could be rewritten by AI to match the user’s comprehension level, but he doesn’t provide any mechanism for identifying those users’ needs. How do you detect a learning disability via JavaScript? For that matter, how do you detect color blindness? How do you detect a repetitive stress injury?
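For what it’s worth, the closest a script can get is reading preferences the user has explicitly declared to their browser or operating system. A sketch, using real media features (the variable names are mine):

```js
// The only “detection” available to a script is a handful of
// user-declared preferences exposed as CSS media features.
// None of them reveal whether the user is disabled.
const reducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)').matches;
const moreContrast =
  window.matchMedia('(prefers-contrast: more)').matches;
const forcedColors =
  window.matchMedia('(forced-colors: active)').matches;

// There is no media feature for “has a learning disability,”
// “is color blind,” or “has a repetitive stress injury.”
```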
So “disabled” is not synonymous with “assistive technology user,” and that becomes especially clear when you start considering the world of temporary and situational disabilities. Nielsen’s project needs some way of collecting disability data from users themselves.
The first and easiest option would be to just ask them! Perhaps you could indicate your disability status on some kind of browser settings page. But many people would refuse to disclose their disabilities without a clear reason to do so, because “disabled” is a culturally loaded term. The United States Centers for Disease Control and Prevention summarizes this concisely:
- Stereotyping: People sometimes stereotype those with disabilities, assuming their quality of life is poor or that they are unhealthy because of their impairments.
- Stigma, prejudice, and discrimination: Within society, these attitudes may come from people’s ideas related to disability—People may see disability as a personal tragedy, as something that needs to be cured or prevented, as a punishment for wrongdoing, or as an indication of the lack of ability to behave as expected in society.
There is power in defining your own identity, and many people with disabilities do assert their identity and build community around their experience. But users who fear discrimination, or users who just choose not to identify with a particular label, may be unwilling to volunteer the data needed to make generative UI work.
If the users won’t provide it for us, then there’s a temptation to try to collect that data by monitoring user behavior across many websites and identifying patterns that correspond with disabilities. This kind of surveillance program runs into massive data privacy problems, especially in the European Union. It would also be, to my mind, a clearly unethical approach.
Paternalism
If the goal is to provide users with an individualized user experience that meets their needs, then the tools for achieving this already exist. Right now, users can:
- Listen to a website
- Speak to a website
- Read a website in large print
- Read a website with distractions removed
- Explore a website with personalized colors, typography, and layouts
- Enter data and interact with a website in ways that match their body
These things are called assistive technologies: screen readers, voice recognition software, browser zoom, Reader Mode, custom style sheets, and a broad class of alternative input devices.
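As one small example of how far user control already extends: a custom style sheet is just user-authored CSS, and in the cascade, a user’s !important declarations override the site’s own styles. A hypothetical user style sheet might look like this:

```css
/* A user style sheet: user !important rules win the cascade,
   overriding whatever the site specifies. */
body {
  font-family: "Atkinson Hyperlegible", sans-serif !important;
  font-size: 150% !important;
  line-height: 1.6 !important;
  max-width: 60ch !important;
  margin: 0 auto !important;
}
```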
All of these things depend on well-structured HTML designed and built with accessibility in mind. These tools can be made better, and there may even be ways that artificial intelligence can make them better too. But the bold vision that Nielsen describes—users getting the interface that’s right for them—already exists. It’s just determined at the user level, with individuals choosing the right interaction model on their own.
Except that’s not actually Nielsen’s bold vision at all. What he’s describing is much more problematic:
Besides creating optimized 1-D representations for blind users, generative UI can also optimize the user experience in other ways. Since it is slower to listen than to visually scan text, the version for blind users can be generated to be more concise. Furthermore, text can be adjusted to each user’s reading level, ensuring easy comprehension for everybody.
In other words, some users get the full experience, the one with all the words, all the context, and all the options. But if Nielsen’s AI thinks you have a disability, you’ll get a different experience, a simpler experience that’s more appropriate for people like you. It’s an ugly kind of paternalism with a new AI twist. Since it’s just too hard to follow accessibility best practices—plain language, a coherent document structure, support for a broad set of user agents—let’s just design UIs for us regular people and let the algorithm dumb it down for the edge cases.
That may be a particularly unkind reading of what Nielsen writes. But it’s a reading that’s informed by the long history of discrimination against people with disabilities. The societal assumption that disabled people need able-bodied people to decide what’s best for them gave rise to the Independent Living movement. The central tenets of that movement—self-determination and self-respect—are deeply embedded in modern digital accessibility.
What accessibility is, really
Early in his post, Nielsen takes a swipe at people who work in accessibility:
Where I have always differed from the accessibility movement is that I consider users with disabilities to be simply users. This means that usability and task performance are the goals. It’s not a goal to adhere to particular design standards promulgated by a special interest group that has failed to achieve its mission.
I actually think “accessibility movement” is kind of funny, as is the claim that we’re a “special interest group.” Both of those imply a level of cohesion that doesn’t seem possible in an industry where our most-used phrase is “it depends.”
Regardless, I think this reveals a fundamental misconception around what accessibility is about.
Nielsen’s statement that he “considers users with disabilities to be simply users” rhymes nicely with the “I don’t see color” approach to addressing racism. The goal is for everyone to be treated equally, so let’s treat everyone equally: “usability and task performance are the goals.”
The reality of the world we live in is that disability matters to people’s lived experiences. It isn’t the only thing that matters, and it may not matter in every situation or every context. But your disability may impact your ability to earn a living, your legal status, how you interact with others socially, and how you use technology.
This is true for every marginalized group. This is human diversity. Collapsing everything down into “all users are just users and the only thing that matters is task completion” ignores reality. And if you don’t put in the effort to understand your users and their needs, you might not notice when your designs fail them.
Accessibility as a practice is not, as Nielsen says, just “[adhering] to particular design standards.” It is an effort to understand how people with disabilities use technology and ensure that the things we build will work for them too. It’s an extension of the design approaches Nielsen pioneered, but applied to a particular class of users who might otherwise be left out. We do adhere to standards like WCAG, but those standards are informed by the technologies and behaviors that have been observed.
Closing thoughts
Nielsen’s post made me angry, and I’ll acknowledge that I have some self-interest in preserving accessibility as a discipline. I work for a company that takes accessibility seriously, and that company works in an industry whose regulatory requirements prioritize accessibility. But it’s easy to imagine my peers working in other spaces having to justify their employment to a manager who says “well Jakob Nielsen says AI will solve this.”
But beyond my own self-interest, I’m honestly surprised by the idea promoted by Nielsen in his post. His argument boils down to: “Accessibility is too hard for designers. Let’s just give it to AI and wash our hands of the whole thing.” It shows a complete lack of faith in the whole idea of design as a way to solve problems, and a lack of faith in UX designers to understand disability and make informed design choices.