A List Apart


Issue № 225

Working with Others: Accessibility and User Research

Published in Accessibility

The Web Content Accessibility Guidelines (WCAG) 1.0 are the W3C’s official standards for producing accessible web content. The Web Content Accessibility Guidelines Working Group does not publish information about what user research its members used to create WCAG 1.0. Similarly, many of the web’s hundreds of accessibility experts do not conduct—or at least do not cite—research that validates their advice.

After personally observing users with disabilities interacting with websites in unexpected ways, I have come to believe strongly in the value of user research—and to suspect that we really don’t know quite as much about real-world accessibility as we think we do.

The missing link

Since the WCAG-WG doesn’t publicly list the studies on which its recommendations are based, I asked a reputable member of WCAG-WG what kind of user research WCAG 1.0 was based upon. He answered that “WCAG are based on many things,” which sounded good, but didn’t really answer the question. Exactly what were those “many things”?

In a later response, the working group member cited the well-known Nielsen Norman Group research report on users with disabilities. The problem is that the NNG study is dated 2001, but WCAG 1.0 was published in 1999.

So we have no user research officially mentioned in our beloved guidelines, and my attempt to get this information directly from the source came to nothing. We may assume that user studies were indeed used in the making of WCAG 1.0, but we can’t examine them ourselves. Furthermore, this lack of publicly discussed research has produced a conversation focused heavily on technical points, with scarcely any talk about real-world user behavior.

The following examples describe a few of the puzzling uses of the web I observed that are not covered by the WCAG guidelines. They are merely personal observations based on only a few users, but even this limited sample suggests that current accepted wisdom on content accessibility is incomplete.

title and h1

As I observed a blind web user navigate through a few pages, he reported that hearing the h1 content at the top of the page was boring and redundant for him. Because his screen reader read the content of the title element first, the title element served as the actual title of the document for him, and the h1—which merely repeated the content of the title element—was useless. Of course, this was only true when the title element contained useful and pertinent information.

Given this information, a good guideline might suggest that the title element contain basic orientation information, including the name of the site and of the specific page in the site. The h1 should then be preceded by links to the main areas of the document, like “go to: content, main navigation, secondary navigation, footer,” to allow blind users to skip potentially redundant information (a repetitive h1).

WCAG doesn’t explicitly say this; the guidelines say that “repeated groups of links” should present a skip link. This may be true, but it isn’t enough, and even very rudimentary user testing uncovers a need for more detailed guidelines in this area.
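
To make this concrete, here is a minimal sketch of the kind of structure described above. It is my own illustration, not a pattern taken from WCAG or from the test sessions; the page name, ids, and link wording are hypothetical.

<html lang="en">
<head>
  <!-- The title carries the orientation information a screen reader announces first -->
  <title>Shipping rates - Example Store</title>
</head>
<body>
  <!-- Links to the main areas of the document, placed before the h1 so a blind
       user can skip content that merely repeats the title -->
  <ul>
    <li><a href="#content">Go to content</a></li>
    <li><a href="#mainnav">Go to main navigation</a></li>
    <li><a href="#subnav">Go to secondary navigation</a></li>
    <li><a href="#footer">Go to footer</a></li>
  </ul>

  <div id="content">
    <h1>Shipping rates</h1>
    <!-- ... page content ... -->
  </div>

  <ul id="mainnav"><!-- main navigation links --></ul>
  <ul id="subnav"><!-- secondary navigation links --></ul>
  <div id="footer"><!-- footer --></div>
</body>
</html>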

What do you mean, nav should come first?

The same blind user exhibited an unexpected behavior as he attempted to find a specific link on a web page. He knew what information he was looking for and expected it to appear among the first few links on the page. Because the page’s navigation block came after the content, he listened to the first few links within the content and followed the one that sounded “not so bad” to him.

This is an important behavior to note, because the link the user was seeking was actually in the navigation section, below the content section. During his session he never reached the nav section, because his navigation strategy was based on the assumption that the main links would be at the beginning of the page. Much conventional accessibility advice states that page content blocks should be presented first, but some actual user research suggests that screen-reader and text-browser users expect navigation to come first. This doesn’t mean that navigation should always come first in practice, but it does demonstrate that research sometimes uncovers faulty assumptions about accessibility.

And of course, content order is not covered by WCAG 1.0 at all.
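
The article doesn’t prescribe a fix, but one way to hedge against either expectation is to keep whatever source order you prefer and add an explicit link to the block that comes second. A rough sketch; the ids and link text are my own, hypothetical choices.

<body>
  <!-- Content comes first in source order, matching common accessibility advice;
       the link at the top lets users who expect navigation first jump to it -->
  <p><a href="#nav">Go to navigation</a></p>

  <div id="content">
    <h1>Page title</h1>
    <!-- ... main content ... -->
  </div>

  <ul id="nav">
    <li><a href="/">Home</a></li>
    <li><a href="/products/">Products</a></li>
    <!-- ... more navigation links ... -->
  </ul>
</body>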

Size matters, but so does boldness

Another example pertains to low-vision users. A few years ago, I asked Franco Frascolla, an expert on the computing problems faced by low-vision users who has limited vision himself, to review a site I was working on.

To my surprise, Franco told me the text couldn’t be sufficiently enlarged in some areas of my site. When I tried to compare my site with others that he judged to work properly, I had a hard time figuring out what the problem was. At the default size, the text on “good” and “bad” pages often looked similar, but after a few trials, I had an insight. The problem wasn’t only the size of the text, but also the boldness of the characters at the enlarged size as rendered by Internet Explorer for Windows.

Internet Explorer for Windows is the only browser that puts an upper limit on how far you can enlarge website text: it offers just five text-size levels. Unfortunately, IE is the browser most widely used by low-vision users, especially those who aren’t computer experts. If the text is still not large enough at the “Largest” setting, low-vision users simply cannot read it.

But it turns out that while size matters, so does boldness: enlarged text that is not bold is less readable than text of the same size that is bold. Compare the text in the following image and you’ll see what I mean.

Screenshot: Google as viewed in IE/Win with the highest text magnification allowed by that browser.

In this example, most of the text is bold or extra-bold—but not all of it. “Advanced search,” “Preferences,” “Language Tools,” and “(c) 2006 Google” are large, but not bold enough. They’re not readable for most low-vision users. And how about the button labels?

It’s easy to tell whether a particular site passes this test: view it in IE6, choose View > Text Size > Largest, and ask yourself, “Is all my text bold now?” If, for example, your footer text gets larger but never becomes bold, it won’t be readable for most low-vision users.
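
The article doesn’t prescribe a CSS fix, but two things make this informal test easier to pass: size text in relative units, since IE6’s Text Size menu does not enlarge text specified in pixels, and give small secondary text enough weight to stay legible when it is blown up. A minimal sketch, with selectors and values that are my own assumptions rather than anything from the site under review:

<style type="text/css">
  /* Relative units let IE6's View > Text Size > Largest actually enlarge
     the text; IE6 will not resize text set in px */
  body {
    font-size: 100%;     /* respect the user's default size */
  }

  #footer, .legal {
    font-size: 0.85em;   /* still relative, so it grows with the text-size setting */
    font-weight: bold;   /* heavier strokes remain legible at high magnification */
  }
</style>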

Is there an official guideline that covers this very basic problem for low-vision users? Not at all. Low-vision users are covered neither by Section 508 nor by Italy’s national accessibility law. Even basic user testing, though, can uncover problems like these.

The need for user research about accessibility

The above examples come from my personal experience observing web users with disabilities. Are they extreme examples of users in very rare situations? Probably not. We don’t know exactly how many users are affected by these problems, but they are likely quite common.

Given that the W3C has spent more than eight years discussing WCAG 1.0 and 2.0, I expected these—and probably many other common situations—to be addressed by our guidelines, but they are not.

So many experts, so little research

What’s the reason for this lack of official, testing-based guidelines? I don’t think it’s because we are deliberately omitting things. I think it’s because we, as experts, are using the wrong method. How do we assess our guidelines? By discussion, for the most part. This might be a good method for many technical recommendations, but it may not be the right method for establishing guidelines concerning real-life user experience. I propose that the right method would be observing users with disabilities, talking with them, and conducting both formal and informal research with them. We could document the research so that it would be replicable, and publish the results so that we can stop relying on dubiously researched assumptions.

Why hasn’t this been done, at least in any visible fashion? I think it’s due to the technical background of most people participating in the discussion. Budget issues may also discourage research, but plenty of user research has been done inexpensively in the past.

A sociologist might say that it could also be a matter of politics: of power. User research is sometimes counterintuitive, and its results may call some of our assumptions into question. We may need to reorganize our thinking and rewrite our guidelines based on real user experience.

What do we need? Testing! When do we need it? Now!

Regardless of the reasons for this lack of attention to the real user experience, it’s important to start doing more user research with disabled users now. We need to improve our understanding of what’s important in accessibility and to include that understanding in our guidelines. Doing so would also help us evaluate why some surveys of disabled users produce results so far from our expectations.

In short, we need less discussion and more user research. Especially when our guidelines form the basis of national laws, we need to ensure that they’re founded on real user experience. And in the meantime, accessibility experts, let’s conduct—and publish—more user research to support our recommendations.
