A List Apart

Deafness and the User Experience

Issue № 265

Published in Industry, Accessibility, Interaction Design, Usability

How many times have you been asked this question: if you had to choose, would you rather be deaf or blind? The question illustrates the misconception that deafness is in some way the opposite of blindness—as though there’s some sort of binary representation of disability. When we look at accessible design for the deaf, it’s not surprising to see it addressed in a similar fashion: for most designers, audio captioning is pretty much the equivalent of alt text on images.

Captioning by itself oversimplifies the matter and fails many Deaf people. To provide better user experiences for the Deaf, we need to stop thinking of deafness as simply the inverse of hearing—we need to understand deafness from both a cultural and linguistic perspective. Moreover, to enhance the online user experience for the deaf, we must understand how deafness influences web accessibility.

Little “d” deaf and big “D” Deaf: the distinction

You might have noticed that I’ve been switching between little “d” deaf and big “D” Deaf in this article. It’s an important distinction—one that the Deaf community makes regularly.

Little “d” deaf describes anyone who is deaf or hard of hearing (HOH) but does not identify with the Deaf community. The Deaf community uses big “D” Deaf to distinguish themselves as being culturally Deaf.

The Deaf community is considered to be a linguistic and cultural minority group, similar to an ethnic community. Just as we capitalise the names of ethnic communities and cultures (e.g., Italian, Jewish), we capitalise the name of the Deaf community and culture. Since not all people who are physically deaf use Auslan and identify with the Deaf community, the d in deaf is not capitalised when we are referring to all deaf people or to the physical condition of not hearing.

The Australian Deaf Community is a network of people who share a language and culture and a history of common experiences.

Australian Association of the Deaf

Collective deafness

An interesting thing has happened on the web in the last 18 months—the web community has become more aware of deafness and how it influences accessible design practices.

First, Joe Clark launched The Open & Closed Project (OCP) in November 2006. Second, in early April, the OCP launched the Captioning Sucks! site.

The Open & Closed Project suggests two methods of presenting accessible media for the deaf and hard of hearing:

  • Captioning is the transcription of speech and important sound effects.
  • Subtitling is a written translation of dialogue.

Consider Wikipedia’s definitions of transcription and translation:

  • Transcription is the conversion of a spoken language source, such as the proceedings of a court hearing, into written, typewritten, or printed form. It can also mean the conversion of a written source into another medium, such as scanning books and making digital versions.
  • Translation is the interpretation of the meaning of a text and the subsequent production of an equivalent text, also called a translation, that communicates the same message in another language.

Captioning and subtitling rely on written language to convey information.

As a transcription, captioning is simply the written form of spoken words and sound effects, including slang, colloquialisms, modifiers, and wordplay—which, as we’ll see below, can be very difficult for deaf, HOH, and Deaf people who struggle with English as a second language.

Subtitling, which is a translation, provides an opportunity to use words that are closer to the signs a Deaf person would use. However, it is important to note that typically, native sign languages have no natural written form.

It’s great that The OCP and Captioning Sucks! sites have drawn attention to deafness and accessible media, but it is important to understand that there is more we can do—particularly for the Deaf and hard of hearing audience.

Don’t get me wrong; research into captioning and subtitling is important and will, no doubt, improve access to information for many people—not just deaf, HOH, and Deaf people. Captioning and subtitling improve the user experience of cinema, television, and the web for all kinds of people: anyone in a noisy environment, office workers in beehive cubicles, migrants, teens addicted to earbuds, anyone with partial hearing, and even Deaf people.

But the Open & Closed Project doesn’t address the needs of the big “D” Deaf community as well as many people think it does. Maybe it isn’t supposed to. But it’s important to understand why captioning isn’t the ideal method of supporting many Deaf people in accessing online content. Until the web community understands why, we won’t be able to address the problem adequately.

Because of limited awareness around Deafness and accessibility in the web community, it seems plausible to many of us that good captioning will fix it all. It won’t. Before we can enhance the user experience for all deaf people, we must understand that the needs of deaf, hard of hearing, and Deaf users are often very different.

It’s a visual thing

Native sign languages aren’t simply a gestural representation of spoken language; sign language is a visual-spatial language with no natural written form. Its grammar and syntax are very different from those of spoken languages, and it relies heavily on facial expression to convey essential meaning and emphasis. While many Australian Deaf people, for example, use English as a second language, Auslan (Australian Sign Language) is their primary language. For this reason, it’s important to recognize Deafness primarily as a culture rather than a disability.

During a language class, a Deaf teacher once told me:

We are not disabled and Deafness is not a disability; it’s the perception of many hearing (people) that we are disabled, and that is our disability.

Rather than thinking of Deaf users as disabled, simply understand that the dominant language in their country is not necessarily their primary language.

Phonetics, slang, and wordplay present challenges

What does a phonetically based language mean to a Deaf person? The word “comfortable” is a great example of this. An old joke often shown to hearing sign language students is the mythical sign “come-for-table.” Pronounced quickly, it sounds like “comfortable,” but when signed it could literally mean “have you come for the table?” and never “comfortable.”

Consider also the phrase once in a blue moon, which means “occasionally” or “every now and then.” When taken literally, the meaning becomes ambiguous and even confusing. Think too about the way we use language in e-mails, text messages, and even advertising. Much of our shorthand and many of our colloquialisms are based on phonetics. For example, with CU l8tr, “C” sounds like “see,” but it doesn’t look like it. Jokes that rely on a play on words can have similar problems. Take, for example, one of my favorites:

Did you hear about the prawn that walked into a bar and pulled a mussel?

In hearing this joke, pulled a mussel could easily mean strained a muscle or dragged a mussel, but what it actually means here is “picked up” or “met.” So as you can see, it’s not hard for meaning to become confused.

Lost in transcription and translation

Let’s suppose we’re talking about providing accessible content for an English television sitcom with a Deaf audience.

Captioning is perfect for the post-lingual deaf or hard-of-hearing audience; it presents content in an accessible format, in the primary language of the user. However, because captioning is a transcription, for the Deaf audience content is presented in the user’s second language, one with which the user may have little or no fluency. While captioning provides better access to content for the Deaf than none at all, it’s important to remember that there is a big difference between the needs of those who can’t hear (deaf) and those who speak another language altogether (Deaf).

In “What Really Matters in the Early Literacy Development of Deaf Children,” [1] Connie Mayer cites several studies that address the literacy gap present in the Deaf community:

Yet it remains the case that 50% of deaf students graduate from secondary school with a fourth grade reading level or less, [2] and 30% leave school functionally illiterate. [3]

The frequently reported low literacy levels among students with severe to profound hearing impairment are, in part, due to the discrepancy between their incomplete spoken language system and the demands of reading a speech-based system. [4]

Keep in mind, too, that English is often said to have more synonyms than any other language; signed languages have very few in comparison. Sign language relies heavily on facial expressions and body language to give meaning to language. So where we would say, “careful, the pie is extremely hot,” we might sign, “careful, the pie is very hot,” with a more pronounced facial expression on “very” to convey extreme heat. What this means is that a user with low to moderate fluency in English has to concentrate a lot harder, particularly when dialogue (captioning) is moving quickly.

Thus, captioning alone, as a transcription of spoken English, complete with its slang, colloquialisms, and wordplay, is not a perfect solution to the problem of creating accessible websites for the Deaf.

Alternatively, if we employ subtitling, we’re providing a written translation of a language for which there is no written form. (And therein lies the problem.) So how do we best provide a written translation for a language that has no written form? We provide sign language interpreting instead, as is sometimes seen on news broadcasts and current affairs programs. Where this isn’t possible, subtitles for the Deaf and hard of hearing, with notations for sound effects, would be most accessible.

There seems to be a perception by some people that subtitles for the Deaf use dumbed-down language. However, I’ve always perceived the language to be based on the English equivalent of the signs that would have been used had an interpreter been present. Of course this means that the grammar continues to follow an English pattern, but it seems to me that the subtitles are likely to be more accessible to a wider audience.

So what’s the solution?

As with most things, there isn’t a single fix-all solution to the issue. However, as socially conscious designers, we’ve worked to understand the issues. Now we can make an honest attempt at addressing them.

Writing for the web

Taking heed of all those Writing for the Web 101 tips you’ve seen is a good place to start and will enhance site readability for a wide range of users, including the deaf. Sign language is a very direct language, where the main point is stated first and then expanded upon—much like the “inverted pyramid” or journalistic style of writing that we so often recommend for writing on the web. Some other considerations are:

  • Use headings and subheadings.
  • Write in a journalistic style: make your point and then explain it.
  • Make one point per paragraph.
  • Use short line lengths: seven to ten words per line.
  • Use plain language whenever possible.
  • Use bulleted lists.
  • Write with an active voice.
  • Avoid unnecessary jargon and slang, which can increase the user’s cognitive load.
  • Include a glossary for specialized vocabulary, e.g., medical or legal terminology, and provide definitions in simpler language.

Language learners, or anyone doing the usual page scan for highlights, will benefit—and users with cognitive and learning disabilities will find it helpful too. As with all web documents, the content should be marked up with standards-focused, semantic, valid HTML, as in the sketch below.
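A minimal sketch of what that kind of markup might look like (the page topic, headings, and glossary entry here are purely illustrative):

  <h1>Booking a sign language interpreter</h1>

  <h2>Before you start</h2>
  <!-- Main point first, one point per paragraph -->
  <p>Book your interpreter at least two weeks before the event.</p>

  <ul>
    <li>Date, time, and venue of the event</li>
    <li>Topic of the presentation</li>
    <li>Preferred sign language (e.g., Auslan)</li>
  </ul>

  <h2>Glossary</h2>
  <dl>
    <dt>Auslan</dt>
    <dd>Australian Sign Language, the sign language of the Australian Deaf community.</dd>
  </dl>

Headings break the content into scannable sections, the bulleted list keeps each point short and parallel, and the definition list explains specialized vocabulary in plain language.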

Multimedia

Where possible, for web-based multimedia, the ideal solution is to incorporate sign language interpretation with the video as picture-in-picture, as this provides a synchronized presentation. However, this can be a very time-consuming and costly process. And as sign language is specific to certain regions, it will be more appropriate in some situations than others. As an alternative, sign language interpreting can be recorded and provided in addition to the audio and transcript or captioning.

Alternatively, a combination of captioning (to transcribe sound effects) and subtitling (a written translation, with a focus on users who have sign as a primary language) is most effective. Where this isn’t possible, a transcript of the dialogue will suffice; transcripts give users the opportunity to print out the dialogue and read it at a comfortable pace.
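As a rough sketch of how captions, subtitles, and a transcript can be offered together for web video, using the HTML5 video and track elements (the file names and labels below are hypothetical):

  <video controls>
    <source src="episode-01.mp4" type="video/mp4">

    <!-- Captions: a transcription of the dialogue plus important sound effects -->
    <track kind="captions" src="episode-01.captions.vtt" srclang="en" label="English (captions)">

    <!-- Subtitles: a written translation aimed at viewers whose first language is a sign language -->
    <track kind="subtitles" src="episode-01.deaf-subtitles.vtt" srclang="en" label="English for Deaf viewers">
  </video>

  <!-- A printable transcript offered as a further alternative -->
  <p><a href="episode-01-transcript.html">Read the full transcript</a></p>

A picture-in-picture interpretation still needs to be edited into the video itself, or published as a separate recorded interpretation alongside it, as described above.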

Remember that the purpose of subtitling is to convey meaning, not to test the language skills of the audience. It is more important to convey the meaning and sentiment of audio content than to transcribe it verbatim.

Take action now

Transcribe all conference podcasts and make the content available in an accessible format. Organize an interpreter for your next presentation—record the interpretation and make it available online. Read one of the books listed below. Most importantly, whenever you have the chance, get to know your local Deaf community. I’ll be surprised if that doesn’t make you want to learn a few signs yourself.

Suggested Reading

Nora Ellen Groce—Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard (Harvard University Press, 1985).

Harlan Lane—When the Mind Hears (Vintage, 1989) and The Wild Boy of Aveyron (Harvard University Press, 1979).

Oliver Sacks—Seeing Voices: A Journey into the Land of the Deaf (University of California Press, 1989).

References

[1] Mayer, C. “What Really Matters in the Early Literacy Development of Deaf Children.” Journal of Deaf Studies and Deaf Education 12.4 (2007): 411–431.

[2] Traxler, C. “The Stanford Achievement Test, 9th Edition: National Norming and Performance Standards for Deaf and Hard-of-Hearing Students.” Journal of Deaf Studies and Deaf Education 5.4 (2000): 337–348.

[3] Marschark, M., Lang, H., Albertini, J. Educating Deaf Students: From Research to Practice. New York: Oxford University Press, 2002.

[4] Geers, A. “Spoken Language in Children with Cochlear Implants.” Advances in Spoken Language Development of Deaf and Hard of Hearing Children, edited by P. Spencer and M. Marschark. New York: Oxford University Press, 2006. 244–270.
