Deafness and the User Experience
Issue № 265

How many times have you been asked: if you had to choose, would you prefer to be deaf or blind? The question illustrates the misconception that deafness is in some way the opposite of blindness—as though there’s some sort of binary representation of disability. When we look at accessible design for the deaf, it’s not surprising to see it addressed in a similar fashion: audio captioning is pretty much the equivalent of alt text on images for most designers.

Captioning by itself oversimplifies the matter and fails many Deaf people. To provide better user experiences for the Deaf, we need to stop thinking of deafness as simply the inverse of hearing—we need to understand deafness from both a cultural and linguistic perspective. Moreover, to enhance the online user experience for the deaf, we must understand how deafness influences web accessibility.

Little “d” deaf and big “D” Deaf: the distinction

You might have noticed that I’ve been interchanging little “d” deaf and big “D” Deaf in this article. It’s an important distinction—one that the Deaf community makes regularly.

Little “d” deaf describes anyone who is deaf or hard of hearing (HOH) but does not identify with the Deaf community. The Deaf community uses big “D” Deaf to distinguish themselves as being culturally Deaf.

The Deaf community is considered to be a linguistic and cultural minority group, similar to an ethnic community. Just as we capitalise the names of ethnic communities and cultures (e.g., Italian, Jewish), we capitalise the name of the Deaf community and culture. Since not all people who are physically deaf use Auslan and identify with the Deaf community, the d in deaf is not capitalised when we are referring to all deaf people or to the physical condition of not hearing.

The Australian Deaf Community is a network of people who share a language and culture and a history of common experiences.

Australian Association of the Deaf

Collective deafness

An interesting thing has happened on the web in the last 18 months—the web community has become more aware of deafness and how it influences accessible design practices.

First, Joe Clark launched the Open & Closed Project (OCP) in November 2006. Second, in early April, the OCP launched the Captioning Sucks! site.

The Open & Closed Project suggests two methods of presenting accessible media for the deaf and hard of hearing:

  • Captioning is the transcription of speech and important sound effects.
  • Subtitling is a written translation of dialogue.

Consider Wikipedia’s definitions of transcription and translation:

  • Transcription is the conversion into written, typewritten or printed form, of a spoken language source, such as the proceedings of a court hearing. It can also mean the conversion of a written source into another medium, such as scanning books and making digital versions.
  • Translation is the action of interpretation of the meaning of a text, and subsequent production of an equivalent text, also called a translation, that communicates the same message in another language.

Captioning and subtitling rely on written language to convey information.

As a transcription, captioning is simply the written form of spoken words and sound effects, including slang, colloquialisms, modifiers, and wordplay—which, as we’ll see below, can be very difficult for deaf, HOH, and Deaf people who struggle with English as a second language.

Subtitling, which is a translation, provides an opportunity to use words that are closer to the signs a Deaf person would use. However, it is important to note that typically, native sign languages have no natural written form.

It’s great that The OCP and Captioning Sucks! sites have drawn attention to deafness and accessible media, but it is important to understand that there is more we can do—particularly for the Deaf and hard of hearing audience.

Don’t get me wrong; research into captioning and subtitling is an important thing that will, no doubt, improve access to information for many people—not just deaf, HOH, and Deaf people. Captioning and subtitling improve the user experience of cinema, television, and the web for all kinds of people: anyone in a noisy environment, office workers in beehive cubicles, migrants, teens addicted to earbuds, anyone with partial hearing, and even Deaf people.

But the Open & Closed Project doesn’t address the needs of the big “D” Deaf community as well as many people think it does. Maybe it isn’t supposed to. But it’s important to understand why captioning isn’t the ideal method of helping many Deaf people access online content. Until the web community understands why, we won’t be able to address it adequately.

Because of limited awareness around Deafness and accessibility in the web community, it seems plausible to many of us that good captioning will fix it all. It won’t. Before we can enhance the user experience for all deaf people, we must understand that the needs of deaf, hard of hearing, and Deaf users are often very different.

It’s a visual thing

Native sign languages aren’t simply gestural representations of spoken language; they are visual-spatial languages without a natural written form. Their grammar and syntax are very different from those of spoken languages, and they rely heavily on facial expression to convey essential meaning and emphasis. While many Australian Deaf people, for example, use English as a second language, Auslan (Australian Sign Language) is their primary language. For this reason it’s important to recognize Deafness primarily as a culture, rather than a disability.

During a language class, a Deaf teacher once told me:

We are not disabled and Deafness is not a disability; it’s the perception of many hearing (people) that we are disabled, and that is our disability.

Rather than thinking of Deaf users as disabled, simply understand that the dominant language in their country is not necessarily their primary language.

Phonetics, slang, and wordplay present challenges

What does a phonetics-based language mean to a Deaf person? The word “comfortable” is a great example. An old joke often shown to hearing sign language students is the mythical sign “come-for-table.” Pronounced quickly, it sounds like comfortable; but when signed, it could literally mean “have you come for the table?” and never “comfortable.”

Consider also the phrase once in a blue moon, which means “occasionally” or “every now and then.” When taken literally, the meaning becomes ambiguous and even confusing. Think too about the way we use language in e-mails, text messages, and even advertising. Much of our shorthand and many of our colloquialisms are based on phonetics. For example, with CU l8tr, “C” sounds like “see,” but it doesn’t look like it. Jokes that rely on a play on words can have similar problems. Take, for example, one of my favorites:

Did you hear about the prawn that walked into a bar and pulled a mussel?

In hearing this joke, pulled a mussel could easily mean strained a muscle or dragged a mussel, but what it actually means here is “picked up” or “met.” So as you can see, it’s not hard for meaning to become confused.

Lost in transcription and translation

Let’s suppose we’re talking about providing accessible content for an English television sitcom with a Deaf audience.

Captioning is perfect for the post-lingual deaf or hard-of-hearing audience; it presents content in an accessible format, in the primary language of the user. However, as captioning is a transcription, for the Deaf audience content is presented in the user’s second language, one with which the user may have little or no fluency. While captioning provides better access to content for the Deaf than if there were none, it’s important to remember that there is a big difference between the needs of those who can’t hear (deaf) and those who speak another language altogether (Deaf).

In “What Really Matters in the Early Literacy Development of Deaf Children,” [1] Connie Mayer cites several studies that address the literacy gap present in the Deaf community:

Yet it remains the case that 50% of deaf students graduate from secondary school with a fourth grade reading level or less, [2] and 30% leave school functionally illiterate. [3]

The frequently reported low literacy levels among students with severe to profound hearing impairment are, in part, due to the discrepancy between their incomplete spoken language system and the demands of reading a speech-based system. [4]

Keep in mind, too, that English is often said to have more synonyms than any other language; signed languages have very few in comparison. Sign language relies heavily on facial expressions and body language to give meaning. So where we would say, “careful, the pie is extremely hot,” we might sign, “careful, the pie is very hot,” with a more pronounced facial expression on “very” to convey extreme heat. This means that a user with low to moderate fluency in English has to concentrate much harder, particularly when dialogue (captioning) is moving quickly.

Thus, captioning alone, as a transcription of spoken English, complete with its slang, colloquialisms, and wordplay, is not a perfect solution to the problem of creating accessible websites for the Deaf.

Alternatively, if we employ subtitling, we’re providing a written translation of a language for which there is no written form. (And therein lies the problem.) So how do we best provide a written translation for a language that has no written form? We provide sign language interpreting instead, as is sometimes seen on news broadcasts and current affairs programs. Where this isn’t possible, subtitles for the Deaf and hard of hearing, with notations on sound effects, would be most accessible.

There seems to be a perception by some people that subtitles for the Deaf use dumbed-down language. However, I’ve always perceived the language to be based on the English equivalent of the signs that would have been used had an interpreter been present. Of course this means that the grammar continues to follow an English pattern, but it seems to me that the subtitles are likely to be more accessible to a wider audience.

So what’s the solution?

As with most things, there isn’t a single, fix-all solution to the issue. However, as socially conscious designers, we’ve worked to understand the issues; now we can make an honest attempt at addressing them.

Writing for the web

Taking heed of all those Writing for the Web 101 tips you’ve seen is a good place to start and will enhance site readability for a wide range of users, including the deaf. Sign language is a very direct language, where the main point is stated first and then expanded upon—much like the “inverted pyramid” or journalistic style of writing that we so often recommend for writing on the web. Some other considerations are:

  • Use headings and subheadings.
  • Write in a journalistic style: make your point and then explain it.
  • Make one point per paragraph.
  • Use short line lengths: seven to ten words per line.
  • Use plain language whenever possible.
  • Use bulleted lists.
  • Write with an active voice.
  • Avoid unnecessary jargon and slang, which can increase the user’s cognitive load.
  • Include a glossary for specialized vocabulary, e.g., medical or legal terminology, and provide definitions in simpler language.

Language learners, or anyone doing the usual page scan for highlights, will benefit—and users with cognitive and learning disabilities will find it helpful too. As with all web documents, the content should be marked up in semantic, valid, standards-based HTML.
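These guidelines translate directly into semantic markup. As a rough sketch of what that might look like (the topic, headings, and glossary entry below are invented purely for illustration):

```html
<!-- One heading per topic, one point per paragraph,
     a bulleted list for steps, and a glossary that pairs
     specialized terms with plain-language definitions. -->
<article>
  <h2>Booking an appointment</h2>
  <p>You can book an appointment online in three steps.</p>
  <ul>
    <li>Choose a clinic near you.</li>
    <li>Pick a date and time.</li>
    <li>Confirm your contact details.</li>
  </ul>

  <h3>Glossary</h3>
  <dl>
    <dt>Audiogram</dt>
    <dd>A chart that shows how well you hear different sounds.</dd>
  </dl>
</article>
```

A definition list (`<dl>`) keeps each specialized term structurally tied to its plain-language definition, which also helps screen readers and language learners.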

Multimedia

Where possible, for web-based multimedia, the ideal solution is to incorporate sign language interpretation with the video as picture-in-picture, as this provides a synchronized presentation. However, this can be a very time-consuming and costly process. And as sign language is specific to certain regions, it will be more appropriate in some situations than others. As an alternative, sign language interpreting can be recorded and provided in addition to the audio and transcript or captioning.

Alternatively, a combination of captioning (to transcribe sound effects) and subtitling (written translation, with a focus on users with sign as a primary language) is most effective. Where this isn’t possible, a transcript of the dialogue will suffice; transcripts give the user the opportunity to print out the dialogue and read it at a comfortable pace.

Remember that the purpose of subtitling is to convey meaning, not to test the language skills of the audience. It is more important to convey the meaning and sentiment of audio content than to transcribe it verbatim.
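As an illustrative sketch only, here is how that combination of tracks might be expressed with the emerging HTML5 `<video>` and `<track>` elements (support varies widely by player and browser, and all filenames here are hypothetical):

```html
<!-- Captions transcribe dialogue and sound effects; subtitles
     carry a meaning-focused translation aimed at viewers with
     sign as a primary language. -->
<video controls>
  <source src="episode-01.mp4" type="video/mp4">
  <track kind="captions" src="episode-01-captions.vtt"
         srclang="en" label="English captions">
  <track kind="subtitles" src="episode-01-subtitles.vtt"
         srclang="en" label="Plain-language subtitles">
</video>

<!-- A printable transcript, plus a version with sign language
     interpretation recorded as picture-in-picture -->
<p><a href="episode-01-transcript.html">Read the transcript</a></p>
<p><a href="episode-01-auslan.mp4">Watch with an Auslan interpreter</a></p>
```

Offering the signed picture-in-picture version as a separate file keeps the primary video lightweight for users who don’t need it.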

Take action now

Transcribe all conference podcasts and make the content available in an accessible format. Organize an interpreter for your next presentation—record the translation and make it available online. Read one of the books listed below. Most importantly, whenever you have the chance, gain awareness of your local Deaf community. I’ll be surprised if that doesn’t make you want to learn a few signs yourself.

Suggested Reading

Nora Ellen Groce—Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard (Harvard University Press, 1985).

Harlan Lane—When the Mind Hears (Vintage, 1989) and The Wild Boy of Aveyron (Harvard University Press, 1979).

Oliver Sacks—Seeing Voices: A Journey into the Land of the Deaf (University of California Press, 1989).

References

[1] Mayer, C. “What Really Matters in the Early Literacy Development of Deaf Children.” Journal of Deaf Studies and Deaf Education 12.4 (2007): 411–431. (full text)

[2] Traxler, C. “The Stanford Achievement Test, 9th Edition: National Norming and Performance Standards for Deaf and Hard-of-Hearing Students.” Journal of Deaf Studies and Deaf Education 5.4 (2000): 337–348. (full text)

[3] Marschark, M., Lang, H., and Albertini, J. Educating Deaf Students: From Research to Practice. New York: Oxford University Press, 2002.

[4] Geers, A. “Spoken Language in Children with Cochlear Implants.” In Spencer, P., and Marschark, M., eds., Advances in Spoken Language Development of Deaf and Hard of Hearing Children. New York: Oxford University Press, 2006. 244–270.

About the Author

Lisa Herrod

Lisa Herrod began her working life as a sign language interpreter before moving to the web in 1999. Owner of Scenario Seven, a user experience consultancy in Australia, her work now focuses on an integrated, holistic approach to usability and accessibility. Lisa is co-lead of WaSP’s International Liaison Group.

49 Reader Comments

  1. Sometimes little bits of text trigger me and they linger: “deaf, HOH, and Deaf people who struggle with English as a second language”.

    This was something that I simply overlooked. The Deaf have a different language as their first language, so reading in any language is already a second (or third) language to them.

    Thanks for making me aware of this, I will bear it in mind.

    All in all a great article, thanks again.

  2. Thanks for your timely article, Lisa. I’m interested to track down any examples, not of the alternate [accessible] versions of a given video, but rather, of useable, standards-compliant [attractive] instances that demonstrate exactly the type of manifestation of accessibility considerations you mention.

    The JW Media player (http://www.jeroenwijering.com/?item=Making_Video_Accessible), for example, allows multiple audio tracks to be toggled on and off (and the now expected closed caption toggle), along with external playlists (presumably to allow a considerate online video publisher to make switching between alternate versions a snap).

    To assist keyboard-only users, HTML controls interacting with the embedded Flash media player are also a possibility (in addition to obligatory downloads in alternate formats), but could potentially be intrusive to users who have mouse control for direct embedded media manipulation.

    It sounds as though a developer putting together a wishlist to help meet any accessible video requirements would need to ask for:
    – Transcript
    – Transcript – audio description
    – Transcript – extended audio description
    – Timed closed captions (presumably W3C’s Timed Text XML)
    – Timed closed captions – audio description
    – Timed closed captions – extended audio description
    – Alternate audio track – audio description
    – Alternate audio track – extended audio description
    – Alternate video track – sign language (possibly multiple regions’ worth)
    – Alternate download formats (presumably WMV / AVI / etc.)
    – Alternate quality versions for downloading of all of the above
    – Possibly a MediaRSS feed describing all of the above
    – An embedded media player (something like JW player) capable of:
    — Control via HTML elements
    — Sending video player state feedback to HTML text fields (ideally via ARIA-type controls)
    — Same-page playlist control over alternate video versions (and their respective audio, captions and transcripts)

    And make it good-looking, valid XHTML, and search-engine friendly, to boot.

    (Did I miss anything?)

    It’s quite a shopping list – perhaps this is an ‘ideal’ collection for online video that might be deemed high-value and/or high-traffic.


  4. First of all, as a deaf web designer myself, I am sooo happy that there is an article about deafness in the famous A List Apart series!!

    Lisa got the point. Listen to her, she is right! We Deaf are neither disabled nor retarded. It’s just a matter of point of view: a stair makes the person in a wheelchair disabled; a ramp doesn’t. If no one could hear, it would be “normal” not to hear. The majority defines the minority.

    Enough about philosophy, let’s get real:
    Solutions on the web for the Deaf are neither impossible nor a matter of money. Like Lisa says, written text should be kept short and sexy. It’s not only a benefit for Deaf people; it’s also good for you “Normalos,” your hearing folks.

    In Germany and in Spain there are experiments to translate written text into a signing avatar automagically. I am sure, in the near future, there will be a smart multimedia object which delivers the content over the proper channel in the proper language. I hope the future versions of HTML 5 or XHTML 2 will support this!

    I could write more. I am happy for now, that’s all.

    Greeting from Auckland

    M.

  5. My wife and I actually met in ASL class (we are hearing), and I was on track to become a part-time interpreter. One of the things that always bothered me was people’s perception of the deaf community.

    In elementary and middle schools that have a “Deaf Program,” they only teach PSE (Pidgin Signed English), where they simply replace each English word with the corresponding sign. In high school, and in the Deaf community, we all signed ASL (American Sign Language). Not only did the schools not teach Deaf students the language of their world, they tried to force normal English grammar onto their language.

    Sorry this is a long post, but it hits a nerve… I always likened it to teaching someone Spanish by just replacing words without reordering for meaning. So instead of signing in ASL “Hello, How You,” they make them sign out “Hello, how are you”; or instead of “Store, You Go?” (with the facial expression), they would teach “Are. You. Going. To. The. Store.”

    Anyway, I’m done with my rant. Thanks for bringing this topic up to the web community, and I hope it opens some eyes.

    Phil

  6. As a web designer and a mother of a deaf child I have come across totally inaccessible design many times.
    Many of those times I needed to sit with my son to interpret sites for him so he could use them. In fact, the worst ones are the biggest corporations, with sites geared towards children’s entertainment.
    I believe it is all due to the lack of understanding of what deafness really is and that, as you mentioned, is not just the opposite of hearing.
    Spread the word sister!!

    A big thank you from Canada,

    FayeC

  7. Hi Lisa,

    I’m a deaf web developer from Australia. I know a little Auslan and a bit about the deaf culture.

    I like it when you bring up the difficulties that deaf people face when using the web (many people have no idea!), and the several suggestions you’ve brought up. Well done, keep it up!

    Cheers,
    Marty

  8. 0.38% of America’s population is deaf. That’s 38 people in a stadium of 10,000, yet 50% of our time is spent blogging about them. I think if they spent as much time as we assume they do on the internet, they’d be insulted by all the wasted effort. People already code sites with pure text/CSS now, and if there is Flash, it’s sIFR. I think this holy deaf talk is just trendy.

  9. The need to translate sound-based jokes and slang for the Deaf makes complete sense to me, but it seems a little offensive to suggest that the written language should be made plain by stripping it of synonyms. When “extremely” is meant, sometimes “very” is lesser.

    There is no reason why a Deaf person should be incapable of learning synonyms, and I think it’s insulting to suggest that, due to having a signed language as their primary tongue, the Deaf can’t have just as large a textual vocabulary as anyone else. Maybe I’m missing something, but widespread illiteracy among the Deaf seems to be given as a reason not to write beautiful prose rather than as a problem that needs to be solved by the Deaf community embracing a love of literature and literacy.

    As a voracious reader, I don’t really consider written language to be all that connected to its auditory equivalent. Words quickly become pictograms. I’m reading a wonderful book now by Steven Brust and I have no idea how any of the names are pronounced, nor do I care, because I’m not hearing the text in my head, I’m reading it. A book on tape is as much a translation of the text as a sign-language interpreter at a reading is.

    I mean, if transcription isn’t sufficient — barring the aforementioned sound-based jokes and slang — then what’s next? Special Deaf translations of the Complete Works of Shakespeare?

    A Midsummer Night’s Dream
    Act 5, Scene 1

    PHILOSTRATE No, my noble lord;
    It is not for you: I have heard it over,
    And it is nothing, nothing in the world;
    Unless you can find sport in their intents,
    Extremely stretch’d and conn’d with cruel pain,
    To do you service.

    vs.

    A Midsummer Night’s Dream DEAF EDITION
    Act 5, Scene 1

    PHILOSTRATE No, my good man;
    It is not for you: I have heard it over,
    And it is nothing, nothing in the world;
    Unless you can find they were joking,
    Very stretch’d and conn’d with mean pain,
    To do you service.

    It just won’t do.

  10. On re-reading that, I never got around to stating one of my main points clearly:

    Written language is a second language to “Normalos” too.

    Yes, those that are taught to read via phonics have a leg up initially, because phonics are a system for converting written language into spoken language, albeit an incredibly complex system for English once you move past the “Dick and Jane” primers. But this leg up is fleeting and, IMHO, ruins readers who never make the cognitive leap to understanding written language completely independently of audible language.

    Written language should be everyone’s second language. It should be the lingual equalizer.

  11. Having been working on a series of articles about usability and accessible design, I found myself wanting a fresh perspective when it came time to address the deaf community. I only regret that I have little to add to the topic after reading this article.

    The awareness of the subject (web) and audience (deaf) is solid, and I’m grateful that the issues have been brought to light.

    Personally, I’ll be sending every government web designer I know to this article. I’ve found that the government approach to 508 is very limited. Your article spells out the full depth of what it is to be deaf and online.

    Thank you.

    @Phillip I appreciate your perspective as well. I know a few interpreters, and because of this discussion I plan to utilize their knowledge to address any accessibility issues in the future.

  12. On third thought, as someone who is currently teaching his son to read, learning language is #$%@! hard and maybe it _is_ nearly impossible without the “leg up” of phonics.

    I’m not really sure where you would begin, short of treating Latin roots as pictograms and memorizing them, which has got to be rather hard if you’re not already acclimated to reading.

    I mean, I can’t think of any way around it for the Deaf, short of a video dictionary in which all of the definitions were in ASL. With maybe a separate ASL guide to Latin roots as a preface. And even that is an uphill slog.

    I suppose I should be astounded that so many Deaf persevere and attain literacy at all.

  13. We should keep in mind that captioning reaches a larger market than the hearing impaired. People who have no access to speakers, for example, don’t get to watch videos with spoken words.

  14. @ Dan_Guy:
    Regarding the translation to “Deaf” Shakespeare, I believe that the point is more than just the mere words on paper. Ms. Herrod said that:

    “So where we would say, ‘careful, the pie is extremely hot,’ we might sign, ‘careful, the pie is very hot,’ with a more pronounced facial expression on ‘very’ to convey extreme heat.”

    So it is not just the shift from “extremely hot” to “very hot,” but the shift from “extremely” to “very, with a more pronounced facial expression on ‘very,’” to convey extremeness. The actual text-based word is only part of the meaning being conveyed… as in much of “normalo” language, body language conveys meaning.

    We netizens are, at the core, used to this. We quickly invented the smiley face and the frowny face to try to express non-verbal cues that we couldn’t convey using just our text. Soon the smiley and the frowny weren’t enough, and we invented the big-smile smiley, the winking smiley, the sticking-out-the-tongue smiley, and a host of others… even before we moved from text-based BBSes to the Internet and the myriad image-based smileys that are out there now.

    My point isn’t that we need to add smileys to our text; my point is that I don’t think it is too much of a stretch for all of us “normalos” to remember that our typed words alone don’t accurately communicate our full message.

    Ms. Herrod’s article reminds me that there are many, many points to think about when I *am* relying on my typed words alone to try to communicate, especially over the web. As a web worker, I have known that I need to take the time to think about what I put in alt text: will “girl with red ball” really convey any meaning to the blind reader? Did I even have any real meaning to convey when I placed the image of the girl playing with her ball on my page?

    Now I have more to think about (always more :-)). I need to think about whether my play on words is appropriate, not only for the Deaf community in my own community, but also for someone for whom spoken English is a second language. I don’t think that I need to dumb my text down; I just need to think about what the true meaning is, and choose words and a tone that are appropriate.

    Have fun with your son and his adventures into the written word. It is, as you say, more complicated than we often remember :-)

  15. I’m sorry I don’t understand – how is deafness not a disability. When some members of a species lack an ability that the species in general has, isn’t that the definition of a disability?

  16. Still struggling to find words for my feelings in response to this article, I read Joe Clark’s response, and he put it better than I ever could:

    “If Deaf people aren’t disabled … then nobody is required to accommodate them. If Deaf people are merely a linguistic minority, then they have to get in line alongside other linguistic minorities.”

  17. I am Deaf. Smile. I wish there was more of an emphasis on captioning as there is for accommodating vision loss on the web. I am oral Deaf, and an oral deaf success at that, so most people don’t recognise that I am Deaf and rely heavily on lipreading. I bookmarked this, it’s fantastic!

  18. A List Apart is an online magazine catering to “people who make websites,” and the list of names widely known and associated with web accessibility, and therefore presumably familiar to the readers of A List Apart, is pretty short. Topping that list by a wide margin is Joe Clark, founder of the Open & Closed Project; author of Building Accessible Websites; and, over the years, of several articles published right here in A List Apart.

    I have read the first 16 paragraphs or so of this article about four times now, looking for any textual clue that would lead me to believe that Lisa Herrod didn’t strain to “drop” Mr. Clark’s and the OCP’s names mostly as a weaselly rhetorical device designed to:

    1) put the ideas presented in this article on a par with Mr. Clark’s (they aren’t)

    and to:

    2) establish a quick frame of reference for and relevance to A List Apart’s readership by horning in on Mr. Clark’s reputation in the field of web accessibility.

    Like I said, I’ve looked and looked, but I can’t find a reason to believe that’s not the case here.

    And hey, no biggie. I am not Mr. Clark’s agent. This aspect of the article would be fine with me, except for the fact that the views of Mr. Clark and the OCP are most definitely mischaracterized and the differences between captioning and subtitling get a poor explanation.

    A quick bounce over to the OCP web site will confirm.

    I conclude with a sort of paraphrase from the article: perhaps A List Apart’s editorial standards are not as high as many people think they are.

  19. Joe Clark brought up very good points. I’m a fan 🙂

    I like the idea of captioning things first, it’s crazy to think otherwise.

    As a deaf person, I can tell you that many Australian deaf people are bilingual (e.g. Auslan and English), and they are likely to be able to use captions, which is better than nothing.

    But the only thing I need to correct Joe on from “the article here”:http://openandclosed.org/docs/ALA265/ is that Auslan users do not speak Auslan. It doesn’t have a spoken or written form, just a visual one. (A spoken language such as English is complementary to Auslan to cover the written, spoken and visual forms.)

  20. First of all, thanks to everyone who has read the article so far; I’m really touched that it’s had such a positive impact on most of you. Thanks also to everyone who has emailed me privately.

    There are some really great points made here in your comments, so I’m going to respond to them collectively where I can.

    *Gordon*

    First of all, thanks for sharing such a fantastic resource. I haven’t seen that before so I’m looking forward to playing around with it on the weekend. Have you used it at all and if so, what do you think of it?

    The wish-list you’ve put together certainly looks as though you’ve covered most things. What I’d love to see is a video publisher that provides very simple PIP (picture-in-picture). And what about a collaborative tool? Something that would allow multiple layers that various people could contribute to.

    One of the big expenses, financially and time-wise, is the process of transcribing, interpreting or translating, and then pulling all of the resources together into one accessible format.

    Imagine if (and there may be) there was a tool that allowed for collaboration, where users could switch various layers on and off to display the information that best suited them. We regularly see people contribute translations of articles online at the WaSP “International Liaison Group”:http://www.webstandards.org/action/ilg/.

    If anyone knows of a tool that allows this, I’d love to hear about it.

    Thanks Gordon, you’ve really got me thinking about that now 🙂
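    To make the layered idea concrete, here is a minimal, purely hypothetical sketch in TypeScript (none of these names refer to a real tool or API; it only models the selection logic): the player records which layers a viewer has switched on, and displays whichever of those the current video actually provides.

```typescript
// Hypothetical sketch only: models a viewer toggling accessibility
// "layers" (captions, subtitles, plain-language text, sign-language
// video) on and off, as discussed above. Not a real tool or API.

type Layer = "captions" | "subtitles" | "plain-language" | "sign-language";

// Layers the viewer has switched on, in their order of preference.
interface ViewerPrefs {
  wanted: Layer[];
}

// Show every wanted layer that this particular video actually
// provides, preserving the viewer's preference order.
function activeLayers(available: Layer[], prefs: ViewerPrefs): Layer[] {
  const have = new Set(available);
  return prefs.wanted.filter((layer) => have.has(layer));
}
```

    A Deaf viewer who prefers an Auslan layer would then see it wherever a contributor has supplied one, and fall back to captions everywhere else.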

  21. *Hi Dan*,

    Thank you so much for persevering and coming back to post three times. It’s great to see that the article has got you thinking, which was the whole point 🙂

    It’s really interesting that by your third post you started talking about using pictograms and a video library of sign to learn words. Yours is a great post 🙂

    Some interesting research was conducted in Australia by “Linda Komesaroff”:http://www.deakin.edu.au/research/admin/pubs/reports/database/dynamic/output/person/person.php?person_code=komesli in the early 1990s around literacy levels of the Deaf. One of the findings was that Deaf adults who came from a Deaf family typically displayed significantly higher literacy rates than Deaf adults from hearing families. This was attributed to the fact that the former already had a solid grasp of their first language (Auslan) before they started to learn their second (English). Going back to your earlier point about pictograms and video libraries, I see those as being of a similar nature to a Deaf child learning English from a native Auslan user.

    I hope that was clear? 🙂

  22. I found this article *fascinating*. Thinking about sign language (or rather, specific kinds of sign language) as truly a different language in its own right, and not merely a representation of another language, completely changes how I think about accessibility.

    Thank you.

  23. bq. I have read the first 16 paragraphs or so of this article about four times now looking for any textual clue that would lead me to believe that Lisa Herrod didn’t strain to “drop” Mr. Clark’s and the OCP’s names mostly as a weaselly rhetorical device designed to: 1) put the ideas presented in this article on a par with Mr. Clark’s (they aren’t) and to: 2) establish a quick frame of reference for and relevance to A List Apart’s readership by horning in on Mr. Clark’s reputation in the field of Web Accessibility. Like I said, I’ve looked and looked but I can’t find a reason to disbelieve that’s not the case here.

    We have tremendous respect for Joe Clark and his work, “having frequently published him here”:http://www.alistapart.com/authors/c/joeclark/ . ALA has been a forum for some of Joe’s most important statements on accessibility; we support his work here and elsewhere.

    Lisa Herrod “is an expert”:http://www.alistapart.com/authors/h/lisaherrod in her areas of accessibility, with a background in working with people who are deaf. Experts frequently disagree, especially when their backgrounds and focuses of work differ.

    By her work and expertise, Lisa has earned the right to share her views on the limitations of captioning as they relate to the culturally deaf community. By his work and expertise, Joe has earned the right to “respond as he did”:http://openandclosed.org/docs/ALA265/ . We find merit in Lisa’s article and merit in Joe’s response.

    Alas, I cannot say the same about your comment, and I’m not sure by what right of work or expertise you make it. I’ve looked and looked but I can’t find a fact in it.

  24. This is a fascinating article. I had never thought of the hurdles that the Deaf community may encounter when reading a website. The need to translate a phonetic joke hasn’t occurred to me before, although, in retrospect, I have come across the problem when watching subtitled films that were not written in my native language.

    “Dan”:http://www.alistapart.com/comments/deafnessandtheuserexperience?page=1#9 makes some good points, though. I know that the web isn’t exactly a shrine of beautiful English, but should we stop trying to beautify our writing because Deaf people might read it?

    Finally, if:

    bq. The Deaf community is considered to be a linguistic and cultural minority group, similar to an ethnic community.

    then I think Joe Clark “says it”:http://openandclosed.org/docs/ALA265/ best:

    bq. Their sign-language translation merely becomes one of the many possible translations the producer of a film, video, or television program could provide.

    Is this a step forward for the Deaf community, or a step backwards?

  25. About 2 years ago I took 3 ASL (American Sign Language) classes as well as a Deaf Culture class and learned a great deal about the community as a whole… With the knowledge and experience I have gained, I have learned how important Section 508 compliance is, and since I do web design for a major college, this is a great thing to have.

    I really hope to dive deep into web accessibility, following both ADA and Section 508 compliance, and begin a mastery of it, but I just have not had the time necessary to do so yet.

    Thank you for this fantastic article and the insight you have shared.

    -Wes

  26. One of the primary aims of my writing this article was to highlight the gap in our knowledge as socially aware, accessible web developers. Another was to raise awareness about the different needs of deaf people in online environments.

    Just as we aim to design for a range of physical disabilities such as cerebral palsy, quadriplegia and RSI (repetitive strain injury), or types of sight impairment like “Monochromacy”:http://en.wikipedia.org/wiki/Monochromacy, “Retinitis Pigmentosa”:http://www.cnib.ca/en/your-eyes/eye-conditions/retinal-pigmentosa/Default.aspx and blindness, we should also consider that there is more than one way in which impaired hearing or deafness diminishes a person’s access to information.

    We are after all, really just talking about improving and enhancing access to information for all users, without bias.

    Captioning and subtitling are essential in providing access to information for many people, offline and on. However, it is imperative that we understand that there are many people for whom this is not an ideal solution. Do they belong to a minority group? Yes. Is it important that they consider their culture as being more relevant than their inability to hear? Yes. But that doesn’t mean we should completely disregard their need for equitable access to information.

    As I said, “it’s important to recognize Deafness primarily as a culture, rather than a disability.” While recognising the Deaf primarily as a cultural group, it is still important that we consider their needs when designing and building websites.

  27. It sounds like many of the things we can do to help the Deaf community in our text content would also be useful to other non-deaf readers who speak English as a foreign language. Is that a fair assessment?

  28. I’m a post-lingually deaf web designer/developer in the US. It’s impossible for me to find work because of people’s attitudes. I can talk but hearing? Not happening.

    “We are not disabled and Deafness is not a disability; it’s the perception of many hearing (people) that we are disabled, and that is our disability.”

    Couldn’t be truer. I look forward to the day when people start seeing the potential in everybody. Objectively. Just because someone hears doesn’t mean they listen.

    “We don’t have any experience with deaf people” is something I hear often. I reply “Do you mean you don’t have a single person in this office who doesn’t listen to a thing anybody says?”

    Listening is wasted on the hearing.

  29. As a deaf web developer with an interest in web accessibility, this topic interests me on a number of levels. Here, I’ll limit myself to commenting on one aspect of the discussion.

    Some of the comments are the equivalent of raised eyebrows and eye rolls over the fact that Deaf people don’t consider themselves disabled. So, in a nutshell, here is a bit of insight into the psyche of Deaf Culture.

    Deaf culture—like all culture—has a rich, fascinating and painful history. Deafness can be hereditary: Deaf people have deaf children. In less mobile eras, there were rural communities with large populations of deaf people. One such community was on Martha’s Vineyard off the coast of Massachusetts. On this island, deafness was so prevalent that all the members of the community knew and used the regional sign language to communicate. The result was that Deaf people were fully integrated into the community and held leadership roles.

    (For more on the Martha’s Vineyard Deaf, Lisa Herrod cites Nora Ellen Groce’s excellent book, Everyone Here Spoke Sign Language: Hereditary Deafness on Martha’s Vineyard. It’s one of my favorite books.)

    As education for the Deaf became formalized, the deaf children of Deaf parents were sent away to schools for the Deaf. “Oralism,” an educational philosophy championed by Alexander Graham Bell (yes, the telephone guy), was popular at the time. Oralism was a program in which deaf children were taught to speak the sounds and words of a language they could not hear. The method of teaching deaf children to speak often involved force, and did not yield the best results in spoken language.

    With Oralism being literally forced upon them, Deaf people took refuge in their Sign Language and community. Their world outlook became clarified: They believe they are whole. They believe they do not need to be “changed” or “fixed.” They believe, therefore, they are not “disabled.”

    Joe Clark makes an interesting point that if Deaf People reject the term “disabled,” their proper place is alongside other linguistic minorities. I can’t speak for Deaf People, but I believe that is where the Deaf view themselves.

    Joe Clark also makes a valid point that there are regional dialects of sign language. Thus, when providing communication access to a global audience, sign language may not be the most effective use of resources. That said, government sites could consider using their nation’s sign language.

    Thank you Lisa and ALA for the thought-provoking article and discussion.

  30. I’ve read the article with interest, but the Anglocentrism of it all puts me off. Non-native English readers have just as much difficulty in understanding slang, or reading complicated texts. I think even native speakers profit from the bulleted list of requirements, because it is a list that applies to writing for ANY audience.

    Apart from the captioning and subtitling of content, which this article more or less treats as a separate subject, I really fail to see the difference in approach to accessibility for the web in almost any other context.

    I can imagine already the way I’m going to be slashed about this comment, but please do understand that I’m absolutely not saying anything about the cultural and social aspects of d/Deafness, but solely about applying skills to make a web page accessible.

  31. *Hi Martijn*

    I understand that presenting the article from an Anglicised perspective might be off-putting for some readers. It is something that we discussed in the initial stages of my writing the article. I actually tried writing it from a generic perspective, but some concepts were too difficult to explain clearly.

    Also, as my studies and training were in Australia, I have a much deeper understanding of “Auslan”:http://en.wikipedia.org/wiki/Auslan than of other sign languages. For more of a European perspective, Harlan Lane’s _When the Mind Hears_ (see recommended reading list) documents the French deaf education system of the 1700s brilliantly.

    I agree with your other points and it’s great to see that you’ve taken from the article what I was trying to convey.

    There are general accessibility considerations that apply to a wide range of users that will also address many needs of Deaf users. Conversely, designing with Deaf users in mind will enhance the accessibility of content for a wide range of users that extends much further than the deaf/Deaf and HOH.

    My aim was to highlight the unique differences of the Deaf and the ways in which we can provide a better online experience.

  32. @Jeffrey Zeldman
    Thank you for the air-clearing response to my post.
    You’ve earned the right with me, as a reader, to believe that what you tell me is so, is so.
    If my take on the piece was off-base, then it is I who retract.

  33. Sarcasm is definitely not intended, but this article flew right over my head. There are those with various levels of deafness, apparently, and those with various levels of ignorance. Mine, after reading this article, falls into the category of “A Lot”.

    Links to simple, straightforward examples would help tremendously with comprehension and implementation, e.g. a simple audio/music file with explanations that address the issues discussed.

    P.S. Speaking of accessibility, for those who can never remember login information, a link that would email that info would help.

    Thanks.

  34. bq. I wish there was more of an emphasis on captioning as there is for accommodating vision loss on the web.

    I guess the difference is that vision loss affects pretty much every website, whereas hearing loss only affects a few. I have never needed to incorporate sound into a website, so I have never needed to accommodate deaf, Deaf or hard of hearing people. Every website I write, on the other hand, has visuals – mostly text, some tables, some graphics – so it is vital that I know how to write these to serve someone with vision loss.

    It’s not that deaf people are any less deserving of attention and consideration than blind people, but that their needs are – in terms of the internet – a heck of a lot less.

  35. First off, I’m HOH, so technically part of little-d deaf. I recently took an ASL class and found how much of the cultural aspect of Deafness I have dealt with, but never realized. I also learned how much I didn’t know about the Deaf culture. However, I’m going to attempt to address this question from the standpoint of a person who has been involved on both sides of the coin.

    James Edwards brings up the question of whether deaf people are really disabled. The disability question is highly sensitive, but it is partly a matter of perspective.

    To the Hearing world, the deaf are disabled. They must generally interpret or translate their communications into written or other forms. They must accommodate the fact that they cannot just poke their heads into a person’s office and ask a question. In this sense, they are more correctly *disadvantaged*, in that extra steps must be taken to accommodate their lack of hearing (or greatly reduced hearing).

    However, the general perception among most hearing people is that those who cannot hear are stupid / mentally incompetent / etc.

    Deaf/HOH individuals are not — barring additional medical problems — mentally incompetent. In essence, they have been _forced_ to be “stupid” by those who refuse to help them to overcome the disadvantage. In actuality, they are quite intelligent, and generally understand things on a much more complex level than those of us in the hearing world do, simply because they must figure out how it works, rather than being told.

    In short, being _disadvantaged_ (referring to the fact that there are obstacles to overcome in the world around them), and being _Disabled_ (mentally and/or physically incompetent) are two vastly different things, and the deaf are, for the large majority, not Disabled. They are disadvantaged (mostly by the ignorance of others around them).

    I hope that this helps individuals who don’t understand the reason why the disability question is so sensitive, and why Deaf individuals become so upset when the deaf are referred to as “Disabled”. I also hope it clarifies the issue.

  36. It seems what you’re calling for, Lisa, is localization. Localization of web sites is a much bigger issue in Europe than the US or Australia — are your visitors French, German…?

    On the one hand, I’d consider localizing my web site for Deaf people using the same metric that I’d use when I would consider localizing for any other culture. (If I had such a metric — I don’t generally worry about localization in my current day job.)

    But I also object to the notion that I should feel compelled to localize for the Deaf when I don’t localize for the Mexican, or the Chinese. If Deafness is not a disability, but a culture, why should this particular community bubble to the top for attention ahead of others — many of which are more populous?

    Deaf culture does not preclude D/deaf people in English-speaking countries from learning English. ASL might be an American Deaf person’s first language, but English is the language primarily spoken in the USA. As you attest, being Deaf does not mean you’re stupid. No argument there. Consider this, then: We expect Japanese people in the US to learn English…we don’t generally localize our TV programming or web sites for the Japanese. Same with the French. (Some TV shows have SAP for Spanish-language translation, but that’s pure marketing.) Doesn’t it follow, then, that it’s the Deaf person’s obligation to learn English to participate in English-speaking media, just like any other culture does?

    I’ve been a web accessibility advocate for a dozen years, and a parent of a deaf child for some time less than that. My son is fluent in English. When he sees the word ‘comfortable’ in television captions, he knows precisely what it means. He enjoys watching American television and reading books written in English, just like all the other kids his age. If you want to do that, though, you have to learn English.

    All that said, I think web localization is a massive, important issue, and when we discuss localization, we should consider Deaf culture right up there with the Greeks. 🙂

  37. As a bilingual Deaf person, I am sometimes concerned about how simplified “Deaf” English subtitles are perceived – inevitably as “dumbed-down” English – which reinforces the misconception that Deaf people are retarded or have very low intelligence. I would prefer all subtitles and/or captions to be more like literal translations with transcribed sound effects – I view media with full English captions, as I do understand English very well. For other Deaf people who do not have very good English comprehension skills, translation through Auslan would definitely be a better choice than “Deaf” subtitles.

    I am part of a group of volunteers that produces a TV programme for the Deaf community on community channels, called “SignPost”. We intentionally made the programme accessible to Deaf and hearing viewers, as well as Hearing Impaired viewers, by adding English captions and voice-over translations. However, there are rare instances where there is no Auslan on the screen, and this year we are looking into solutions such as superimposed interpreters on the screen – it is very true that sometimes we (even being Deaf ourselves) forget that there are Deaf people who cannot understand captions.

  38. My partner works with people with disabilities and, as a web designer, I’m always talking with her about new ideas for software: interesting programs we could create for children with learning disabilities. One of the main problems from a developer’s point of view is building applications that take into consideration all the different types of users and all their varying abilities and disabilities; e.g. children with Cerebral Palsy have a diverse spectrum of physical ability, and some of them have intellectual disabilities. The problem becomes: how do you create an application that is accessible for all users?

  39. The author, Lisa Herrod, makes the case that web designers should consider the Deaf community as a distinct culture versus a ‘disabled’ population. The article is focused on the author’s experience with the deaf community and her understanding that the Deaf community is a “linguistic and cultural minority group, similar to an ethnic community.” She uses the big D, as you would the big I for the Italian community, the big C for the Chinese community and so on.

    This is important, as the prevailing view in the web community is that deafness is the opposite of blindness: if we make audio captioning available, as we do alt text on an image for the blind, we have solved the usability issues for the deaf.

    This oversimplification is the wrong approach to designing web content for the Deaf community. Herrod does point out that in the last 18 months the web community has become more aware of deafness and how it influences the design of web pages. But this focus is still on captioning, “transcription of speech and important sound effects”, and subtitling, which is “written translation of dialogue”.

    Herrod goes further to make several points that all web designers should consider.

    The first point is that the Deaf community “speaks” sign language. Sign languages are ‘visual-spatial’ and many elements of these languages do not have a natural written form. Deaf people rely heavily on facial expression to convey essential meaning and emphasis, and there is no direct written translation for these meanings and emphases.

    Another important point is that direct translation will not always work; a phonetic-based language such as English can be misleading. Herrod uses the example of the phrase “once in a blue moon”. This phrase means occasionally or once in a while, but when translated into a signed language, the meaning of blue moon can be ambiguous or misleading.

    English as a language uses many synonyms, while there are very few in signed languages. Sign languages rely more on facial expression or body language than on other words with similar meanings.

    These are all important points that present a challenge for anyone designing for the web.

    Herrod does make a case for several solutions to this problem. There is no single solution, but there are several things that can make a website more user friendly for the Deaf community:

    * Reference Writing for the Web 101
    * Use more multimedia

    These suggestions are very important and would make any website targeted at the deaf community more usable for them. The use of multimedia is really key; as a deaf person, I always rely on any medium with video or visual access. It is hard to grasp meaning from written language alone. Also, as a native Japanese speaker, for me the point of a discussion always comes first, which makes communication much easier: once you establish what you are talking about, it is easier to comprehend.

    This is a very good article because Lisa Herrod has really attempted to explain what the Deaf community is about. The concept that it is a community with its own language should be the starting point for web design.

  40. Hi James,
    Thanks for your comment on the article.

    Yours is an interesting question:
    bq. _How do you create an application that is accessible for all users?_

    The problem is that it isn’t really possible. I think the best we can do is focus on making a site that is accessible to as many people as possible.

    Accessibility specialists such as Brian Kelly and Liddy Nevile have been writing about building for ‘Adaptability’, which focuses on creating accessible sites by taking a holistic, inclusive approach that considers the primary user groups, purpose and context.

    It’s really interesting reading.

  41. Hi Nori

    Thanks for such a great summary of the article, I’m glad you liked it and that as a Deaf person, you agree with my approach.

    Asian sign languages are fascinating, particularly for the palm writing of certain words. Does Japanese SL use palm writing for certain characters?

    All the best,

    Lisa

  42. Thank you, Lisa, for the article. However, for the most part you are biased towards sign language, which is accessible only locally to a tiny group of Deaf people – as Kerri Hicks explained in her post (#36):

    “If Deafness is not a disability, but a culture, why should this particular community bubble to the top for attention ahead of others — many of which are more populous?”

    Should we try to squeeze 5 different sign language interpreters (ASL, BSL, etc.) onto one screen to “translate” a single spoken English track, and on top of this cover it with captions – one row in “normal” written language and another row in “plain” language, with a glossary for “specialized” vocabulary? ;0) That would be very interesting and indeed very confusing to watch.

    I totally agree with Joe Clark that captioning is the main thing we have been fighting for for years, and it is accessible to many more people than just those who have hearing loss. It is the number one priority we should focus on right now.

    Do you have any user research or any hard data to prove how many Deaf people are actually saying they “cannot” read and are demanding sign language interpreters for online videos vs captioning?

    Here’s Marlee Matlin’s presentation on behalf of the NAD (National Association of the Deaf) and on behalf of 36 million deaf, Deaf, and hard of hearing Americans:

    http://www.nad.org/issues/civil-rights/communications-act/21st-century-act/marlee-matlin-fcc-field-hearing-testimony

    Marlee Matlin signs, but CAN READ AND WRITE in English. So can millions of D/Deaf and HOH Americans. Not only can we read captions, but we can also write emails, chat online, send text messages, and read websites.

    I don’t care if a person with hearing loss can speak or sign – as long as s/he has a good command of written language. Only with those skills will you be able to succeed in mainstream society.

    I totally agree with Dan Guy saying: “Written language should be everyone’s second language. It should be the lingual equalizer.”

    I also agree with Martin Smales that many deaf people in Australia are bilingual (Auslan and English) – and there are many of them here in the USA, too. Many college-educated Deaf signers I have personally met (including those from Deaf families) are well familiar with the slang and idioms of English.

    I understand that certain websites that cater to Deaf users – such as Deaf organizations, sites selling products and services for Deaf people, Deaf forums, etc. – would need vlogs or signed videos, which would make perfect sense. However, including sign language interpreters on such mainstream sites as CNN, BBC, etc., is insane.

    By the way, I personally can speak, read, write, sign in several languages and I am PROFOUNDLY deaf.

    Your ending doesn’t make sense either about online accessibility: “I’ll be surprised if that doesn’t make you want to learn a few signs yourself.”

    It would have been more useful if you had spent most of your article educating web specialists and website owners about how to find and use resources for captioning and transcription. Many of us are frustrated enough trying to explain to hearing website owners why we need them. On top of this, you are making it more confusing for hearing people by promoting this sign language and Deaf culture angle, and more frustrating for the majority of us who need captions – which in turn would increase SEO for those website owners.

    Maybe you should instead end by saying that you’ll be surprised if all of the SEO benefits don’t make hearing people want to caption more videos?

  43. P.S. I would like to also add that there are many D/deaf and HOH people who can speak, read, write, and sign more than one language. So no excuses.

  44. I don’t think the analogy of reading text as a second language holds up. Someone who first learns sign, and then learns to read text is only slightly different from someone who first learns to speak and then learns to read text. No one is born reading text.

    For one, the signing is the foundational communication, and for the other, the speech is. Still, excellent article; good food for thought.

  45. With more people aging, deafness is becoming more frequent. The senses, especially the eyes and ears, are our prime sensors, and I know through a relative how hard it is when these fade. Simple things become almost impossible.
