Shades of Discoverability

If your working week is anything like mine, I’d wager the term “discoverability” comes up often. Typically we use it when asking if a feature explains its presence and function. Will users encounter and understand it properly? Discoverability feels like a straightforward concept: if someone doesn’t realize what a product can do, she’ll never get the most from it.

I’ve had several of these conversations recently, and while they always lead to interesting territory, they’ve also felt somehow imprecise. Many modern digital products enable complex, emergent behavior, not just pure task completion. We’re building habitats, not just tools; yet we often think of discoverability only in terms of task execution. I think this binary framing (something is either sufficiently discoverable or it isn’t) is too narrow, and I’d like to make the case for a more nuanced understanding.

Designers frequently rely on established patterns and controls to help communicate function. But some of the old faithfuls are starting to lose their potency. Scrollbars are steadily vanishing from view, and hover states mean nothing to a fingertip. Flat design also attracts criticism for harming discoverability; the argument goes that it discards the visual cues that communicate what a product can do and how to interact with it.

Let’s look more closely at some different ways to communicate function. I use three categories, listed here in order of strength.

1. Explicit cues

Explicit cues are direct instructional prompts: “What’s new” boxes, help text, arrows, coach marks. So long as they’re well written, they’re clear and unambiguous. The downside is that they intrude into the experience, which means they attract designers’ ire. We’ve all heard comments like “If you need instructions, your design has failed.” This is dogmatic nonsense, but we can’t deny that explicit cues are crude. They’re also easy to design poorly, in which case they hamper the user experience more than they help it.

2. Implicit cues

Here, the inherent properties of an element help to explain its purpose. There are more flavors of implicit cue than you may think:

Static visual cues (affordances)

The shape, texture, alignment, or another visual property of the element at rest helps to suggest its function. This is, of course, what James Gibson termed “affordance,” a concept Don Norman subsequently brought to design. A shiny, drop-shadowed button with a pointy arrow can imply progression in a multi-step process, and so on.

Designers spend a lot of energy sculpting static affordances, and for good reason. However, there are other ways to provide implicit cues.

Motion (kinetic response)

Once in motion, an object has new ways to suggest its nature. Does it move freely in one direction, but stiffly in another? Does it rotate, swing, slide, or fold? Does it stick to other parts of the interface?

In his article Look, and Feel, Dan Wineman argues that things usually move only after the user has decided to interact, meaning kinetic response isn’t a direct replacement for static affordance. Very true. But motion still has terrific power to explain potential function. Kinetic responses form a large part of that vague thing we call “feel,” and as such, motion design has become a focus for some significant digital companies.
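
To make this tangible, here’s a minimal sketch of a kinetic cue written with Pointer Events. The specifics are assumptions for illustration (the “card” element id, the damping factor, the spring-back timing): the element follows the pointer freely along the x-axis but resists vertical movement, quietly suggesting that horizontal dragging is its intended interaction.

const card = document.getElementById("card") as HTMLElement;
let startX = 0;
let startY = 0;
let dragging = false;

card.addEventListener("pointerdown", (e) => {
  dragging = true;
  startX = e.clientX;
  startY = e.clientY;
  card.setPointerCapture(e.pointerId);
});

card.addEventListener("pointermove", (e) => {
  if (!dragging) return;
  const dx = e.clientX - startX;          // moves freely left and right
  const dy = (e.clientY - startY) * 0.15; // stiff, damped vertical movement
  card.style.transform = `translate(${dx}px, ${dy}px)`;
});

card.addEventListener("pointerup", () => {
  dragging = false;
  // The spring back is itself a cue: the snap implies the resting
  // position is meaningful.
  card.style.transition = "transform 200ms ease-out";
  card.style.transform = "translate(0, 0)";
  setTimeout(() => (card.style.transition = ""), 200);
});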

Audio response

Due to the web’s early excesses of auto-playing MIDI and ads, audio is still not always welcome in digital products. That’s a shame. Audio is particularly good at providing implicit cues, from tiny interface sounds (scrapes, pops, and buzzes) right up to glorious fanfares.

At their simplest, audio cues can suggest when a user is interacting with something in the right or wrong way. People usually interpret a high or rising tone as positive, and a low or falling tone as negative. This simple knowledge alone can help you add an extra dimension to, for example, a drag-and-drop interaction or form validation.
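
As a rough sketch of how that might sound on the web, here’s a rising or falling tone built with the Web Audio API. The exact frequencies, durations, and volume are illustrative choices, not recommendations.

const audioCtx = new AudioContext();

function playCue(success: boolean): void {
  // Browsers suspend audio contexts until the user interacts with the
  // page; since these cues always follow an interaction, resume here.
  if (audioCtx.state === "suspended") audioCtx.resume();

  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  const now = audioCtx.currentTime;

  // Rising tones read as positive; falling tones as negative.
  const [from, to] = success ? [440, 660] : [440, 220];
  osc.frequency.setValueAtTime(from, now);
  osc.frequency.linearRampToValueAtTime(to, now + 0.15);

  // Fade out quickly to avoid an audible click when the tone stops.
  gain.gain.setValueAtTime(0.2, now);
  gain.gain.linearRampToValueAtTime(0.0001, now + 0.2);

  osc.connect(gain).connect(audioCtx.destination);
  osc.start(now);
  osc.stop(now + 0.2);
}

Call playCue(true) when a dropped item lands in a valid target, or playCue(false) when a form field fails validation, and the interaction gains an extra dimension without a single pixel of clutter.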

Text

Although we’ve covered instruction as an explicit cue, a well-chosen label can also have subtler implications. Labels may use metaphors to help people understand function. A button marked Address Book hints at certain behaviors: unfolding, searching, updating. A control stamped with PANIC BUTTON suggests something else altogether.

3. Discovery through use

Users also stumble across features through everyday use. Sometimes this is the happy result of an accidental mistap or errant keypress, but more often it’s sparked by a hunch and some “I wonder…” experimentation. People use their experience of previous products to form assumptions about new ones. Pinched to zoom a photo? Perhaps that works here too.

Gestural inputs make this guesswork discovery particularly common. The principle of direct manipulation encourages users to explore, swipe, and twist. Since we’re still in the infancy of mainstream touch interfaces, gestural standards are incomplete: a swipe may work in App A but do nothing in App B. As a result, we often see people try out speculative gestures in touch-interface usability tests. This experimentation gives designers a chance to anticipate how people may play with their apps, and to add some considerate “They thought of everything…” moments.
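
A simple recognizer is enough to start catching those speculative swipes. In this sketch, the element id, the distance threshold, and the revealActions handler are all hypothetical stand-ins.

const row = document.getElementById("result-row") as HTMLElement;
let touchStartX = 0;
let touchStartY = 0;

row.addEventListener("touchstart", (e) => {
  touchStartX = e.touches[0].clientX;
  touchStartY = e.touches[0].clientY;
});

row.addEventListener("touchend", (e) => {
  const dx = e.changedTouches[0].clientX - touchStartX;
  const dy = e.changedTouches[0].clientY - touchStartY;
  // Count mostly-horizontal movement beyond 60px as a swipe.
  if (Math.abs(dx) > 60 && Math.abs(dx) > Math.abs(dy) * 2) {
    revealActions(dx > 0 ? "right" : "left");
  }
});

// Hypothetical handler: reward the guess by revealing secondary actions.
function revealActions(direction: "left" | "right"): void {
  console.log(`Swiped ${direction}: show the row's extra actions here.`);
}

Even if the swipe ultimately does nothing, logging these attempts during testing tells you which gestures your users already expect.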

Appropriate discoverability

These categories describe discoverability as something more than just a static visual trait. Instead, discovery happens over time, and relies on both the varied properties of the object and the user’s interactions. So how do we choose from this broader palette?

I’ve listed these discovery methods in order of their strength. Explicit cues are more powerful than implicit cues, which are in turn more apparent than accidental discovery. But with power comes intrusion. Cues cause clutter. If all your elements shout, you drown out the ones that really need to speak to the user.

Clearly we can’t provide explicit cues for everything. Forcing every element to explain itself can only result in a slew of infantilizing walkthroughs. Nor can everything have an implicit cue. Static affordances offer elegant economy of communication, but they’re not always appropriate. Even implicit cues can add complexity. A key motivation of the flat design movement is a backlash against excessive cues: ridges, drop shadows, and pointy arrows that weren’t really necessary to convey information.

That said, it’s most important that users understand your key features. And, hell, if that means you need instructions and arrows, go for it. Bluntness has its virtues. Explicit cues fail when they’re deployed for trivial controls that deserve only implicit cues or accidental discovery.

Once you’ve handled your most important features, look for elegant ways to explain the availability and function of mid-level features. Drip these introductions in as they’re required, rather than trying to convey everything within the first minute. This progressive disclosure allows users to build up their mental models over time: any modern console game will provide instructive examples. Also consider different ways to provide implicit cues. Motion, audio, and text could all make your app more understandable without visual untidiness.
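
Here’s one minimal way to drip a hint in on a schedule, sketched with localStorage. The key names, the threshold, and the maybeIntroduce helper are all illustrative.

function maybeIntroduce(feature: string, afterUses: number, show: () => void): void {
  const usesKey = `uses:${feature}`;
  const seenKey = `seen:${feature}`;

  // Count each use of the basic feature.
  const uses = Number(localStorage.getItem(usesKey) ?? "0") + 1;
  localStorage.setItem(usesKey, String(uses));

  // Once the user seems comfortable, introduce the related feature, once.
  if (uses >= afterUses && !localStorage.getItem(seenKey)) {
    localStorage.setItem(seenKey, "true");
    show();
  }
}

// For example: after the user's third search, hint at saved searches
// (showSavedSearchHint is a hypothetical coach-mark function).
// maybeIntroduce("search", 3, showSavedSearchHint);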

For less important features, it’s time we let go of forced discoverability and simply allow people to come across them naturally. This may not be an easy sell to a team reviewing your sketches or mockups, but perhaps this framework will help explain the hierarchy of discoverability. Sometimes it’s okay to discover things by accident: just ask Archimedes.

Further reading

Dan Saffer, The New Era of Non-Discoverability.

2 Reader Comments

  1. Thanks for moving the discoverability discussion into the modern era. I find the distinctions between static visual cues, motion, audio, and text to be helpful.

    One blurring of those distinctions is the occasional intersection of visual and motion cues — that is, the use of subtle movements before any interaction has taken place to communicate affordance. For instance, the first time I viewed the search results screen of the AirBnB iPhone app, there was a subtle animation in which each result jumped slightly to the right before quickly snapping back into place. This effectively communicated to me that I could perform a right swipe on a result to access additional actions. A more discussed example would be iOS’s “slide to unlock” color animation.

    Whether these fall into the “static visual cues” category because they preempt any interaction, or in the “motion” category because they involve animation (or are just accepted as having characteristics of both) is probably beside the point. But having a framework for implicit cues, and knowing how to appropriately use and combine the cues, is very valuable.
