Have you ever had that sinking feeling when you realize—just a split second too late—that you shouldn’t have clicked “Okay” in the “Are you sure you want to quit?” dialog?
Yes? Well, you’re in good company—everybody has had a similar experience, so there’s no need to feel ashamed about it. It’s not your fault: it’s your software’s fault.
Why? Because software should “know” that we form habits. Software should know that after clicking “Okay” countless times in response to the question, we’ll probably click “Okay” this time too, even if we don’t mean to. Software should know that we won’t have a chance to think before accidentally throwing our work away.
Why should it know these things? Because software designers should know that we form habits, whether we want to or not.
Habit formation is actually a good thing: it saves us the trouble of having to think when confronted with interface banalities, and it lessens the chance that our train of thought will get derailed. In the case of the “Are you sure you want to quit?” dialog, our hands have memorized close-and-click as a single continuous gesture. That’s good, because most of the time we don’t want to think about the question—we just do the right thing. Unfortunately, our habits sometimes make us do the wrong thing, and we don’t even have time to realize our mistake until after we’ve made it.
So, as designers we are led to a general interface principle: If an interface is to be humane, it must respect habituation.
What about making the warning harder to ignore? A subtle warning will get passed by, so let’s pull out all the stops: we’ll blink the screen and play a loud screeching noise to ensure that the user is paying attention. Try as we might, it still won’t work. The more in-your-face the warning is, the faster we’ll want to get away from it (by clicking “Okay”) and the more mistakes we’ll make. No matter how forcefully the computer presents the warning, we’ll still make the same mistake—clicking “Okay” when we don’t mean to. But it’s still not our fault: as long as it’s possible to habituate to dismissing the message, we will habituate, and then we’ll make mistakes.
What about making the warning impossible to ignore? If it’s habituation on the human side that is causing the problem, why not design the interface so that we cannot form a habit? That way we’ll always be forced to stop and think before answering the question, so we’ll always choose the answer we mean. That’ll solve the problem, right?
This type of thinking is not new: It’s the type-the-nth-word-of-this-sentence-to-continue approach. In the game Guild Wars, for example, deleting a character requires first clicking a “delete” button and then typing the name of the character as confirmation. Unfortunately, it doesn’t always work. In particular:
- It causes us to concentrate on the unhabitual task at hand and not on whether we want to be throwing away our work. Thus, the impossible-to-ignore warning is little better than a normal warning: we end up losing our work either way. Losing our work is the worst software sin possible.
- It is remarkably annoying, and because it always requires our attention, it necessarily distracts us from our work (which is the second worst software sin).
- It is always slower and more work-intensive than a standard warning. Thus, it commits the third worst sin—requiring more work from us than is necessary.
In the Guild Wars example, points two and three aren’t particularly apropos because deleting a character is an infrequent action. However, if we had to type the name of a document before being allowed to exit it without saving, we would find it very burdensome.
What have we learned? That interfaces that don’t respect habituation are very bad. Making the warning bigger, louder, and impossible to ignore doesn’t work; any way we look at it, warnings lead us into a big black interface pit. So let’s get rid of the warning altogether.
Undo to the rescue
Merely removing warnings doesn’t save our work from peril, but an “undo” function does. Let me say that again: the solution to our warning woes is undo. With a robust undo, we can close our work with reckless abandon, secure in the knowledge that we can always get it back. With undo, that horrible “oops!” feeling vanishes the moment we recover our work.[1]
Because we form habits, we’ll never be able to guarantee that we won’t have an “oops!” moment. Instead, designers must accept that it will happen and design for it. Whenever we have the opportunity to throw away work, the computer must allow us to undo our actions.
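The principle above can be sketched in code. This is a minimal illustration (all names here—`UndoStack`, `do`, `undo`—are hypothetical, not any real toolkit’s API): instead of asking for confirmation before a destructive action, the program performs it immediately and records how to reverse it.

```python
# Hypothetical sketch: perform destructive actions without a warning,
# but remember how to reverse each one.
class UndoStack:
    def __init__(self):
        self._undos = []

    def do(self, action, undo):
        """Run `action` immediately and remember its reversal."""
        action()
        self._undos.append(undo)

    def undo(self):
        """Reverse the most recent action, if any."""
        if self._undos:
            self._undos.pop()()

doc = {"title": "Draft", "body": "important work"}
stack = UndoStack()

# Clearing the document happens instantly -- no "Are you sure?" dialog...
old_body = doc["body"]
stack.do(lambda: doc.update(body=""),
         lambda: doc.update(body=old_body))

stack.undo()  # ...because the "oops!" moment is recoverable.
```

The design burden shifts from interrupting the user to capturing enough state to reverse each action, which is exactly what respecting habituation requires.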
This leads to one of the most basic and important mantras of interface design: Never use a warning when you mean undo.
Google Mail is an outstanding example of this mantra. When you delete an e-mail, it immediately gives you the option to undo that action. How humane! This neatly sidesteps the issue of warnings (as well as the visibility issue of undo). When we make a mistake (which we are bound to do), it isn’t very costly because we can just undo it. With undo, we spend less time worrying and more time doing work.
Of course this is only one layer of undo—and Gmail goes even further. After you delete a message, it isn’t gone forever… it sits in the trash so that you can retrieve it if you decide later that you didn’t actually want to delete it.
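Gmail’s two layers of safety can be sketched as a soft delete. This is a hypothetical model (the `Mailbox` class and its method names are mine, not Gmail’s actual implementation): deleting moves a message to the trash rather than destroying it, the UI gets an immediate undo callback, and anything still in the trash can be retrieved later.

```python
# Hypothetical sketch of soft delete: nothing is destroyed outright.
class Mailbox:
    def __init__(self, messages):
        self.inbox = list(messages)
        self.trash = []

    def delete(self, msg):
        """Move a message to the trash and return an undo callback."""
        self.inbox.remove(msg)
        self.trash.append(msg)
        # Layer one: the UI can offer this undo immediately.
        return lambda: self.restore(msg)

    def restore(self, msg):
        """Layer two: anything in the trash can be retrieved later."""
        if msg in self.trash:
            self.trash.remove(msg)
            self.inbox.append(msg)

box = Mailbox(["invoice", "newsletter"])
undo = box.delete("invoice")  # in the trash, not gone
undo()                        # "Undo" clicked: back in the inbox
```

Note that both layers are the same mechanism viewed at different times: the immediate undo is just a convenient handle on the trash.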
Alas, this is a lesson that Google Calendar hasn’t learned yet. And, as predicted, I’ve deleted events that I didn’t mean to. Sometimes I’ve deleted the wrong event, which is particularly bad because then I’m not even sure what I’ve deleted. Without undo, there is no way to find out.
Actually, even Google Mail hasn’t fully embraced the lesson. When you remove a label from Gmail, up comes one of those wretched warnings. Why is it that Google can get it so right in one place, and mere clicks away get it so wrong? Perhaps because “warn-think” is so ingrained that it takes a heroic effort to break free. Even companies that are generally bastions of good design, like 37Signals, get this one dead wrong.
Using a warning instead of an undo is the path of least resistance from a programmer’s standpoint, and it doesn’t require any new thinking from a design standpoint. But that isn’t an excuse for our computers to be inhumane.
Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: “Never use a warning when you mean undo.” And when a user is deleting their work, you always mean undo.
Oh, and the next time you see a warning used instead of undo, send the designer of the application/website a nice e-mail suggesting that they implement an “undo” feature instead. Send them a link to this article. Let’s see if we can’t change the way people design on the web—and in the process make everyone’s computing life more humane and less frustrating in one little way. Let the war on warnings begin!
- [1] The idea of the “oops factor” came out of a conversation with Laura Creighton.