Inline Validation in Web Forms
Issue № 291


Web forms aren’t great conversationalists. They ask a bunch of questions, then wait until you answer them all before they respond. So when you register for that cool social network or use an e-commerce site, it’s pretty much a monologue.


You can blame most forms’ poor etiquette on the way they’re built. Web forms that use a basic submit-and-refresh model of interactivity don’t respond until you hit the “submit” button—but it doesn’t have to be this way. Real-time inline validation can help people complete web forms more quickly and with less effort, fewer errors, and (surprise!) more satisfaction.

Inline validation gives people several types of real-time feedback: it can confirm an appropriate answer, suggest valid answers, and provide regular updates to help people stay within necessary limits. These bits of feedback can be presented before, during, and/or after users provide answers.
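
The "staying within limits" case, for instance, takes only a few lines of JavaScript to wire up as a live character counter. Here's a rough sketch (the field names and the 140-character limit are illustrative, not from our test forms):

var bio = document.getElementById('bio');           // e.g. a <textarea>
var counter = document.getElementById('bio-count'); // a <span> beside it
var LIMIT = 140;

bio.addEventListener('input', function () {
  // Update the remaining-character count on every change to the field.
  var remaining = LIMIT - bio.value.length;
  counter.textContent = remaining + ' characters left';
});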

Putting inline validation to the test

To better understand the design considerations behind inline validation, I worked with Etre, a London-based usability firm, to test six variations of a typical web registration form with 22 average users, ranging in age from 21 to 49. Aramys Miranda developed the forms we tested.

Fig. 1. The basic registration form we tested had no distractions so people could focus on the task of “creating an account.”

Of our six forms, the control version validated input only when someone clicked its “create account” button. The other five versions used different methods of inline validation. We measured success rates, error rates, completion times, satisfaction ratings, and standard eye-tracking metrics for each variation. We presented each form randomly to minimize familiarity bias.

What did we learn about inline validation?

Our participants were faster, more successful, less error-prone, and more satisfied when they used the forms with inline validation. Eye-tracking also showed that they “fixated” on the forms with inline validation less frequently and for less time, which shows that they found these forms easier to process visually than the forms without inline validation. This was likely because they didn’t have to reread the entire form after submitting it to resolve any errors—instead, they resolved errors as they went along.

As you can see in the video below, people got a response from the control version of the form only after they completed it to their satisfaction and clicked the “create account” button. Only at that point were any errors shown, and they remained until the form was corrected and resubmitted. Submitting and resubmitting forms to check answers can lead to a frustrating and often unsuccessful behavior sometimes called “pogosticking.” Pogosticking is common when we ask people to provide an answer they may not be able to guess correctly the first time. Selecting a unique username, for example, often involves pogosticking: no one can know beforehand which usernames a website has available, so they guess, click “create account,” find out the username they want is taken, guess again, click “create account” again, and so on.
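
An inline approach can sidestep pogosticking by checking availability the moment someone finishes typing a name. Here's a rough sketch of such a check (the /check-username endpoint and its "free"/"taken" response are assumptions for illustration, not part of our test forms):

var username = document.getElementById('username');
var note = document.getElementById('username-msg'); // a <span> beside the field

username.addEventListener('blur', function () {
  var xhr = new XMLHttpRequest();
  // Hypothetical endpoint that answers "free" or "taken" for a candidate name.
  xhr.open('GET', '/check-username?name=' + encodeURIComponent(username.value));
  xhr.onload = function () {
    note.textContent = (xhr.responseText === 'free')
      ? 'This username is available'
      : 'Sorry, that username is taken';
  };
  xhr.send();
});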

Video 1. The control version of our form. Note how error messages are only visible after the form is submitted.

Our inline validation forms worked differently: they gave real-time feedback as people filled in answers, using lightweight and direct success messages (green checkmarks) and error messages (red triangles and text) next to the form input fields. You can see the difference in Video 2 below.
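
Mechanically, this kind of feedback needs little more than a message element beside each input and a routine that toggles its state. A rough sketch (the markup, class names, and e-mail rule are illustrative; CSS would supply the actual checkmark and triangle icons):

// Assumes markup like: <input id="email"> <span class="msg"></span>
function showMessage(input, isValid, text) {
  var note = input.nextElementSibling;
  note.className = isValid ? 'msg success' : 'msg error'; // styled green/red in CSS
  note.textContent = text;
}

// Example: a simple format check when the user leaves the e-mail field.
var email = document.getElementById('email');
email.addEventListener('blur', function () {
  var ok = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email.value);
  showMessage(email, ok, ok ? '✓' : 'Please enter a valid e-mail address');
});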

Video 2. The best-performing version of our form used inline validation to provide real-time feedback immediately after people answered questions.

When compared to our control version, the inline validation form with the best performance showed compelling improvements across all the data we measured. Specifically, we saw:

  • a 22% increase in success rates,
  • a 22% decrease in errors made,
  • a 31% increase in satisfaction rating,
  • a 42% decrease in completion times, and
  • a 47% decrease in the number of eye fixations.

Participant comments highlighted their strong preference for getting real-time feedback from our form:

“You’d rather know about your mistakes as you go along.”

“It’s much better than getting all the way down and hitting ‘submit,’ only to find out that it doesn’t like your username. It’s much better when it tells you as you go along.”

These results highlight how much inline validation can improve web forms. But what’s the best way to achieve these results? When and how should we validate user answers?

Use inline validation when the answers aren’t obvious*

Not all web form questions are equal. There are some things, such as given names, that we know instantly. Other things, such as new passwords, take more thought. When you consider using inline validation, you must first understand the questions the form asks. This was evident in our test results.

In the first half of our web form, we asked questions people knew the answers to: first name, last name, e-mail address, gender, country, and postal code. In the second half of the form, we asked questions that were harder to answer correctly the first time. We had participants select a username (how could they know what was available?) and a password (with strict formatting requirements). It wasn’t surprising that we observed different behaviors in the first and second half of the forms.

Only 30-50% of our participants saw the validation messages (Figure 2) in the first half of our forms—whereas 80-100% of our participants saw the messages in the second half. This is probably because people did not need or expect confirmation for correct answers in the first half of the form. Confident in their responses to these simple questions, most people paid no attention to the validation messages when they appeared.

Fig. 2. Validation messages on one of the forms we tested.

In contrast, in the second half of the form, when our participants completed more difficult questions (such as username and password), they were less confident in their answers and therefore more inclined to seek confirmation. They were also more likely to hesitate, giving them ample opportunity to spot the validation messages (including those already on display in the first half of the form). The eye-tracking gaze path below (Figure 3) illustrates this behavior. The validation messages to the right of the input fields got a lot of visual attention in the second half of the form but none in the first half.

Fig. 3. Visual attention was paid to the inline validation messages that appear to the right of input fields in the second half of this version of the form.

These observations seem to indicate that inline validation is most useful for input fields that are difficult to complete correctly on the first try. The fact that participants were confused when simple questions were marked “correct” supports this interpretation:

“Are you telling me I entered a valid name or my name correctly?”

“Is this a confirmation of a correctly formatted postal code or my correct postal code?”

These types of participant questions caused some minor problems during testing. Our participants knew we had no way to know their correct name or postal code, so they knew that the green check mark didn’t mean “correct.” But what did they think it meant? They weren’t sure—and that was the problem. Not knowing what the message meant, our participants stopped to ask questions of the moderator instead of confidently answering what were very easy questions.

We can blame the green check mark symbol, which implies “correct,” for some of the confusion, but we also learned that we can avert this confusion altogether by reserving inline validation for questions people need help with. Inconsistency, however, may be the disadvantage of this approach. If success messages appear alongside every form field except for simple fields, people may assume the data they entered is invalid when no success messages appear. As a result, they may try to “correct” perfectly valid input in these fields. We’re not sure if this would be a big problem, but it’s certainly something to consider.

Testing when to show inline validation*

Once you know where inline validation can help, the next step is to put it into action. To better understand when to show inline validation messages, we tested a few variations in the top half of our form. (See Video 3 below.)

After

In this version of the form, we displayed a validation message (success or error), after the user indicated that she was done answering a question by moving on to the next one. (This is validating “on blur” in technical speak.)

While

In this variation, we displayed (and updated) a validation message while the user answered each question. (That is, “on key press.”)

Before and while

In this version, we displayed a validation message before the user answered each question—that is, as soon as they focused each form element—and then while they answered the question. (This is validating “on focus and on keypress.”)
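
In code terms, the three variations differ only in which events trigger the validation routine. A rough sketch, not the actual code from our test forms (the field names and the length rule are illustrative; only one strategy would be wired up at a time):

var field = document.getElementById('first-name');
var note = document.getElementById('first-name-msg'); // a <span> beside the field

function validate() {
  var ok = field.value.trim().length >= 2; // illustrative rule
  note.textContent = ok ? '✓' : 'Too short';
}

// "After": validate when the user leaves the field.
field.addEventListener('blur', validate);

// "While": validate on every keystroke instead.
// field.addEventListener('input', validate);

// "Before and while": also validate the moment the field gains focus.
// field.addEventListener('focus', validate);
// field.addEventListener('input', validate);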

Video 3. The three inline validation variations we tested: after, while, and before and while.

For username and password questions, we used the “while” method with a short delay in each version we tested. Our early prototyping work revealed this method made the most sense for questions with strict boundaries, such as the set of usernames currently available or the required formatting for a secure password.
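
That “short delay” is essentially what developers now call a debounce: each keystroke cancels the pending check and schedules a new one, so validation only runs once the user pauses. A rough sketch (the 500 ms delay, field names, and format rule are all illustrative assumptions):

var username = document.getElementById('username');
var note = document.getElementById('username-msg'); // a <span> beside the field
var timer;
var DELAY = 500; // milliseconds of inactivity before validating

username.addEventListener('input', function () {
  clearTimeout(timer);              // a new keystroke cancels the pending check...
  timer = setTimeout(function () {  // ...and schedules another after the pause
    var ok = /^[a-z0-9_]{4,12}$/i.test(username.value);
    note.textContent = ok ? 'Looks good' : '4-12 letters, numbers, or underscores';
  }, DELAY);
});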

The “after” method helps users complete forms more quickly

When we used the “after” method in the first half of the form, participants completed the form seven to ten seconds faster than when we used the “while” and “before and while” methods respectively. Why? Here’s what happened when we used the “while” and “before and while” methods: when several participants noticed an error message while trying to answer a question, they entered one additional character into the input field, then waited for the message to update. If the updated message continued to show an error, they entered another character, then waited for the validation message to update again, and so on, resulting in longer average completion times.

The “before and while” method not only caused longer completion times, but also produced higher error rates and worse satisfaction ratings than the other inline validation variations we tested. Our participants articulated their strong distaste for this method:

“It’s frustrating that you don’t get the chance to put anything in [the field] before it’s flashing red at you.”

“When I clicked in the First Name field, it immediately came up saying that [my first name] is too short. Well of course it is! I haven’t even started!”

“I found it quite annoying how red crosses came up when you hadn’t finished typing. It’s just really distracting.”

These negative reactions, longer completion times, and higher error rates illustrate that validating inputs prematurely can be harmful. Instead, when you validate open-ended questions, give feedback after the user finishes providing an answer. In situations in which people need help sooner, give feedback while they work toward an answer, but use an appropriate delay so premature error messages don’t frustrate them.

Testing how to show inline validation

In each of the variations that tested when to show inline validation, we always placed success and error messages to the right of input fields. But to learn more about how to show validation messages, we also tested two additional variations: one where success messages faded after a brief delay, and one where they appeared inside the input field. In each version, the error messages faded only after the user resolved the error.

Fig. 4. The three variations of displaying validation messages we tested.

Of these options, our participants fared best with persistent messages. These always-visible elements reassured participants that the fields they completed successfully stayed that way. As I mentioned earlier, though, we observed some confusion about the meaning of the green check mark: does it mean “correct” or “valid”? You might try adding explanatory text (such as “complete” or “done”) to affirm success, or experiment with different validity indicators altogether, to prevent the confusion people had with the check mark we used to represent correctness.

Not fade away: keep success messages prominent outside form fields

When success messages faded away, some participants worried that they had done something to cause once-valid fields to become invalid. Persistent messages also enabled those who wanted to “check each field as they went” to do so, while accommodating those who wanted to “check all of the fields at the end.”

We observed that fading messages were also easily missed because the vast majority of our participants were “hunt and peck” typists—as opposed to touch-typists—which meant that our users watched their fingers on the keyboard instead of the screen when entering data. As a result, they often missed messages that were on-screen for only a few moments.

Displaying validation inside form fields failed to deliver any substantial benefit. The positioning of these messages was—necessarily—inconsistent. (While it’s possible to make validation messages appear inside form fields, it is much more difficult to display them inside standard drop-down lists, so we displayed them outside these elements.) In-field validation was not any more noticeable than messages displayed outside the fields. In fact, the messages inside input fields were almost as far away from where people entered data as the other messages we tested (placed to the right of input fields). Had the messages been positioned closer to the inputs, they might have performed better.

The result: much gain, less pain

Our testing provided some great insights. It also revealed opportunities to more fully explore where, when, and how we should use inline validation to further alleviate web form pain, so that people can complete forms and get on to what they really want to do online. Which, trust me, isn’t filling in web forms!

Thanks

* I’d like to thank Etre for all their work scripting, running, and reporting on this study, and Aramys Miranda, who coded all the variations we tested.

42 Reader Comments

  1. As for any confusion caused by validation marks next to ‘easy’ questions, could this be solved by scrapping the ‘valid’ mark altogether? Jakob Nielsen touches on something similar in a column about usability mistakes in the movies (http://www.useit.com/alertbox/film-ui-bloopers.html):

    “After all, you design for authorized users. There’s no reason to delay them with a special confirmation that yes, they did indeed enter their own passwords correctly.”

    Any thoughts on this?

  2. Did you test placing the checkmarks to the left of the questions? I generally prefer them there: they line up more easily (so I can scan upward in a straight line rather than jumping around) in left-justified lists, and most lists I see have check marks to the left (much like bullets).

    I’d also like to test the theory that giving the check box three states might help: unchecked, checked, and error. That might give a quick and easy way to figure out which need attention, along with the explanation to the right. Of course you’d need a way to make sure the user doesn’t try to check the box.

    Finally, it should not come as a surprise that most people didn’t look directly at the green check. I found it quite easy to use peripheral vision to notice the green check appearing, and didn’t need to shift to it.

  3. Regarding the before, while, and after options, has anyone tested the ‘while’ option only when a user is in an invalid-marked field? (I would also suggest not using a “please wait…” indicator in this case.) To me it seems to play into the game-y aspect of filling out forms (“make it all green!”), but it can also be considered as a different use case: fixing invalid data.

  4. I have seen such forms several times, but—I could not tell why—I never implemented them on my own websites. Now, reading this article, I see the advantages of using inline input validation. Not only is it faster, it also reduces load on the server and, of course, on the user’s internet connection. Moreover, it is less frustrating to be notified instantly, and in the right place, than to have to wait and search for the incorrect input. Thank you for the article; it made me think about using this method.

  5. Really well put together piece of data.

    We’re about to embark on a pretty major piece of e-commerce development for a pretty niche audience in terms of usability, and this study into different types of validation is more than useful to me.

  6. This research is an excellent contribution.

    Some of this basic validation can be undertaken by client-side JavaScript loaded at the same time as the page (e.g. a particular number type, format or range) and others might require separate AJAX-style requests (e.g. the username is not already taken). If the data is sensitive, designers and developers must be careful that the inline validation doesn’t prove to be a security weakness. For example in the username check, it’s quite likely the functionality can be abused to perform username enumeration i.e. identify some or many valid usernames before perhaps attempting to guess matching passwords.

    It is also worth being aware of the types of validation that shouldn’t use this inline approach – the password input validation is probably checking length and the characters included, not whether the correct password matching the username has been typed. So the comments about correct/valid/complete/done need to be applied consistently, and consideration must be given to what is displayed when the subsequent server-side REVALIDATION identifies a problem: e.g. the postcode is of the correct format but doesn’t match anything in the Royal Mail’s postcode database, or appears to be in a different town, region, or country, or the user missed some previous step in a multi-stage form.

  7. This was a really good piece of advice that I had not thought of before… when a user gets a ‘valid checkmark’ after the first three fields, and suddenly the fourth doesn’t have one (even if it’s a field that doesn’t technically need to be validated), it could slow down the user… so good advice there.

    Thanks!

  8. While reading this article I came up with something I’m going to try next. This should help when using validation marks only on difficult inputs (like username, password, etc., as mentioned in the article). One can put a green valid mark when, for example, the username is correct and available, and also change the way the text field looks (hiding the borders, changing the background color, …). And on fields where the application can’t find out if the data is correct (like last name, phone number, etc.), you can still change the input’s appearance without showing the green valid mark.
    What do you think about this alternative? I’ve just come up with it…

  9. Thanks for the kind words and suggestions for further testing. One of the further explorations coming from this research is dropping the valid indicator altogether, but that does not help with the difficult inputs where validation helps people a lot. So instead we considered changing the format/text of the indicator to something more like “that’s a valid answer” vs. “that’s a correct answer.” Small difference, but important.

    We didn’t place the checkmarks to the left of inputs because of the varying message sizes for error states: “too short,” “taken,” “valid,” “not secure,” etc. The different kinds of messages need some room to be displayed, hence to the right of the inputs where they can scale. If we only did images/icons and no text, that would be an option, but then you get no help in remedying errors other than “there is an error here.”

    Not sure how a “while” could be used for only invalid fields? Do you mean after a user leaves a field in an invalid state and then comes back to it?

    You can format inputs after user input (or sometimes even while) but you shouldn’t change the content of their input to make it valid in the vast majority of cases. Here’s an example of the former using Input Masks: http://www.lukew.com/ff/entry.asp?756

  10. Sorry for not being clear, @lukew.

    “Do you mean after a user leaves a field in an invalid state and then comes back to it?”

    That’s exactly what I mean. Think about a situation when an email address is missing a necessary dot or @ sign. When coming back to that field and adding the missing character the validation mark could appear instantly (‘while’ the character is typed), instead of when the user focuses out of the field.

    1. The user might not focus out of the field at all. This can be the only invalid field, and they could use the mouse to click the submit button, press Enter, or the field could be scrolled entirely out of view in order to move to another part of the form.
    2. The user is actually looking for this confirmation. When entering data for the first time, ‘blinking’ valid/invalid status images and changing help text while typing is distracting, confusing, and can be considered as speaking out of turn. When fixing invalid data, it might appear more as a prompt confirmation of what the user is doing to remedy the problem.

    Hope this clears things up — it’s a small change, but it could elevate the experience. Also, originally from @Trare Bapho:

    “We didn’t place the checkmarks to the left of inputs because of the varying message sizes for error states.”

    This seems like a good idea: line up the checkmarks and invalid-marks on the left, but keep the textual messages on the right.

  11. Hi,

    Thanks for a great article! 🙂

    An idea: why not make use of the label elements to display status messages? That way you don’t have to add an extra column on the right side.

  12. Something came to my mind after my last post.

    Is it really necessary to display the valid/confirmed messages at all? It seems to me like they add further problems for the user instead of making it easier, “Are you telling me I entered a valid name or my name correctly?”

    Why not just add inline validation messages when something is wrong?

  13. Ehm, I might be missing something here, but how do I actually code this kind of behaviour into my already existing form? Right now I have a pretty straightforward form on one of my websites that uses PHP to collect all the data and send it to a specific e-mail address. In the PHP I just check whether all the necessary fields are filled in correctly.

    Now I already knew this wasn’t the best way to do it as far as usability goes. I really understand the point this article is making and the inline validated forms seem to be a whole lot better, taking away the confusion while filling in. The only problem is: How to do this?
    This article makes an interesting read, but please: share the love!

  14. @2: yeah, found that one as well, along with some basic JavaScript examples. So I started playing with the basic JavaScript ones, since I’m not that experienced with all this. This resulted in a small form which gets checked when the submit button is pressed. This is already quite OK, since the form is not that big (7 fields), but I can’t get it to check each field after focusing on it, like in the movie in the article. Right now I use an onsubmit="return validate(this)" in the form tag. I already found the onblur event handler and tried that on the individual input fields, but no luck. Any ideas?

  15. Thanks for such a detailed and useful article, Luke. Not sure if you got into it in this test, but I’d be curious about eye-tracking with regard to the actual content of validation messages too, beyond just their placement and persistence. Did you see any patterns if validation messages included unnecessary (but maybe brand-appropriate) words like “Please” or end punctuation—is that clutter that impedes the experience? Also, did you notice cratering and folks getting hung up when the text of error messages didn’t perfectly mirror field names?

  16. Thanks for the doing the research on this. I’m just on the verge of releasing a handy little jQuery plugin that will do form validation much more simply than most drop-in options (just tag your form fields with css classes and load the script at the top). I’m happy to have learned that most of my assumptions were correct 🙂

    One thing that people should be aware of is that it is *imperative* to use server-side validation as well. If you rely on the browser to make sure your users are giving you valid input, any hacker worth his salt will figure this out in 2 minutes flat and twiddle your script. Inline validation should never be considered anything but an aid for your users – it is not a security measure. I’m sure that’s a given for most readers but it’s worth reiterating 🙂

  17. Luke, did you try to apply some “typing speed detection” method, so that “while” checking could work in a new way?

    I’ve thought a few times about applying this to a project I’m working on, but haven’t done it yet. It would be great to have some numbers proving this method out. Maybe you have some other information about this.

    Regards, Anton.

  18. Hi Luke

    Thanks for adding to the knowledge base regarding form validation, an oft-neglected part of the form-filling experience. The more data we can get, the better (although the statistician in me would feel much more comfortable if, in the future, you had a sample of at least 30 people!).

    You’re spot on in your article when you talk about how things like ambiguous interpretation of the tick symbol make the development of best practice recommendations for inline validation difficult.

    As I see it, the one really clear-cut case for inline validation is when users have no idea whether their answer will be accepted or not. The username question is a typical example—nothing the user intuitively knows will give them any indication of whether their answer will “pass.” It’s these cases that lead to pogosticking.

    For all other cases, inline validation may cause more problems than it solves (as you’ve seen). Also, the “gain” is not as significant. Fixing a typo in a postcode is going to take marginally longer after server-side validation than after inline validation but if the error messaging is done well, the impact on the total user experience is likely to be very slim.

    Just my 2c.

    Cheers
    Jessica

    PS – Any thoughts on what the user experience might be like using inline validation in a more complex form? Helpful? Distracting? Noisy?

  19. Firstly, this is a very interesting article. Thanks for sharing this information.

    While the results make absolute sense, I have to question your methodology. I think perhaps your findings might have been even more decisive if you’d chosen a different methodology.

    You see, if you’re testing a form (and particularly looking at errors) then it doesn’t make sense to show the same person multiple versions of the same form.

    If you take 22 people and have 5 forms, then (with randomisation) each version is seen by only 4 or 5 participants who have not already seen the form (albeit with different validation). The other 17 times, the form is shown to a participant who will have completed it before: at least once, and up to 4 times.

    So if I make an error with the 1st form, I’m less likely to make it again with the 2nd when I see the exact same question. I’m certainly not going to make the same mistake 5 times in a row.

    I understand that this would mean increasing the numbers considerably, but I don’t think that simply randomising the order the participants see the forms is an adequate substitute. Particularly if you want to start quoting figures on the results.

    Incidentally, are the forms available? In order to get your numbers, perhaps you could do a crowd-sourced usability study with a number of volunteer test facilitators?

  20. David Hamill makes an important point about randomisation. The problem is that with a within subjects design (where each participant sees all of the forms), no amount of randomisation is going to avoid bias. Here’s why.

    As a participant, you will quickly learn, on the first form, which input is appropriate. For example, Luke writes that the password field had “strict formatting requirements”. Let’s say it needed to include a number and an upper case letter. The participant will learn this on the first form and then just use the same password (or the same rule) on all subsequent forms. So I’d predict that participants made few, if any, errors on forms they saw after the first one.

    That seriously undermines the stats in the article. “When compared to our control version, the inline validation form with the best performance showed compelling improvements across all the data we measured.” Which form is “the inline validation form with the best performance”? Is this the form that did best overall (e.g. inline validation form 3), or the inline validation form that did best for each participant (which could have been a different form for each participant)? If the latter, then this will almost certainly be the 4th or 5th form as participants are really in the swing of it by then.

    And although Luke says, “We presented each form randomly”, does this include the control form or was it just the inline validation forms that were randomised? This is important, because:

    – If the control form was always presented first, you could get the results Luke reports because of a learning effect.
    – If the control form was always presented last, the results could be due to a fatigue effect.
    – If the control form was presented randomly, then 3-4 of the participants got it last (1 in 6). The other 18-19 participants had an inline validation form AFTER the control form, and so they would benefit from any learning effect.

    No amount of randomisation will control this bias. It needs a between subjects design.

    Nevertheless, it’s still an interesting piece of work!

  21. David Hamill and David Travis make very interesting and salient points about the methodology.

    While we’re on the topic, I thought I would add a slight concern about the use of percentages to describe the improvements in performance from inline validation compared to the control, i.e., “22% increase in success rates,” “22% decrease in errors made,” etc.

    Given the relatively simple nature of the form, I would imagine that the number of failures, errors etc would be very small (e.g. less than 10). In such cases, percentages can be misleading as a change of one or two in the raw numbers will lead to large changes in percent.

    Would it be possible to provide the raw numbers, rather than percentages? It would also be interesting to know how much that improvement was down to the “pogosticking” effect.

    Also, could you please explain how you measured “errors made” on the inline validation version? For example, did you count an entry that was changed—in response to the inline validation, but before submitting the form—as an error?

  22. The approach I have been using on my own sites in the past four years is the following:

    1) Do not distract users with ‘approval’ signs (as by the suggestions of Reverend Duck in comment #1 and eyeMac in #13). Only show feedback when something is wrong. As the article shows, in many cases the tick has a disturbing effect on the user, and only showing it when it is actually useful (e.g. to avoid the pogosticking issue of username fields) introduces inconsistencies in the form’s behaviour (‘if this is the only field that gets approved, is anything wrong with the other ones?’). An additional benefit is to reduce the visual clutter in the form.

    2) Always show error messages on blur only (after the user leaves the field), but once they are there, you may (as yuval suggests in #3 and #11) remove them on key press (while typing). However, if an AJAX round trip to the server is involved (e.g., again, when checking if a username is taken), the delay might have a confusing effect if the error message disappears when your input has already changed again, so I’d limit this kind of feedback to validations that can be performed directly on the client and are immediate, e.g. when checking that two passwords match. But then we again have a problem of inconsistency.

    Another possible approach that would be interesting to consider is what we might call a ‘delayed while AND after,’ which would be triggered on blur as usual, but also when there is a sufficiently long pause between key presses. This can be easily implemented by calling the validation function on every key press, but via a timeout of a convenient fraction of a second. A subsequent key press would first cancel the previous timeout and then issue a new one with the updated input: only if there is a sufficient pause between key presses would the timeout run out and the validation function be invoked. This would have the benefit of leaving the user alone while he types and reacting when he hesitates.

    While in the past years I have relied heavily on this technique for ‘autocomplete’ search fields to avoid excessive screen flickering, I have never, until now, thought of using it for validation. However, because you are no longer responding to the user while he is typing fast, this might be a way to confidently extend the ‘remove error messages while typing’ approach mentioned above to validations requiring an ajax request, and thereby solve the inconsistency issue.

    Such an approach could become quite sophisticated if you throw in the suggestion by small_jam (#20), and detect the user’s typing speed and adapt the validation delay accordingly.

    (Sorry for the überlong comment)

  23. Hi Luke,

    very interesting article, thank you. My question: when showing success messages next to each answer, how do you handle questions that have a default answer pre-selected (which we strive for wherever possible)?

    I was considering showing checkmarks next to each answer in a large-scale financial system that has forms everywhere, but eventually it created too many consistency questions.

    Bye, Zoli

  24. Real-life studies of web usability are still very small in number, probably due to the time and costs involved, so this is a very welcome addition. Very clear article as well!

    I know that adding a few numbers or percentages is very tempting, but with only 22 people as guinea pigs it is better to leave them out; methodologists (like me) are like sharks smelling blood. 🙂 The observations you write up in this article are actually much, much more useful than any number or percentage.

  25. I’ll add my voice to those saying that you don’t need to mark an entry with a tick.

    Firstly, you reduce the confusion as to whether a tick means “field complete”, “field syntactically valid” or “field correct”. Secondly, your eye-tracking study showed that people didn’t focus on the green ticks at all. They aren’t adding anything to the party.

    I know how to fill in a form. Most people do. Yes, sometimes I will make a mistake, and when I do, it’s useful to be notified of that if the form or server can tell that I have done so – but I really don’t need to be told when I have successfully typed a name into a box marked “Name:”

    I’d also agree that, on initial entry, an error message should only be displayed onBlur, but on re-entry, it should be amended onKeyUp, to ensure users can see when they have entered a correct format.

    (Of course, if you’re going for the easy option and using HTML validation rather than JavaScript, you probably won’t have such fine control.)

  26. Thanks Luke for an interesting article. I’m just wondering how would inline validation work for someone with a severe vision impairment. Have you done any testing of these forms with people who use screen readers or screen magnifiers?

  27. That’s nice, but will those forms be accessible?
    Why not break down the form, showing one field at a time?
    That way you don’t confuse the user with a lot of information at the same time, and you can give feedback for each field individually; you also won’t have problems with lack of space. You can even give some help about how to fill in each field.

  28. Thanks all for the comments. Just catching up on all of the new ones!

    @yuval
    Only having the validation appear when someone comes back to the field would mean either they figured out there was an error themselves OR we told them. If we told them, then the potential for inline validation might be missed since we want to correct errors as people complete the form.

    To your second point: why separate the icons from the messages? They are part of the same communication to the user. Depending on input field format and length, they could end up a good distance apart. Which would not be good!

    @eyeMac
    Labels keep the question people need to answer front and center. I would not override them with validation messages. Also, at the point someone is filling the input field they have moved past the label. Below might be an option but then the form would jump up and down as messages appear. Not good to “bounce” the UI like that as it disorients users.

    “Are you telling me I entered a valid name or my name correctly?” -this was addressed in the article! 🙂

    @Margot Bloomstein
    We didn’t get into any analysis of the messaging text in this study. Lots of other great studies on how to write up error messages out there!

    @small_jam
    Yep. The user name and password fields both have time-tuned delays on the messages we return. There was a good amount of exploration on when to trigger the feedback.

  29. @Jessica
    “Fixing a typo in a postcode is going to take marginally longer after server-side validation than after inline validation but if the error messaging is done well, the impact on the total user experience is likely to be very slim.”

    Actually, I don’t agree with this. It takes quite a bit longer to submit, re-render the page, have the user notice the error, locate where on the page it is, resolve it, re-enter any “protected” fields that were wiped out on resubmit, and then hit submit again. Much more efficient to resolve errors as the form is being completed!

    “Any thoughts on what the user experience might be like using inline validation in a more complex form?”
    Without knowing what you mean by a more complex form—more questions? hard questions? domain expertise required?—I’d have to assume it would be more useful. If something is harder to fill in, helping people fill it in correctly should be even more useful!

    @DavidHamill
    “So if I make an error with the 1st form, I’m less likely to make it again”
    I’ll let the Etre guys (they are the usability testing experts) comment on this, but we accounted for it by compressing the available namespace for user ID selection so that each person did not get their first selection. Hence there was consistency in the error experience across all forms regardless of order. Therefore, getting things right later, as you suggest, was not a factor in whether or not they experienced the different validation formats.

    @dtravisphd
    “So I’d predict that participants made few, if any, errors on forms they saw after the first one.”
    Not true, as I explained above. We forced username saturation errors each time. So each participant had the error formats come up.

    “does this include the control form or was it just the inline validation forms that were randomised”
    yes.

    @reflekt
    If there is a default answer then no need to validate it. As we saw in the study certain fields do better with inline validation than others 🙂

    @Iza
    We did not test with vision-impaired users in this study. However, the input field responsible for the error uses a “double visual emphasis” to stand out from the rest of the form elements. In this case, the message is red, and we’ve added red instructions just to the right of the input field. This doubled-up approach is important because simply changing the text color of the label might not be enough to be noticed by color-blind people. So to ensure that everyone knows where an error happened, we double the visual emphasis. We could have opted for an icon and red text, or a background color and instructions, to highlight the inputs responsible for the error as well. Any form of double emphasis could work.

  30. @lukew

    I didn’t mean that we should override the label with new text. I was thinking more of adding some kind of icon beside it, and maybe extending the original text with a message.

  31. Hello everyone. I’m Simon from Etre — the user experience company that assisted Luke with this study. Luke asked me if I would stop by and address some of the comments regarding the methods we employed in evaluating the six designs, so here goes (gulp)…

    _Why did you use 20 users and not more?_
    We did use more! We used 22 users 😉

    Joking aside, the simple answer is: time and budget. We tested with 20+ users because this is the minimum number required to obtain “statistically useful” metrics. (As Jakob Nielsen says, “”when collecting usability metrics, testing 20 users typically offers reasonably tight confidence levels”:http://www.useit.com/alertbox/quantitative_testing.html “). We would have loved to have involved more users in order to have improved the reliability of the metrics we recorded but, unfortunately, our schedule and budget just wouldn’t stretch to it. Therefore (and as per almost all other usability studies) all metrics quoted in Luke’s most excellent write-up should be considered indicative as opposed to definitive.

    Perhaps we should all band together and petition one of the larger usability companies like UIE or NNG to conduct a more extensive study? I suspect that they wouldn’t be prepared to make the results available for free however, given the level of investment required. (I like @DavidHamill’s idea of conducting a “crowd sourced usability study with a number of volunteer test facilitators”; however, this creates certain methodological issues of its own.)

    In terms of publishing the stats we recorded in the form of raw numbers (in addition to percentages), this sounds like a good idea to me. Luke?

    Why did you show users multiple versions of the same form? (Comments #22-24)
    Again, this was an issue of time and budget. While we would have liked to show each user a single variation of the form — thereby eliminating the sort of “familiarity bias” that @DavidHamill describes in comment #22 above — testing each of the six designs with 20 users would have been impractical. (It would have taken a month or so just to complete the testing, never mind the analysis, write-up, etc.; and would have cost thousands of pounds in incentives for users to participate… and that’s without factoring in the cost of user recruitment, staffing the project, etc.!)

    Another option would have been to reduce the number of variations we tested (and to have tested these variations with 20 users apiece); however, we would then have had to sacrifice the “completeness” of the study. This is something we weren’t keen to do, as we thought it important to test things like the “after”, “while” and “before and while” phasing of the validation; the various different placements of the resulting validation messages; and to include a control version — which is something we wouldn’t have been able to do had we slashed the number of variations to, say, two.

    Personally speaking, I’m not sure how detrimental “familiarity bias” was in the context of this particular study. Since we were testing a standard registration form — the kind found on almost all B2C / B2B / B2E websites — our users were likely to be very familiar with its constituent form elements (and their associated input requirements) prior to taking part in the testing, and therefore weren’t learning much that they didn’t already know about these elements during our testing (i.e. as they progressed through the six variations of the design). In other words, they most likely already knew how to enter their name, address, and contact details using the types of form elements we provided; and, as for username and password entry — the “trickiest” of the form elements they encountered — the requirements of these fields were not only similar to those of many other sites but were also explicitly spelt out directly beneath the fields in question. The only thing our users weren’t familiar with was the type of validation employed and, since this was different from variation to variation, they were unable to become familiar with it (which is exactly as we needed things to be in order to assess it).

    As a result of this, the study can be said to have been designed to determine whether the inclusion of inline validation made users more efficient in filling out an already familiar form (or whether it only served to distract them). It was thus not designed to determine whether the inclusion of inline validation made users more efficient in filling out an unfamiliar form. (In this case, “familiarity bias” would have been a real issue, as users would have become more familiar with the form’s constituent elements with each new variation they encountered and this, accordingly, would have had a great effect on the results).

    I’m not saying that some “familiarity bias” isn’t present in the results of our study — clearly, users would have become more familiar with the general design and layout of the form as they worked their way through the six variations — but I do think we took reasonable steps to minimise it, bearing in mind the constraints of the project.

    Were all variations presented randomly? (Comment #23)
    Yes. All variations — including the control form — were randomised.

    To be clear here, “randomised” means that each variation appeared an equal number of times in each position on the testing schedule (or as near to that as was possible, given that 20 users isn’t neatly divisible by 6 variations). For example, Variation A was shown as the first stimulus to three participants, as the second stimulus to another three participants, and so on. (Note: we also took care to ensure that, for example, Variations A and B were not shown one after the other to every single user.)

    Could you please explain how you measured “errors made” in the inline validation variations? (Comment #24)
    In all six variations — i.e. including the control version — “errors made” was the number of errors returned after users clicked the Submit button at the bottom of the form.

    That is, “errors made” does not take into account the errors made by users that were caught by the inline validation mechanisms and subsequently corrected by users prior to their clicking the Submit button.

    One last thing…
    If you’re interested in this type of research, please subscribe to our newsletter (http://www.etre.com/subscribe/), where we regularly publish usability-related insights gleaned from our work.

    Oh, and thanks for all the great comments on this study 🙂

  32. Forms are always a difficult area, and are one of the places within a website where you can very easily annoy users. Great piece of information; thanks for writing.

  33. Hello Luke, and thanks for the write-up of your findings. Quite interesting! Although I have to agree with a lot of the methodological concerns mentioned above, your results at least give an idea of the direction of the effects.

    “Displaying validation inside form fields failed to deliver any substantial benefit.”

    Is that also true for completion time? I’m asking because there is evidence from psychological studies showing that shifting attention within a visual field is easier within objects than between them.

  34. It’s good to see that a lot of issues that we’ve been talking about are being solved by the technical progress WebGUI is making. Validation for forms is one example, FilePump allowing for easy CSS management AND speed is another.

  35. There are two main areas of concern that I have with inline validation: integration with server-side validations, and error message layout.

    1. As Justen Robertson correctly noted, client-side validations assist users in getting through forms better and faster, while server-side validations handle data correctness and security. A common scenario is to use both, but display options for when data passes inline validation yet fails server-side are still unclear.

    2. Always placing the error message to the right of the input field can be problematic. You must reserve appropriate space for the message, deal with possibly wrapping text, and make empty forms look good (e.g. fieldsets). I work with a framework that always places error messages to the right of form fields, and this results in layout frustration for clients & developers.

    Inline form validation is a good recommendation for a specific class of forms, but if you have complex (and well-tested) server-side validations, space limitations, or verbose error messages – YMMV.

  36. Working with a number of forms, I keep bouncing around the question of the most useful placement of validation messages. If the label is stacked above the input and the input field is wide, is it useful and clear to the user if the message is attached to the bottom of the input, or the top? Is it actually clearer to the user if the message is to the right of the input?
