Inline Validation in Web Forms

by Luke Wroblewski

41 Reader Comments

  1. As for any confusion caused by validation marks next to ‘easy’ questions, could this be solved by scrapping the ‘valid’ mark altogether? Jakob Nielsen touches on something similar in a column about usability mistakes in the movies (http://www.useit.com/alertbox/film-ui-bloopers.html):

    “After all, you design for authorized users. There’s no reason to delay them with a special confirmation that yes, they did indeed enter their own passwords correctly.”

    Any thoughts on this?

  2. Did you test placing the checkmarks to the left of the questions? I generally prefer them there: they line up more easily (so I can scan upward in a straight line rather than jumping around) in left-justified lists, and most lists I see have check marks to the left (much like bullets).

    I’d also like to test the theory that giving the check box three states might help: unchecked, checked, and error. That might give a quick and easy way to figure out which need attention, along with the explanation to the right. Of course you’d need a way to make sure the user doesn’t try to check the box.

    Finally, it should not come as a surprise that most people didn’t look directly at the green check. I found it quite easy to use peripheral vision to notice the green check appearing, and didn’t need to shift to it.

  3. Regarding the before, while, and after options, has anyone tested the ‘while’ option only when a user is in an invalid-marked field? (I would also suggest not using a “please wait…” indicator in this case.) To me it seems to play into the game-y aspect of filling out forms (“make it all green!“), but it can also be considered a different use case — fixing invalid data.

  4. I have seen such forms several times, but—I could not tell why—I never implemented them on my own websites. Now, reading this article, I see the advantages of inline input validation. Not only is it faster, it also reduces load on the server and, of course, on the user’s internet connection. Moreover, it is less frustrating to be notified instantly, right where the problem is, than to have to wait and then search for the incorrect input. Thank you for the article; it made me think about using this method.

  5. It’s really nice to see data like this put together. Great to have an immediate reference like this. Thanks!

  6. Really well put together piece of data. 

    We’re about to embark on a pretty major piece of e-commerce development for a pretty niche audience, and in usability terms this study of different types of validation is more than useful to me.

  7. This research is an excellent contribution. 

    Some of this basic validation can be undertaken by client-side JavaScript loaded at the same time as the page (e.g. checking a particular number type, format, or range), while other checks might require separate AJAX-style requests (e.g. whether the username is already taken). If the data is sensitive, designers and developers must be careful that the inline validation doesn’t prove to be a security weakness. For example, the username check can quite likely be abused to perform username enumeration, i.e. identifying some or many valid usernames before perhaps attempting to guess matching passwords.

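    For illustration, here’s a stripped-down sketch of such an AJAX-style username check (the /check-username endpoint and its plain-text “free”/“taken” reply are invented for this example):

        // Query a (hypothetical) endpoint when the user leaves the username field.
        // Note: the server should rate-limit requests like this, otherwise the
        // check itself becomes an efficient username-enumeration tool.
        var username = document.getElementById('username');
        var note = document.getElementById('username-note'); // a span beside the field
        username.onblur = function () {
          var xhr = new XMLHttpRequest();
          xhr.open('GET', '/check-username?u=' + encodeURIComponent(username.value));
          xhr.onload = function () {
            note.textContent = (xhr.responseText === 'free') ? 'Available' : 'Already taken';
          };
          xhr.send();
        };
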
    It is also worth being aware of the types of validation that shouldn’t use this inline approach – the password input validation is probably checking length and the characters included, not whether the correct password matching the username has been typed. So the comments about correct/valid/complete/done need to be applied consistently, and consideration given to what is displayed when the subsequent server-side REVALIDATION identifies a problem – e.g. the postcode is in the correct format but doesn’t match anything in the Royal Mail’s postcode database, or appears to be in a different town, region or country, or the user missed some previous step in a multi-stage form.

  8. This was a really good piece of advice that I had not thought of before… when a user gets a ‘valid checkmark’ after the first three fields, and suddenly the fourth doesn’t have one (even if it’s a field that doesn’t technically need to be validated), it could slow the user down. Good advice there.

    Thanks!

  9. While reading this article I came up with something I’m going to try next. It should help when using validation marks only on difficult inputs (like username, password, etc., as mentioned in the article). You can show a green valid mark when, for example, the username is correct and valid, while also changing the way the text field looks (hiding the borders, changing the background color, …). And on fields where the application can’t tell whether the data is correct (like last name or phone number), you can still change the input’s appearance without showing the green valid mark.
    What do you think about this alternative? I’ve only just come up with it…

  10. Thanks for the kind words and suggestions for further testing. One of the further explorations coming out of this research is dropping the valid indicator altogether, but that does not help with difficult inputs, where validation helps people a lot. So instead we considered changing the format/text of the indicator to something more like “that’s a valid answer” vs. “that’s a correct answer.” A small difference, but an important one.

    We didn’t place the checkmarks to the left of inputs because of the varying message sizes for error states: “too short,” “taken,” “valid,” “not secure,” etc. The different kinds of messages need some room to be displayed, hence to the right of the inputs where they can scale. If we only showed images/icons and no text, that would be an option, but then you get no help in remedying errors other than “there is an error here.”

    Not sure how a “while” could be used for only invalid fields? Do you mean after a user leaves a field in an invalid state and then comes back to it?

    You can format inputs after user input (or sometimes even while they type), but in the vast majority of cases you shouldn’t change the content of their input to make it valid. Here’s an example of the former using input masks: http://www.lukew.com/ff/entry.asp?756

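    As a toy illustration of the masking idea (not the code from that post), a mask might reformat a 10-digit phone number once the field loses focus:

        // Reformat a 10-digit phone number as (123) 456-7890 on blur.
        var phone = document.getElementById('phone'); // hypothetical field id
        phone.onblur = function () {
          var digits = phone.value.replace(/\D/g, ''); // strip non-digits
          if (digits.length === 10) {
            phone.value = '(' + digits.slice(0, 3) + ') ' +
                          digits.slice(3, 6) + '-' + digits.slice(6);
          }
        };
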
  11. Sorry for not being clear, @lukew.

    Do you mean after a user leaves a field in an invalid state and then comes back to it?

    That’s exactly what I mean. Think about a situation where an email address is missing a necessary dot or @ sign. When the user comes back to that field and adds the missing character, the validation mark could appear instantly (‘while’ the character is typed), instead of when the user focuses out of the field.

    1. The user might not focus out of the field at all. This could be the only invalid field, and they could click the submit button with the mouse, press Enter, or scroll the field entirely off-screen to move to another part of the form.
    2. The user is actually looking for this confirmation. When entering data for the first time, ‘blinking’ valid/invalid status images and changing help text while typing is distracting, confusing, and can be considered as speaking out of turn. When fixing invalid data, it appears more as prompt confirmation of what the user is doing to remedy the problem. (A bare-bones sketch of this follows below.)

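    Here is the sketch of the behaviour I have in mind (the field ids and the validity check are just placeholders):

        // Validate on blur at first; once a field has failed a check,
        // re-validate on every keystroke so the mark updates the moment it's fixed.
        function watch(field, isValid, note) {
          var fixing = false; // becomes true after the first failed check
          function check() {
            var ok = isValid(field.value);
            note.textContent = ok ? 'OK' : 'Invalid';
            if (!ok) fixing = true;
          }
          field.onblur = check;
          field.onkeyup = function () {
            if (fixing) check(); // 'while' feedback only when remedying an error
          };
        }
        // e.g. a naive email check:
        watch(document.getElementById('email'),
              function (v) { return v.indexOf('@') > -1 && v.indexOf('.') > -1; },
              document.getElementById('email-note'));
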
    Hope this clears things up — it’s a small change, but it could elevate the experience. Also, originally from @Trare Bapho:

    We didn’t place the checkmarks to the left of inputs because of the varying message sizes for error states

    This seems like a good idea: line up the checkmarks and invalid-marks on the left, but keep the textual messages on the right.

  12. Hi,

    Thanks for a great article! :)

    An idea: why not make use of the label elements to display status messages? That way you don’t have to add an extra column on the right side.

  13. Something came to my mind after my last post.

    Is it really necessary to display the valid/confirmed messages at all? It seems to me like they add further problems for the user instead of making it easier, “Are you telling me I entered a valid name or my name correctly?”

    Why not just add inline validation messages when something is wrong?

  14. Ehm, I might be missing something here, but how do I actually code this kind of behaviour into my existing form? Right now I have a pretty straightforward form on one of my websites that uses PHP to collect all the data and send it to a specific email address. In the PHP I just check whether all the necessary fields are filled in correctly.

    Now, I already knew this wasn’t the best way to do it as far as usability goes. I really understand the point this article is making, and the inline-validated forms seem to be a whole lot better, taking away the confusion while filling them in. The only problem is: how do you do this?
    This article makes an interesting read, but please: share the love!

  15. I found a nice inline form validation plugin that uses jQuery and some nice CSS3:

    http://www.position-absolute.com/articles/jquery-form-validator-because-form-validation-is-a-mess/

  16. @2: yeah, found that one as well, along with some basic JavaScript examples. So I started playing with the basic JavaScript ones, since I’m not that experienced with all this. This resulted in a small form which gets checked when the submit button is pressed. That’s already quite OK, since the form is not that big (7 fields), but I can’t get it to check each field after focusing on it, like in the movie in the article. Right now I use an onsubmit="return validate(this)" in the form tag. I also found the onblur event handler and tried that on the individual input fields, but no luck. Any ideas?

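    In case it helps anyone spot where I’m going wrong, the shape of what I’m after is roughly this (simplified; the ids and the required-field check are made up):

        // Reuse one check for both the per-field blur events and the final submit.
        function validateField(input) {
          var ok = input.value.replace(/\s/g, '') !== ''; // all my fields are required
          input.style.borderColor = ok ? '' : 'red';
          return ok;
        }
        var form = document.getElementById('contact');
        for (var i = 0; i < form.elements.length; i++) {
          var el = form.elements[i];
          if (el.type === 'text' || el.type === 'textarea') {
            el.onblur = function () { validateField(this); };
          }
        }
        form.onsubmit = function () {
          var allOk = true;
          for (var j = 0; j < this.elements.length; j++) {
            var f = this.elements[j];
            if ((f.type === 'text' || f.type === 'textarea') && !validateField(f)) {
              allOk = false;
            }
          }
          return allOk; // block submission while any field is invalid
        };
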
  17. Thanks for such a detailed and useful article, Luke. Not sure if you got into it in this test, but I’d be curious about eye-tracking with regard to the actual content of validation messages too, beyond just their placement and persistence. Did you see any patterns if validation messages included unnecessary (but maybe brand-appropriate) words like “Please” or end punctuation—is that clutter that impedes the experience? Also, did you notice cratering and folks getting hung up when the text of error messages didn’t perfectly mirror field names?

  18. Thanks for doing the research on this. I’m just on the verge of releasing a handy little jQuery plugin that will do form validation much more simply than most drop-in options (just tag your form fields with CSS classes and load the script at the top). I’m happy to have learned that most of my assumptions were correct :)

    One thing that people should be aware of is that it is imperative to use server-side validation as well. If you rely on the browser to make sure your users are giving you valid input, any hacker worth his salt will figure this out in 2 minutes flat and twiddle your script. Inline validation should never be considered anything but an aid for your users – it is not a security measure. I’m sure that’s a given for most readers but it’s worth reiterating :)

  19. I was actually planning to start adding real-time validation to email fields. This article has given me the push I needed!

  20. Luke, did you try applying some “typing speed detection” method, so that ‘while’ checking could be given a new lease of life?

    I’ve thought a few times about applying this to a project I’m working on, but haven’t done it yet. It would be great to have some figures proving this method out. Maybe you have some other information about this.

    Regards, Anton.

  21. Hi Luke

    Thanks for adding to the knowledge base regarding form validation, an oft-neglected part of the form-filling experience. The more data we can get, the better (although the statistician in me would feel much more comfortable if, in the future, you had a sample of at least 30 people!).

    You’re spot on in your article when you talk about how things like ambiguous interpretation of the tick symbol make the development of best practice recommendations for inline validation difficult.

    As I see it, the one really clear cut case for inline validation is when users have no idea whether their answer will be accepted or not. The username question is a typical example—nothing the user intuitively knows will give them any indication of whether their answer will “pass”. It’s these cases that lead to pogosticking.

    For all other cases, inline validation may cause more problems than it solves (as you’ve seen). Also, the “gain” is not as significant. Fixing a typo in a postcode is going to take marginally longer after server-side validation than after inline validation but if the error messaging is done well, the impact on the total user experience is likely to be very slim.

    Just my 2c.

    Cheers
    Jessica

    PS – Any thoughts on what the user experience might be like using inline validation in a more complex form? Helpful? Distracting? Noisy?

  22. Firstly, this is a very interesting article. Thanks for sharing this information.

    While the results make absolute sense, I have to question your methodology. I think perhaps your findings might have been even more decisive if you’d chosen a different approach.

    You see, if you’re testing a form (and particularly looking at errors) then it doesn’t make sense to show the same person multiple versions of the same form.

    If you take 22 people and have 5 forms, then (with randomisation) each version is seen by only 4 or 5 participants who have not already seen the form (albeit with different validation). The other 17 or so times, the form is shown to a participant who will have completed it before: at least once, and up to 4 times.

    So if I make an error with the 1st form, I’m less likely to make it again with the 2nd when I see the exact same question. I’m certainly not going to make the same mistake 5 times in a row.

    I understand that this would mean increasing the numbers considerably, but I don’t think that simply randomising the order the participants see the forms is an adequate substitute. Particularly if you want to start quoting figures on the results.

    Incidentally, are the forms available? In order to get your numbers perhaps you could do a crowd sourced usability study with a number of volunteer test facilitators?

  23. David Hamill makes an important point about randomisation. The problem is that with a within subjects design (where each participant sees all of the forms), no amount of randomisation is going to avoid bias. Here’s why.

    As a participant, you will quickly learn, on the first form, which input is appropriate. For example, Luke writes that the password field had “strict formatting requirements”. Let’s say it needed to include a number and an upper case letter. The participant will learn this on the first form and then just use the same password (or the same rule) on all subsequent forms. So I’d predict that participants made few, if any, errors on forms they saw after the first one.

    That seriously undermines the stats in the article. “When compared to our control version, the inline validation form with the best performance showed compelling improvements across all the data we measured.” Which form is “the inline validation form with the best performance”? Is this the form that did best overall (e.g. inline validation form 3), or the inline validation form that did best for each participant (which could have been a different form for each participant)? If the latter, then this will almost certainly be the 4th or 5th form as participants are really in the swing of it by then.

    And although Luke says, “We presented each form randomly”, does this include the control form or was it just the inline validation forms that were randomised? This is important, because:

    - If the control form was always presented first, you could get the results Luke reports because of a learning effect.
    - If the control form was always presented last, the results could be due to a fatigue effect.
    - If the control form was presented randomly, then 3-4 of the participants got it last (1 in 6). The other 18-19 participants had an inline validation form AFTER the control form, and so they would benefit from any learning effect.

    No amount of randomisation will control this bias. It needs a between subjects design.

    Nevertheless, it’s still an interesting piece of work!

  24. David Hamill and David Travis make very interesting and salient points about the methodology.

    While we’re on the topic, I thought I would add a slight concern about the use of percentages to describe the improvements in performance from inline validation compared to the control, i.e., “22% increase in success rates,” “22% decrease in errors made,” etc.

    Given the relatively simple nature of the form, I would imagine that the number of failures, errors etc would be very small (e.g. less than 10). In such cases, percentages can be misleading as a change of one or two in the raw numbers will lead to large changes in percent.

    Would it be possible to provide the raw numbers, rather than percentages? It would also be interesting to know how much that improvement was down to the “pogosticking” effect.

    Also, could you please explain how you measured “errors made” on the inline validation version? For example, did you count an entry that was changed—in response to the inline validation, but before submitting the form—as an error?

  25. The approach I have been using on my own sites for the past four years is the following:

    1) Do not distract users with ‘approval’ signs (as suggested by Reverend Duck in comment #1 and eyeMac in #13). Only show feedback when something is wrong. As the article shows, in many cases the tick has a disturbing effect on the user, and showing it only where it is actually useful (e.g. to avoid the pogosticking issue of username fields) introduces inconsistencies in the form’s behaviour (‘if this is the only field that gets approved, is anything wrong with the other ones?’). An additional benefit is reduced visual clutter in the form.

    2) Always show error messages on blur only (after the user leaves the field), but once they are there, you may (as yuval suggests in #3 and #11) remove them on key press (while typing). However, if an ajax round trip to the server is involved (e.g., again, when checking whether a username is taken), the delay might have a confusing effect if the error message only disappears after your input has already changed again, so I’d limit this kind of feedback to validations that can be performed directly on the client and are immediate, e.g. checking that two passwords match. But then we again have a problem of inconsistency.

    Another possible approach that would be interesting to consider is what we might call a ‘delayed while AND after’, which would be triggered on blur as usual, but also whenever there is a sufficiently long pause between key presses. This can easily be implemented by calling the validation function on every key press, but behind a timeout of a convenient fraction of a second: a subsequent key press first cancels the previous timeout and then issues a new one with the updated input, so the timeout only runs out, and the validation function is only invoked, if there is a sufficient pause between key presses. This would have the benefit of leaving the user alone while he types and reacting when he hesitates. (Sketched below.)

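    The timer logic itself is tiny; an untested sketch (the validate callback and the delay are whatever suits the field):

        // Run `validate` only after the user pauses typing for `delay` ms.
        // Each key press cancels the pending check and schedules a fresh one.
        function delayedValidation(field, validate, delay) {
          var timer = null;
          field.onkeyup = function () {
            clearTimeout(timer);
            timer = setTimeout(function () { validate(field); }, delay);
          };
          field.onblur = function () { // the usual 'after' trigger still applies
            clearTimeout(timer);
            validate(field);
          };
        }
        // e.g. delayedValidation(usernameField, checkAvailability, 500);
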
    While in past years I have relied heavily on this technique for ‘autocomplete’ search fields, to avoid excessive screen flickering, I had never until now thought of using it for validation. However, because you are no longer responding to the user while he is typing fast, this might be a way to confidently extend the ‘remove error messages while typing’ approach mentioned above to validations requiring an ajax request, and thereby solve the inconsistency issue.

    Such an approach could become quite sophisticated if you throw in the suggestion by small_jam (#20), and detect the user’s typing speed and adapt the validation delay accordingly.

    (Sorry for the überlong comment)

  26. Hi Luke,

    Very interesting article, thank you. My question: when showing success messages next to each answer, how do you handle questions that have a default answer pre-selected (something we strive for wherever possible)?

    I was considering showing checkmarks next to each answer in a large-scale financial system that has forms everywhere, but eventually it raised too many consistency questions.

    Bye, Zoli

  27. I love it.  I’m going to have to start using inline validation from now on.

  28. Real-life studies of usability on the web are still very small in number, probably due to the time and cost that go into them, so this is a very welcome addition. Very clear article as well!

    I know that adding a few numbers or percentages is very tempting, but with 22 people as guinea pigs it is better to leave them out; methodologists (like me) are like sharks smelling blood then :-) The observations you describe in this article are actually much, much more useful than any number or percentage.

  29. I’ll add my voice to those saying that you don’t need to mark an entry with a tick.

    Firstly, you reduce the confusion as to whether a tick means “field complete”, “field syntactically valid” or “field correct”. Secondly, your eye-tracking study showed that people didn’t focus on the green ticks at all. They aren’t adding anything to the party.

    I know how to fill in a form. Most people do. Yes, sometimes I will make a mistake, and when I do, it’s useful to be notified of that if the form or server can tell that I have done – but I really don’t need to be told when I have successfully typed a name into a box marked “Name:”

    I’d also agree that, on initial entry, an error message should only be displayed onBlur, but on re-entry, it should be amended onKeyUp, to ensure users can see when they have entered a correct format.

    (Of course, if you’re going for the easy option and using HTML validation rather than JavaScript, you probably won’t have such fine control.)

  30. Thanks Luke for an interesting article. I’m just wondering how would inline validation work for someone with a severe vision impairment. Have you done any testing of these forms with people who use screen readers or screen magnifiers?

  31. That’s nice, but will those forms be accessible?
    Why not break the form down, showing one field at a time?
    That way you don’t confuse the user with a lot of information at once, you can give feedback for each field individually, and you won’t have problems with lack of space. You can even give some help on how to fill in each field.

  32. Thanks all for the comments. Just catching up on all of the new ones!

    @yuval
    Only having the validation appear when someone comes back to the field would mean either they figured out there was an error themselves OR we told them. If we told them, then the potential for inline validation might be missed since we want to correct errors as people complete the form.

    To your second point: why separate the icons from the messages? They are part of the same communication to the user. Depending on input field format & length, they could end up a good distance apart, which would not be good!

    @eyeMac
    Labels keep the question people need to answer front and center. I would not override them with validation messages. Also, at the point someone is filling in the input field, they have moved past the label. Below the field might be an option, but then the form would jump up and down as messages appear. Not good to “bounce” the UI like that, as it disorients users.

    As for “Are you telling me I entered a valid name or my name correctly?”: this was addressed in the article! :)

    @Margot Bloomstein
    We didn’t get into any analysis of the messaging text in this study. Lots of other great studies on how to write up error messages out there!

    @small_jam
    Yep. The user name and password fields both have time-tuned delays on the messages we return. There was a good amount of exploration on when to trigger the feedback.

  33. @Jessica
    “Fixing a typo in a postcode is going to take marginally longer after server-side validation than after inline validation but if the error messaging is done well, the impact on the total user experience is likely to be very slim.”

    Actually, I don’t agree with this. It takes quite a bit longer to submit, re-render the page, have the user notice the error, locate where on the page it is, resolve it, re-enter any “protected” fields that were wiped out on resubmit, and then hit submit again. Much more efficient to resolve errors as the form is being completed!

    “Any thoughts on what the user experience might be like using inline validation in a more complex form?”
    Without knowing what you mean by a more complex form (more questions? hard questions? domain expertise required?), I’d have to assume it would be more useful. If something is harder to fill in, helping people fill it in correctly should be even more useful!

    @DavidHamill
    “So if I make an error with the 1st form, I’m less likely to make it again”
    I’ll let the Etre guys (they are the usability testing experts) comment on this, but we accounted for it by compressing the available namespace for user ID selection so that each person did not get their first selection. Hence there was consistency in the error experience across all forms regardless of order; getting things right later, as you suggest, was therefore not a factor in whether or not they experienced the different validation formats.

    @dtravisphd
    “So I’d predict that participants made few, if any, errors on forms they saw after the first one.”
    Not true, as I explained above. We forced username saturation errors each time. So each participant had the error formats come up.

    “does this include the control form or was it just the inline validation forms that were randomised”
    Yes, the control form was included in the randomization.

    @reflekt
    If there is a default answer then no need to validate it. As we saw in the study certain fields do better with inline validation than others :)

    @Iza
    We did not test with vision-impaired users in this study. However, the input field responsible for the error uses a “double visual emphasis” to stand out from the rest of the form elements. In this case, the message is red, and we’ve added red instructions just to the right of the input field. This doubled-up approach is important because simply changing the text color of the label might not be enough to be noticed by color-blind people. So to ensure that everyone knows where an error happened, we double the visual emphasis. We could also have opted for an icon and red text, or a background color and instructions, to highlight the inputs responsible for the error. Any form of double emphasis could work.

  34. @lukew

    I didn’t mean that we should override the label with new text. I was thinking more of adding some kind of icon beside it, and maybe extending the original text with a message.

  35. Hello everyone. I’m Simon from Etre — the user experience company that assisted Luke with this study. Luke asked me if I would stop by and address some of the comments regarding the methods we employed in evaluating the six designs, so here goes (gulp)…

    Why did you use 20 users and not more?
    We did use more! We used 22 users ;-)

    Joking aside, the simple answer is: time and budget. We tested with 20+ users because this is the minimum number required to obtain “statistically useful” metrics. (As Jakob Nielsen says, “when collecting usability metrics, testing 20 users typically offers reasonably tight confidence levels” – http://www.useit.com/alertbox/quantitative_testing.html.) We would have loved to involve more users in order to improve the reliability of the metrics we recorded but, unfortunately, our schedule and budget just wouldn’t stretch to it. Therefore (and as per almost all other usability studies), all metrics quoted in Luke’s most excellent write-up should be considered indicative as opposed to definitive.

    Perhaps we should all band together and petition one of the larger usability companies like UIE or NNG to conduct a more extensive study? I suspect that they wouldn’t be prepared to make the results available for free however, given the level of investment required. (I like @DavidHamill’s idea of conducting a “crowd sourced usability study with a number of volunteer test facilitators”; however, this creates certain methodological issues of its own.)

    In terms of publishing the stats we recorded in the form of raw numbers (in addition to percentages), this sounds like a good idea to me. Luke?

    Why did you show users multiple versions of the same form (Comments #22-24)?
    Again, this was an issue of time and budget. While we would have liked to show each user a single variation of the form — thereby eliminating the sort of “familiarity bias” that @DavidHamill describes in comment #22 above — testing each of the six designs with 20 users apiece would have been impractical. (It would have taken a month or so just to complete the testing, never mind the analysis, write-up, etc.; and would have cost thousands of pounds in incentives for users to participate… and that’s without factoring in the cost of user recruitment, staffing the project, etc.!)

    Another option would have been to reduce the number of variations we tested (and to have tested these variations with 20 users apiece); however, we would then have had to sacrifice the “completeness” of the study. This is something we weren’t keen to do, as we thought it important to test things like the “after”, “while” and “before and while” phasing of the validation; the various different placements of the resulting validation messages; and to include a control version — which is something we wouldn’t have been able to do had we slashed the number of variations to, say, two.

    Personally speaking, I’m not sure how detrimental “familiarity bias” was in the context of this particular study. Since we were testing a standard registration form — the kind found on almost all B2C / B2B / B2E websites — our users were likely to be very familiar with its constituent form elements (and their associated input requirements) before taking part in the testing, and therefore weren’t learning much that they didn’t already know about these elements during our testing (i.e. as they progressed through the six variations of the design). In other words, they most likely already knew how to enter their name, address and contact details using the types of form elements we provided; and, as for username and password entry — the “trickiest” of the form elements they encountered — the requirements of these fields were not only similar to those of many other sites but were also explicitly spelt out directly beneath the fields in question. The only thing our users weren’t familiar with was the type of validation employed and, since this differed from variation to variation, they were unable to become familiar with it (which is exactly as we needed things to be in order to assess it).

    As a result of this, the study can be said to have been designed to determine whether the inclusion of inline validation made users more efficient in filling out an already familiar form (or whether it only served to distract them). It was thus not designed to determine whether the inclusion of inline validation made users more efficient in filling out an unfamiliar form. (In this case, “familiarity bias” would have been a real issue, as users would have become more familiar with the form’s constituent elements with each new variation they encountered and this, accordingly, would have had a great effect on the results).

    I’m not saying that some “familiarity bias” isn’t present in the results of our study — clearly, users would have become more familiar with the general design and layout of the form as they worked their way through the six variations — but I do think we took reasonable steps to minimise it, bearing in mind the constraints of the project.

    Were all variations presented randomly? (Comment #23)
    Yes. All variations — including the control form — were randomised.

    To be clear here, “randomised” means that each variation appeared an equal number of times in each position on the testing schedule (or as near to that as was possible, given that 22 users isn’t neatly divisible by 6 variations). For example, Variation A was shown as the first stimulus to three participants, as the second stimulus to another three participants, and so on. (Note: we also took care to ensure that, for example, Variations A and B were not shown one after the other to every single user.)

    Could you please explain how you measured “errors made” in the inline validation variations? (Comment #24)
    In all six variations — i.e. including the control version — “errors made” was the number of errors returned after users clicked the Submit button at the bottom of the form.

    That is, “errors made” does not take into account the errors made by users that were caught by the inline validation mechanisms and subsequently corrected by users prior to their clicking the Submit button.

    One last thing…
    If you’re interested in this type of research, please subscribe to our newsletter (http://www.etre.com/subscribe/), where we regularly publish usability-related insights gleaned from our work.

    Oh, and thanks for all the great comments on this study :)

  36. Forms are always a difficult area, and are one of the places within a website where you can very easily annoy users. Great piece of information, thanks for writing.

  37. Hello Luke, and thanks for the write-up of your findings. Quite interesting findings! Although I have to echo a lot of the methodological concerns mentioned above, your results at least give an idea of the direction of the effects.

    “Displaying validation inside form fields failed to deliver any substantial benefit.”

    Is that also true for completion time? I’m asking because there is evidence from psychological studies showing that shifting attention within a visual field is easier within objects than between them.

  38. It’s good to see that a lot of the issues we’ve been talking about are being solved by the technical progress WebGUI is making. Validation for forms is one example; FilePump, allowing for easy CSS management AND speed, is another.

  39. I’ve created a jQuery plugin (http://plugins.jquery.com/project/jqueryvalidate) that validates form inputs based on the research in this article.

    As a live example, the plugin is used in the Cytoscape Web project’s contact form (http://cytoscapeweb.cytoscape.org/contact).

    I like the research done in this article, especially since it advocates quick feedback and visibility when the user provides input.  Good work!

    Cheers,
    Max

  40. There are two main areas of concern that I have with inline validation: integration with server-side validations, and error message layout.

    1. As Justen Robertson correctly noted, client-side validations assist users in getting through forms better and faster, while server-side validations handle data correctness and security. A common scenario is to use both – but display options for when data passes inline validation yet fails server-side are still unclear.

    2. Always placing the error message to the right of the input field can be problematic. You must reserve appropriate space for the message, deal with possibly wrapping text, and make empty forms look good (e.g. fieldsets). I work with a framework that always places error messages to the right of form fields, and this results in layout frustration for clients & developers.

    Inline form validation is a good recommendation for a specific class of forms, but if you have complex (and well tested) server-side validations, space-limitations, or verbose error messages – YMMV.

  41. Working with a number of forms, I keep bouncing around the question of the most useful placement for validation messages. If the label is stacked above the input, and the input field is wide, is it more useful and clear to the user if the message is attached to the bottom of the input or the top? Or is it actually clearer if the message is to the right of the input?
