Inline Validation in Web Forms

by Luke Wroblewski

41 Reader Comments

  1. That’s nice, but will those forms be accessible?
    Why not break down the form, showing one field at a time?
    You don’t confuse the user with a lot of information at once, and you can give feedback for each field individually; you also won’t have problems with lack of space. You can even give some help about how to fill in each field.
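
    A minimal sketch of that one-field-at-a-time idea in TypeScript, assuming a hypothetical "step" wrapper class around each field (not something from the article):

      // One-field-at-a-time sketch: hide every step except the current one.
      // Assumes each field sits in a wrapper with a hypothetical "step" class.
      const steps = Array.from(document.querySelectorAll<HTMLElement>("form .step"));
      let current = 0;

      function showStep(index: number): void {
        steps.forEach((step, i) => {
          step.hidden = i !== index; // only the active field stays visible
        });
      }

      document.querySelector("#next")?.addEventListener("click", () => {
        if (current < steps.length - 1) showStep(++current);
      });

      showStep(current); // start with the first field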

  2. Thanks all for the comments. Just catching up on all of the new ones!

    @yuval
    Only having the validation appear when someone comes back to the field would mean either they figured out there was an error themselves OR we told them. If we told them, then the potential benefit of inline validation might be missed, since we want to correct errors as people complete the form.

    To your second point: why separate the icons from the messages? They are part of the same communication to the user. Depending on the input field’s format and length, they could end up a good distance apart, which would not be good!

    @eyeMac
    Labels keep the question people need to answer front and center. I would not override them with validation messages. Also, by the time someone is filling in the input field, they have moved past the label. Below might be an option, but then the form would jump up and down as messages appear. It’s not good to “bounce” the UI like that, as it disorients users.

    “Are you telling me I entered a valid name or my name correctly?” This was addressed in the article! :)

    @Margot Bloomstein
    We didn’t get into any analysis of the messaging text in this study. Lots of other great studies on how to write up error messages out there!

    @small_jam
    Yep. The user name and password fields both have time-tuned delays on the messages we return. There was a good amount of exploration on when to trigger the feedback.
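
    For anyone curious, here is a minimal sketch of what such a time-tuned delay could look like; checkUsername() and the 500 ms delay are illustrative assumptions, not the values actually used in the study:

      // Debounced inline validation: wait until the user pauses before validating.
      // checkUsername() is a hypothetical availability check, and 500 ms is an
      // illustrative delay, not the study's tuned value.
      const input = document.querySelector<HTMLInputElement>("#username")!;
      const message = document.querySelector<HTMLElement>("#username-message")!;
      let timer: number | undefined;

      input.addEventListener("input", () => {
        window.clearTimeout(timer);           // reset the delay on every keystroke
        timer = window.setTimeout(async () => {
          const ok = await checkUsername(input.value);
          message.textContent = ok ? "Username available" : "Username taken";
        }, 500);
      });

      async function checkUsername(name: string): Promise<boolean> {
        const res = await fetch(`/check-username?name=${encodeURIComponent(name)}`);
        return (await res.json()).available;  // assumed response shape
      }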

  3. @Jessica
    “Fixing a typo in a postcode is going to take marginally longer after server-side validation than after inline validation but if the error messaging is done well, the impact on the total user experience is likely to be very slim.”

    Actually, I don’t agree with this. It takes quite a bit longer to submit, re-render the page, have the user notice the error, locate where on the page it is, resolve it, re-enter any “protected” fields that were wiped out on resubmit, and then hit submit again. It’s much more efficient to resolve errors as the form is being completed!

    “Any thoughts on what the user experience might be like using inline validation in a more complex form?”
    Without knowing what you mean by a more complex form (more questions? harder questions? domain expertise required?), I’d have to assume it would be more useful. If something is harder to fill in, helping people fill it in correctly should be more useful!

    @DavidHamill
    “So if I make an error with the 1st form, I’m less likely to make it again”
    I’ll let the Etre guys (they are the usability testing experts) comment on this, but we accounted for it by compressing the available namespace for user ID selection so that each person did not get their first choice of user ID. Hence there was consistency in the error experience across all forms, regardless of order. Therefore, getting things right later, as you suggest, was not a factor in whether or not they experienced the different validation formats.

    @dtravisphd
    “So I’d predict that participants made few, if any, errors on forms they saw after the first one.”
    Not true, as I explained above. We forced username-saturation errors each time, so each participant encountered the error formats.

    “does this include the control form or was it just the inline validation forms that were randomised”
    Yes, the randomization included the control form as well as the inline validation forms.

    @reflekt
    If there is a default answer, then there’s no need to validate it. As we saw in the study, certain fields do better with inline validation than others :)

    @Iza
    We did not test with vision-impaired users in this study. However, the input field responsible for the error uses a “double visual emphasis” to stand out from the rest of the form elements. In this case, the message is red, and we’ve added red instructions just to the right of the input field. This doubled-up approach is important because simply changing the text color of the label might not be enough to be noticed by color-blind people. So to ensure that everyone knows where an error happened, we double the visual emphasis. We could also have opted for an icon and red text, or a background color and instructions, to highlight the inputs responsible for the error. Any form of double emphasis could work.
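
    A rough sketch of how that doubled-up emphasis might be wired; the class names and the postcode field here are hypothetical, not from the study:

      // "Double visual emphasis" sketch: a red border on the field plus red
      // instruction text to its right, so the error isn't signalled by one cue alone.
      // The class names and the #postcode field are hypothetical.
      function markInvalid(input: HTMLInputElement, hint: string): void {
        input.classList.add("field-error");      // styled with, e.g., a red border
        const note = document.createElement("span");
        note.className = "field-error-note";     // styled with, e.g., red text
        note.textContent = hint;                 // instructions, right of the field
        input.insertAdjacentElement("afterend", note);
      }

      markInvalid(
        document.querySelector<HTMLInputElement>("#postcode")!,
        "Please enter a valid postcode."
      );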

  4. @lukew

    I didn’t mean that we should override the label with new text. I was thinking more of adding some kind of icon beside it, and maybe extending the original text with a message.

  5. Hello everyone. I’m Simon from Etre — the user experience company that assisted Luke with this study. Luke asked me if I would stop by and address some of the comments regarding the methods we employed in evaluating the six designs, so here goes (gulp)…

    Why did you use 20 users and not more?
    We did use more! We used 22 users ;-)

    Joking aside, the simple answer is: time and budget. We tested with 20+ users because this is the minimum number required to obtain “statistically useful” metrics. (As Jakob Nielsen says, “when collecting usability metrics, testing 20 users typically offers reasonably tight confidence levels”: http://www.useit.com/alertbox/quantitative_testing.html.) We would have loved to have involved more users in order to improve the reliability of the metrics we recorded but, unfortunately, our schedule and budget just wouldn’t stretch to it. Therefore (and as per almost all other usability studies) all metrics quoted in Luke’s most excellent write-up should be considered indicative as opposed to definitive.

    Perhaps we should all band together and petition one of the larger usability companies like UIE or NNG to conduct a more extensive study? I suspect that they wouldn’t be prepared to make the results available for free however, given the level of investment required. (I like @DavidHamill’s idea of conducting a “crowd sourced usability study with a number of volunteer test facilitators”; however, this creates certain methodological issues of its own.)

    In terms of publishing the stats we recorded in the form of raw numbers (in addition to percentages), this sounds like a good idea to me. Luke?

    Why did you show users multiple versions of the same form (Comments #22-24)?
    Again, this was an issue of time and budget. While we would have liked to have shown each user a single variation of the form — thereby eliminating the sort of “familiarity bias” that @DavidHamill describes in comment #22 above — testing each of the six designs with 20 users would have been impractical. (It would have taken a month or so just to complete the testing, never mind the analysis, write-up, etc.; and it would have cost thousands of pounds in incentives for users to participate… and that’s without factoring in the cost of user recruitment, staffing the project, etc.!)

    Another option would have been to reduce the number of variations we tested (and to have tested these variations with 20 users apiece); however, we would then have had to sacrifice the “completeness” of the study. This is something we weren’t keen to do, as we thought it important to test things like the “after”, “while” and “before and while” phasing of the validation; the various different placements of the resulting validation messages; and to include a control version — which is something we wouldn’t have been able to do had we slashed the number of variations to, say, two.

    Personally speaking, I’m not sure how detrimental “familiarity bias” was in the context of this particular study. Since we were testing a standard registration form — the kind found on almost all B2C / B2B / B2E websites — our users were likely to be very familiar with its constituent form elements (and their associated input requirements) prior to taking part in the testing — and therefore weren’t learning much that they didn’t already know about these elements during our testing (i.e. as they progressed through the six variations of the design). In other words, they most likely already knew how to enter their name, address and contact details using the types of form elements we provided; and, as for username and password entry — the “trickiest” of the form elements they encountered — the requirements of these fields were not only similar to those of many other sites but were also explicitly spelt out directly beneath the fields in question. The only thing our users weren’t familiar with was the type of validation employed and, since this was different from variation to variation, they were unable to become familiar with it (which is exactly as we needed things to be in order to assess it).

    As a result of this, the study can be said to have been designed to determine whether the inclusion of inline validation made users more efficient in filling out an already familiar form (or whether it only served to distract them). It was thus not designed to determine whether the inclusion of inline validation made users more efficient in filling out an unfamiliar form. (In this case, “familiarity bias” would have been a real issue, as users would have become more familiar with the form’s constituent elements with each new variation they encountered and this, accordingly, would have had a great effect on the results).

    I’m not saying that some “familiarity bias” isn’t present in the results of our study — clearly, users would have become more familiar with the general design and layout of the form as they worked their way through the six variations — but I do think we took reasonable steps to minimise it, bearing in mind the constraints of the project.

    Were all variations presented randomly? (Comment #23)
    Yes. All variations — including the control form — were randomised.

    To be clear here, “randomised” means that each variation appeared an equal number of times in each position on the testing schedule (or as near to that as was possible, given that 20 users isn’t neatly divisible by 6 variations). For example, Variation A was shown as the first stimulus to three participants, as the second stimulus to another three participants, and so on. (Note: We also took care to ensure that — for example — Variations A and B were not shown one after the other to every single user.)
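
    For illustration only, here is one way such a schedule could be generated, sketched in TypeScript; a plain rotation like this is not necessarily the scheme we actually used:

      // Rotate the six variations so each appears in each schedule position
      // roughly equally often across 20 participants. Note a bare rotation keeps
      // neighbours fixed (B always follows A), so a real schedule, like Etre's,
      // would also vary adjacency; this is illustrative only.
      const variations = ["A", "B", "C", "D", "E", "F"];

      function scheduleFor(participant: number): string[] {
        const offset = participant % variations.length;
        return variations.map((_, i) => variations[(i + offset) % variations.length]);
      }

      for (let p = 0; p < 20; p++) {
        console.log(`Participant ${p + 1}: ${scheduleFor(p).join(", ")}`);
      }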

    Could you please explain how you measured “errors made” in the inline validation variations? (Comment #24)
    In all six variations — i.e. including the control version — “errors made” was the number of errors returned after users clicked the Submit button at the bottom of the form.

    That is, “errors made” does not include the errors that were caught by the inline validation mechanisms and corrected by users before they clicked the Submit button.

    One last thing…
    If you’re interested in this type of research, please subscribe to our newsletter (http://www.etre.com/subscribe/), where we regularly publish usability-related insights gleaned from our work.

    Oh, and thanks for all the great comments on this study :)

  6. Forms are always a difficult area, and are one of the places within a website where you can very easily annoy users. Great piece of information; thanks for writing.

  7. Hello Luke, and thanks for the write-up of your findings. Quite interesting. Although I’d have to subscribe to a lot of the methodological concerns mentioned before, your results at least give an idea of the direction of the effects.

    “Displaying validation inside form fields failed to deliver any substantial benefit.”

    Is that also true for completion time? I’m asking because there is evidence from psychological studies showing that shifting attention within a visual field is easier within objects than between them.

  8. It’s good to see that a lot of the issues we’ve been talking about are being solved by the technical progress WebGUI is making. Validation for forms is one example; FilePump, allowing for easy CSS management AND speed, is another.

  9. I’ve created a jQuery plugin (http://plugins.jquery.com/project/jqueryvalidate) that validates form inputs based on the research in this article.

    As a live example, the plugin is used in the Cytoscape Web project’s contact form (http://cytoscapeweb.cytoscape.org/contact).

    I like the research done in this article, especially since it advocates quick feedback and visibility when the user provides input.  Good work!

    Cheers,
    Max

  10. There are two main areas of concern that I have with inline validation: integration with server-side validations, and error message layout.

    1. As Justen Robertson correctly noted, client-side validations help users get through forms better and faster, while server-side validations handle data correctness and security. A common scenario is to use both, but the display options for when data passes inline but fails server-side are still unclear.

    2. Always placing the error message to the right of the input field can be problematic. You must reserve appropriate space for the message, deal with possibly wrapping text, and make empty forms look good (e.g. fieldsets). I work with a framework that always places error messages to the right of form fields, and this results in layout frustration for clients & developers.

    Inline form validation is a good recommendation for a specific class of forms, but if you have complex (and well-tested) server-side validations, space limitations, or verbose error messages, YMMV.
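
    A rough sketch of one way to reconcile the two concerns in my first point: map server-side field errors back onto the form after inline checks have already passed. The error-response shape ({ errors: { fieldName: message } }) and class names are assumptions, not a standard:

      // When inline checks pass but the server still rejects the data, map the
      // server's field errors back onto the form so they display like inline ones.
      // The response shape and class names here are assumptions.
      async function submitWithServerErrors(form: HTMLFormElement): Promise<void> {
        const res = await fetch(form.action, { method: "POST", body: new FormData(form) });
        if (res.ok) return; // server accepted the data

        const { errors } = (await res.json()) as { errors: Record<string, string> };
        for (const [name, message] of Object.entries(errors)) {
          const field = form.elements.namedItem(name);
          if (field instanceof HTMLInputElement) {
            field.classList.add("field-error");   // reuse the inline error styling
            const note = document.createElement("span");
            note.className = "field-error-note";
            note.textContent = message;           // shown beside the field
            field.insertAdjacentElement("afterend", note);
          }
        }
      }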

  11. Working with a number of forms, I’ve been bouncing around a question about the most useful placement of validation messages. If the label is stacked above the input and the input field is wide, is it useful and clear to the user if the message is attached to the bottom of the input, or the top? Or is it actually clearer if the message is to the right of the input?
