Progressive Enhancement with JavaScript
Issue № 271

If you’ve read the first two articles in this series, you should be starting to get into the progressive enhancement groove. In this article we are going to discuss how to apply the progressive enhancement philosophy to client-side scripting. As you will soon see, it’s all about two things: restraint and planning.

Wield your power wisely#section1

You’ve probably heard the phrase “power corrupts”. There’s more to it, but for our purposes, let’s stick with those two simple words. JavaScript is an incredibly powerful tool, and for too long it was a corruptive force on the web. It threw up roadblocks, error messages, and way too many pop-up windows for web surfers. It was also greatly misunderstood, which probably contributed to its abuse, and in practice it was akin to a dark art.

Not only was JavaScript doing more harm than good, it had also become unruly. Beneath the surface, it was a twisted rat’s nest of code that caused all but the most determined to run screaming; maintenance was a nightmare because of the proliferation of convoluted and often cryptic code forking.

At the time, JavaScript really was ugly by necessity: browsers had yet to implement decent standards support, and we developers were busy writing spaghetti code of our own on the HTML side. JavaScript had to jump through a lot of hoops to accomplish anything with cross-browser compatibility, even something as simple as an image rollover.

Thankfully, we’re in a better place on both counts now and can finally make our JavaScript a lot cleaner. Still, we have to respect its power and act responsibly. We need to concern ourselves as much with how JavaScript should be used as with what it can do—perhaps more. We need to exercise restraint. Progressive enhancement helps us to do that because it forces us to focus on the content and build out from there.

Establishing a baseline#section2

With progressive enhancement, we build sites on a foundation of usable code. The key JavaScript concept to keep in mind is that any content users need in order to understand the purpose of the page should exist in that page even in the absence of client-side scripting. Period.

An example: Perhaps the content in question is a comparison table for the products you sell. If the site requirements dictate that the data needs to be sortable by column, you might consider loading the table into the page via Ajax, so you can re-sort it on the server side at a moment’s notice. Sounds perfect, right?

Wrong.

What happens when potential customers visit the page without JavaScript enabled? If the content is loaded into the page using JavaScript, they have no access to that content at all, even in its unsorted state. How likely do you think they’ll be to make a purchase if they can’t even see the products?

The above scenario doesn’t even address the ramifications for search. Search engine spiders don’t execute JavaScript, so if you use JavaScript to load content into your page, they will never read or index your content. How many potential customers will you lose if your product information can’t be found and indexed by Google, Microsoft, or Yahoo?

Approaching the same requirements with progressive enhancement in mind, you would include the basic table in the markup. In most cases it could still be generated by the back end, but it would be embedded directly in the page rather than loaded via Ajax. You could still write a script to find the table in the DOM and make it interactive, generating sorting links and wiring their onclick events to Ajax calls for a re-sorted version of the table.

Approaching the challenge in this way, you have not only met the requirements, but you have also provided a “lo-fi” experience for search engine spiders and users without JavaScript.

Taking it a step further, you could even add the sorting links into the table headers manually and have them refresh the page, passing variables to re-sort the table accordingly. That would enable non-JS users to re-sort the data too, giving them a slightly less responsive, but still fully functional “hi-fi” experience.
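The fallback markup for one of those header links might look something like the following; the `sort` query-string parameter is illustrative, and any server-side convention would work:

```html
<th scope="col">
  <!-- Reloads the page with the table re-sorted on the server -->
  <a href="/products?sort=price">Price</a>
</th>
```

Because the link is an ordinary href, the re-sort works with or without scripting.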

A few simple tweaks in your script would then allow you to hijack those links to perform your Ajax requests as before, delivering the best experience to the most capable users. In the end, you have a perfect example of progressive enhancement in action.
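A sketch of that hijacking step might look like the code below. Note that the `products` id, the `format=fragment` convention, and the `ajaxLoad()` helper are all illustrative stand-ins, not part of the article:

```javascript
// Hypothetical helper: turn a header link's href into the URL the Ajax
// call should request. Keeping it pure means the same server-side URL
// drives both the full-page fallback and the Ajax version.
function sortedTableUrl(href, format) {
  var sep = href.indexOf('?') === -1 ? '?' : '&';
  return href + sep + 'format=' + encodeURIComponent(format);
}

// Hijack the header links only when the browser can cooperate.
if (typeof document !== 'undefined' && document.getElementById) {
  var table = document.getElementById('products');
  if (table) {
    var links = table.getElementsByTagName('a');
    for (var i = 0; i < links.length; i++) {
      links[i].onclick = function () {
        // ajaxLoad() is a stand-in for whatever fetches and swaps in
        // the re-sorted table fragment.
        ajaxLoad(sortedTableUrl(this.href, 'fragment'), table);
        return false; // cancel the full-page refresh fallback
      };
    }
  }
}
```

If the script never runs, the links simply refresh the page as before; nothing is lost.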

Now that you have a fundamental understanding of progressive enhancement with JavaScript, we can discuss a few techniques you can use to get started.

Getting your scripts under control#section3

One of the keys to effectively integrating progressive enhancement is establishing a plan for script management. To do that, you must first become familiar with the concept of “unobtrusive JavaScript.” Unobtrusive JavaScript is the foundation for progressive enhancement in the client-side scripting world.

The most obvious means of “getting unobtrusive” is to axe all inline event handlers, since events can easily be registered via the DOM instead:

<a href="http://msdn.com" <del>onclick="newWin(this.href);"</del>>
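For instance, registration could go through a minimal cross-browser helper; this is a common pattern of the era rather than something the article prescribes, and the function name is ours:

```javascript
// Register an event handler via the DOM instead of an inline attribute.
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    // W3C DOM event model
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {
    // Legacy IE event model (IE 6-8)
    el.attachEvent('on' + type, handler);
  }
}
```

A link could then be wired up with `addEvent(link, 'click', openInNewWindow)` from an external script, leaving the markup clean.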

The next step is to move all of your scripts to linked external files, rather than embedding them in script elements:

<del><script type="text/javascript">
  // my script
</script></del>
<ins><script type="text/javascript" src="myscript.js"></script></ins>

This will make them easier to maintain and afford you some economies of scale. (To be honest, these two changes may take a bit of work, as so many WYSIWYG editors and web application development frameworks generate horribly obtrusive JavaScript right out of the box. Thankfully, there are patches and add-ons you can use in many of these systems to overcome their bad habits.)

The next step in making your scripts unobtrusive is deciding when and how to include them. In the most simplistic sense, this means checking to make sure you can actually run the script in the user’s browser by testing for method support before calling it:

if( document.getElementById ){
  scriptUsingGetElementById();
}

You will also want to test for any objects you need, and you may even want to test for the existence of identified elements you need as hooks for your script. Following this process with each script you use will create an à la carte interaction experience in which only scripts that a user’s browser can handle—and that can run on top of the current page’s markup—will be executed.
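As a sketch, those checks can be gathered into a single gatekeeping function; the `products` id is illustrative:

```javascript
// À la carte gatekeeper: the enhancement runs only if every method it
// calls and the markup hook it depends on actually exist.
function canEnhance(doc) {
  return !!(doc &&
            typeof doc.getElementById === 'function' &&
            typeof doc.createElement === 'function' &&
            doc.getElementById('products'));
}

if (typeof document !== 'undefined' && canEnhance(document)) {
  // Safe to build the sorting interface here.
}
```

Browsers (or pages) that fail any test simply keep the lo-fi experience.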

For more on unobtrusive JavaScript, you should revisit Jeremy Keith’s article on the topic.

Maintain style separation#section4

JavaScript doesn’t exist in a vacuum, so just as you should maintain some separation between your scripts and your markup (as discussed above), you should maintain some separation between your scripts and your styles.

Mainly, you must stop adding styles inline when you create or manipulate elements in the DOM. Instead, apply class names that relate either to your global stylesheets or to a script-specific stylesheet:

var el = document.getElementById( 'message' );
<del>el.style.color = '#f00';
el.style.backgroundColor = '#ffcfcf';</del>
<ins>el.className = 'highlighted';</ins>

A script-specific stylesheet is a great option if your script requires a lot of styles to enable its interactivity. Setting it up in its own file allows it to be maintained independently of the rest of the styles on the site. It also allows you to link to that stylesheet only when the script is executed, thereby reducing download times on pages that don’t use the script or in browsers that won’t support it.

If you do decide to embed your styles in one of your main stylesheets, be certain to write them such that they are only applied when the script has run successfully.
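One common way to do that (our sketch, not a technique prescribed by the article) is to flag the root element with a class from the script itself, and hang every script-dependent rule off that class:

```javascript
// Append a 'js' class to an element so stylesheets can scope rules to
// script-capable browsers.
function flagScripting(root) {
  root.className = root.className ? root.className + ' js' : 'js';
  return root.className;
}

// Run as early as possible so script-dependent styles apply before paint.
if (typeof document !== 'undefined') {
  flagScripting(document.documentElement);
}
```

A rule such as `.js .extra-panel { display: none; }` then takes effect only when the script has actually run, so non-JS users never see content hidden from them.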

For more on style/script separation, you should read this article from the debut issue of Scroll (currently available in print only).

Get progressive#section5

We’ve reviewed the mindset needed to implement progressive enhancement in JavaScript and several techniques through which to do it. We’ve also touched on the concept of unobtrusive scripting and learned a little about how to manage the inter-relationship of CSS and JavaScript.

This article completes our introductory series on progressive enhancement and the ways it can be realized in your implementations of CSS and JavaScript. We hope it’s given you food for thought and will inspire you to begin using progressive enhancement in your own workflow.

28 Reader Comments

  1. This is also nicely explained in a self-training course here: “Unobtrusive JavaScript”:http://www.onlinetools.org/articles/unobtrusivejavascript/.

    I’ve been pushing this idea for years, but I think it would be interesting to go a bit more into depth about what it means these days. Obtrusive techniques tend to get used not because people don’t want to do the right thing but because of performance and availability concerns. When working with assistive technology, for example, I find myself having to resort to terrible hacks, as browsers have moved on but screen readers haven’t.

  2. Thanks for the article. I always have to remind people who think they know JavaScript well that there is a better way to script it. Keep it unobtrusive and use graceful degradation to make your websites accessible!

  3. Great article!

    Lately I see myself using IDs and classes more and more. Not just for CSS but also for JavaScript. It makes the content a lot more explicit and allows the behaviors to be added and tweaked without changing the content.

  4. This has been a great series and I appreciate you adding the point about keeping style declarations out of the javascript. A lot of times this point is forgotten, and it’s a main concern for keeping the structure, function, and style separated.

  5. These articles are great, but they miss one big point: javascript isn’t the same in every browser, and dealing with its differences can quickly drive you mad.

    If a developer uses a framework such as Dojo or (my favorite) jQuery, the experience will be much better for them and their users.

    They will write code faster with less time spent on cross-browser debugging. And their code will perform better and be more consistent for the end user.

  6. I still don’t see the difference between PE (Progressive Enhancement) and Graceful Degradation. I use everything you say that’s supposed to be “PE” (except having 9 different CSS files), however I just call it good and professional design, or a phrase that I often use with clients: “long-term and responsible design that will work in the future and in the now, no matter what device a person is using”.

    Isn’t it really common these days to think:
    1. Content (layer 1, without this nothing else “work”)
    2. Structure (layer 2)
    3. Design (layer 3)
    4. Script (layer 4)

    Or is that just from the graphic design and advertising business where “Content is king” and everything after that is just to _ADD_ to the content?

    I still see it as GD though since the optimal experience is with all the different layers active and everything beneath that is just to cater…. however it can as well be PE since it’s made in such a way that everything always is focused at Layer 1 – The Content, the god, the center of everything and if that works well and if the coding is done correctly it will just “work”.

  7. #7:

    The difference between the two ostensibly similar methodologies here is only that their names demonstrate the intended approach. Graceful Degradation suggests the building of a feature-rich, all-singing, all-dancing site, and ensuring it still works acceptably on browsers with limited functionality.

    The term Progressive Enhancement on the other hand, seems to indicate the building of a basic, solid site and then adding script functionality to offer a greater user experience.

    Whilst the end result should be the same in both cases, the intended routes inferred from the names are very different.

  8. Thanks for the brilliant article. Very useful and explains very clearly how to hijack the DOM.

    As others have mentioned, I think that the lack of a mention of frameworks was a shame, I use jQuery for my projects. Perhaps frameworks could be the topic of a next article?

  9. Thanks for the brilliant article! Progressive Enhancement and Graceful Degradation are two critical aspects of JavaScript. However, it is up to you to put them to good use!

  10. I like this article. There is something that feels so right when separating things out like this.

    I still struggle with one thing though – the content jump. The page loads with all HTML needed and then JS comes along to do whatever you need it to do. Often this may be hiding some panels so they can be expanded by the user. So you see the content while the page is loading then it moves/toggles/disappears. Has anyone got a graceful way to get around this while keeping with ultimate separation of JS only in external files?

  11. @Alex Bobin: you need to apply the JS before the content is rendered.

    I suspect you’re applying your JS when the onload event for the document kicks in.

    Switching to the ondomready event will sort things out. This event is fired when the DOM has finished loading and before the content is rendered – any JS changes you make are applied before anything is displayed to the user.

  12. I realize ??A List Apart?? is catering to a wide audience, but this article seems a bit on the basic side. Anyone who has been reading ALA for a while, or who has read Jeremy Keith’s DOM Scripting book, has been using these techniques as second nature for quite a while.

    How about some juicy details, like “techniques for keeping your presentation out of your scripts”?

  13. In answer to “Nora’s question”:/comments/progressiveenhancementwithjavascript?page=2#14 on techniques for keeping presentation and scripting separated, I recommend checking out the article I wrote for Scroll (referenced in this article). Christian Heilmann and Nicholas Zakas have been working on this topic as well and my article summarizes their recommendations as well as those I’ve come up with.

    If you can’t get a copy of Scroll, I will be re-publishing the full article for free on the web in a few weeks (when my contract with them allows me to).

    As for libraries, that may be a future article, we just need to make sure everyone’s on the same page first and I’m amazed how many people haven’t grasped what Progressive Enhancement is yet.

  14. Nothing you’re saying is *wrong*, per se, but it’s a bit outdated. As others have noted, this sort of work is best accomplished with the use of a JS toolkit. I, too, am a JQuery fan, but there are plenty of other very good, very small ones out there.

    You’re doing your readers a disservice by not discussing toolkits (presumably in an attempt to avoid advocating any particular one). There’s no good reason for this.

  15. I’ll point out that, under some circumstances, there are perfectly good reasons for using inline JS rather than included external JS files. Inline JS can produce rendered elements faster (if the DOM is ready for it or not needed) and can eliminate the need for an extra HTTP trip, resulting in speedier page loads.

  16. Supporting what Tom said above; if building a site with an MVC framework, frequently the place for some of the javascript is inline in the view it forms part of.

    Keeps the view itself atomic, and also reduces the number of http requests.

    Load your framework in the head, having concatenated all your plugins to give one .js include, then inline the code that uses that framework at the appropriate point. (this code is specific to that view, so it belongs with or in that view…)

    And yes; as said above, use onDomReady – by using a framework that handles this for you. I honestly believe that only 2 breeds of programmer use bare javascript now; those writing frameworks, and masochists.

    Write your page with everything displayed, then use javascript to collapse / hide / embellish elements you want to – therefore meaning that no-js users get a full page of content. (just like if you mark up your menu as a list, no-css users still get a usable menu, just ugly. no-js users should get a usable menu, just with all the levels displayed all the time, for example)

    Don’t add content in javascript; that means no-js users miss out on it entirely.

    Remember that you can have classes and ids that are only there for javascript to pick up on, and that you can also have classes in your CSS that are only there for javascript to assign and remove dynamically – this is much more maintainable than writing to the style attribute in your javascript.

  17. Interesting article, but I also agree with the jQuery fans such as Tom Lee. Nowadays I see very little need for custom JavaScript in applications when libraries such as jQuery are so well established and stable.

    What might be good is a follow-up article showing how progressive enhancement can be achieved with minimal code and the use of such a library.

    I admit that some larger clients will prefer not to use an “off the shelf” solution but if they are going to save time on development costs there is a logical argument for this to be the way forward.

  18. Let me get this straight. The idea is to replace the inline onClick event with an ID, right? Then manipulate it with getElementById? That’s genius! *LightBulb=On* What a simple, yet powerful technique. The other ideas are great and are already incorporated into my modus operandi. But I smell a fourth article on this subject coming soon, yes? I’m sure there’s more to this story.

  19. From the article:

    _Approaching the challenge in this way, you have not only met the requirements, but you have also provided a “lo-fi” experience for search engine spiders and users without JavaScript._

    Providing a “lo-fi” experience sure sounds a lot like graceful degradation to me.

    In modern web development, how can someone plan rich interactions without thinking about them from the beginning? Planning rich interactions using JavaScript and providing a lo-fi experience is still graceful degradation.

  20. bq. Providing a “lo-fi” experience sure sounds a lot like graceful degradation to me.

    Taken as a literal binary JS/no JS switch, yes, but when combined with capability testing against the browser, you can develop many levels of (progressively enhanced) experience for users with varying levels of JavaScript support. In other words, it’s not an all or nothing (which is how Graceful Degradation has been practiced in many circles) experience, but rather one in which the fidelity of that experience is directly tied to the capabilities of the device and user agent accessing it. That’s what makes it progressive.

    The relationship between the two terms can perhaps best be viewed this way: all progressively enhanced interfaces also gracefully degrade, but not all gracefully degrading interfaces are progressively enhanced. Some may be, but it isn’t a guarantee. It all comes down to what your focus is during development and what decisions you make during the development process.

  21. This was a great article.

    In the last full sentence of the second paragraph under *Establishing a baseline*, did you mean to say load it on the “client side” instead of “server side”?

  22. I get Progressive Enhancement. It’s logical. It makes sense. It’s orderly, methodical, principled, and even elegant. I can see the advantage for site maintenance and modification. PE strives for capability inclusion and flexibility . . . for (it seems) relatively simple content, that is, content with limited or lenient interactivity requirements.

    One of the principles stated as underlying PE is that the basic content and functionality of a site is preserved at all levels of “enhancement.” That is not true for sites that *require* a given level of interactivity. PE seems to fulfill its purpose for sites where interactivity is the icing, not the cake.

    How does PE uphold its promise for web applications, whose sole purpose is to provide a highly interactive, application-level web interface to solve a given problem? Sure, the principles of PE can still be used as a developer paradigm, but the end result will not provide basic functionality at all levels of “enhancement.” Doesn’t that fail to honor a PE principle?

    If a site cannot progress enough to enable its minimum functionality, what good does it do to cater to lo-fi? This question points out one difference in principle between Graceful Degradation and Progressive Enhancement. With GD, developers readily tell the user, “You don’t have what it takes to run this app.” With PE, developers are supposed to provide basic functionality at all levels of capability and only say, “It gets better from here!”

    I like the idea of PE. What I’m wrestling with is seeing it as the sole paradigm for web development, unless the developer just likes the orderly approach (which I do).

    Thanks for the very informative and clearly written articles.
