
Interaction Is an Enhancement

A note from the editors: We’re pleased to offer this excerpt from Chapter 5 of Aaron Gustafson’s book, Adaptive Web Design, Second Edition. Buy the book from New Riders and get a 35% discount using the code AARON35.

In February 2011, shortly after Gawker Media launched a unified redesign of its various properties (Lifehacker, Gizmodo, Jezebel, etc.), users visiting those sites were greeted by a blank stare. Not a single one displayed any content. What happened? JavaScript happened. Or, more accurately, JavaScript didn’t happen.1

Lifehacker during the JavaScript incident of 2011: a completely blank page displaying only the Lifehacker logo.

In architecting its new platform, Gawker Media had embraced JavaScript as the delivery mechanism for its content. It would send a hollow HTML shell to the browser and then load the actual page content via JavaScript. The common wisdom was that this approach would make these sites appear more “app like” and “modern.” But on launch day, a single error in the JavaScript code running the platform brought the system to its knees. That one solitary error caused a lengthy “site outage”—I use that term liberally because the servers were actually still working—for every Gawker property and lost the company countless page views and ad impressions.

It’s worth noting that, in the intervening years, Gawker Media has updated its sites to deliver content in the absence of JavaScript.

■ ■ ■

Late one night in January 2014 the “parental filter” used by Sky Broadband—one of the UK’s largest ISPs (Internet service providers)— began classifying code.jquery.com as a “malware and phishing” website.2 The jQuery CDN (content delivery network) is at that URL. No big deal—jQuery is only the JavaScript library that nearly three-quarters of the world’s top 10,000 websites rely on to make their web pages work.

With the domain so mischaracterized, Sky’s firewall leapt into action and began “protecting” the vast majority of its customers from this “malicious” code. All of a sudden, huge swaths of the Web stopped working for every Sky Broadband customer who had not specifically opted out of this protection. Any site that relied on that CDN’s copy of jQuery to load content, display advertising, or enable interactions was dead in the water—through no fault of its own.

■ ■ ■

In September 2014, Ars Technica revealed that Comcast was injecting self-promotional advertising into websites served via its Wi-Fi hotspots.3 Such injections are effectively a man-in-the-middle attack,4 creating a situation that had the potential to break a website. Security expert Dan Kaminsky put it this way:

[Y]ou no longer know, as a website developer, precisely what code is running in browsers out there. You didn’t send it, but your customers received it.

Comcast isn’t the only organization that does this. Hotels, airports, and other “free” Wi-Fi providers routinely inject advertising and other code into websites that pass through their networks.

■ ■ ■

Many web designers and developers mistakenly believe that JavaScript support is a given or that issues with JavaScript drifted off with the decline of IE 8, but these three stories are all recent, and none of them concerned a browser support issue. If these stories tell you anything, it’s that you need to develop the 1964 Chrysler Imperial5 of websites—sites that soldier on even when they are getting pummeled from all sides. After all, devices, browsers, plugins, servers, networks, and even the routers that ultimately deliver your sites all have a say in how (and what) content actually gets to your users.

Get Familiar with Potential Issues so You Can Avoid Them

It seems that nearly every other week a new JavaScript framework comes out, touting a new approach that is going to “revolutionize” the way we build websites. Frameworks such as Angular, Ember, Knockout, and React do away with the traditional model of browsers navigating from page to page of server-generated content. Instead, these frameworks completely take over the browser and handle all the requests to the server, usually fetching bits and pieces of content a few at a time to control the whole experience end to end. No more page refreshes. No more waiting.

There’s just one problem: Without JavaScript, nothing happens.

No, I’m not here to tell you that you shouldn’t use JavaScript.6 I think JavaScript is an incredibly useful tool, and I absolutely believe it can make your users’ experiences better…when it’s used wisely.

Understand Your Medium

In the early days of the Web, “proper” software developers shied away from JavaScript. Many viewed it as a “toy” language (and felt similarly about HTML and CSS). It wasn’t as powerful as Java or Perl or C in their minds, so it wasn’t really worth learning. In the intervening years, however, JavaScript has changed a lot.

Many of these developers began paying attention to JavaScript in the mid-2000s when Ajax became popular. But it wasn’t until a few years later that they began bringing their talents to the Web in droves, lured by JavaScript frameworks and their promise of a more traditional development experience for the Web. This, overall, is a good thing—we need more people working on the Web to make it better. The one problem I’ve seen, however, is the fundamental disconnect traditional software developers seem to have with the way deploying code on the Web works.

In traditional software development, you have some say in the execution environment. On the Web, you don’t. I’ll explain. If I’m writing server-side software in Python or Rails or even PHP, one of two things is true:

  • I control the server environment, including the operating system, language versions, and packages.
  • I don’t control the server environment, but I have knowledge of it and can author my program accordingly so it will execute as anticipated.

In the more traditional installed software world, you can similarly control the environment by placing certain restrictions on what operating systems your code supports and what dependencies you might have (such as available hard drive space or RAM). You provide that information up front, and your potential users can choose your software—or a competing product—based on what will work for them.

On the Web, however, all bets are off. The Web is ubiquitous. The Web is messy. And, as much as I might like to control a user’s experience down to the pixel, I understand that it’s never going to happen because that isn’t the way the Web works. The frustration I sometimes feel with my lack of control is also incredibly liberating and pushes me to come up with more creative approaches. Unfortunately, traditional software developers who are relatively new to the Web have not come to terms with this yet. It’s understandable; it took me a few years as well.

You do not control the environment executing your JavaScript code, interpreting your HTML, or applying your CSS. Your users control the device (and, thereby, its processor speed, RAM, etc.). Depending on the device, your users might choose the operating system, browser, and browser version they use. Your users can decide which add-ons they use in the browser. Your users can shrink or enlarge the fonts used to display your site. And the Internet providers sit between you and your users, dictating the network speed, regulating the latency, and ultimately controlling how (and what part of) your content makes it into their browser. All you can do is author a compelling, adaptive experience and then cross your fingers and hope for the best.

The fundamental problem with viewing JavaScript as a given—which these frameworks do—is that it creates the illusion of control. It’s easy to rationalize this perspective when you have access to the latest and greatest hardware and a speedy and stable connection to the Internet. If you never look outside of the bubble of our industry, you might think every one of your users is so well-equipped. Sure, if you are building an internal web app, you might be able to dictate the OS/browser combination for all your users and lock down their machines to prevent them from modifying any settings, but that’s not the reality on the open Web. The fact is that you can’t absolutely rely on the availability of any specific technology when it comes to delivering your website to the world.

It’s critical to craft your website’s experiences to work in any situation by being intentional in how you use specific technologies, such as JavaScript. Take advantage of their benefits while simultaneously understanding that their availability is not guaranteed. That’s progressive enhancement.
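
What does that intentionality look like in practice? Here is a minimal sketch of one common approach, a simple feature test that gates the enhancement; the test, the markup hook, and the enhancement itself are purely illustrative rather than prescriptive:

// A minimal sketch: apply the enhancement only if the browser can support it.
if ('querySelector' in document && 'addEventListener' in window) {
  document.addEventListener('DOMContentLoaded', function () {
    var nav = document.querySelector('.in-page-nav'); // illustrative hook
    if (!nav) { return; }
    nav.addEventListener('click', function (event) {
      var link = event.target;
      // Enhance in-page links with scripted scrolling; without JavaScript,
      // or in browsers that fail the test, they remain ordinary anchors.
      if (link.tagName === 'A' && link.hash) {
        var target = document.querySelector(link.hash);
        if (target) {
          event.preventDefault();
          target.scrollIntoView();
        }
      }
    });
  });
}

Either way the links keep working; the script only ever adds to an experience that already functions without it.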

The history of the Web is littered with JavaScript disaster stories. That doesn’t mean you shouldn’t use JavaScript or that it’s inherently bad. It simply means you need to be smart about your approach to using it. You need to build robust experiences that allow users to do what they need to do quickly and easily, even if your carefully crafted, incredibly well-designed JavaScript-driven interface can’t run.

Why No JavaScript?

The term progressive enhancement is often treated as synonymous with “no JavaScript.” If you’ve read this far, I hope you understand that this is only one small part of the puzzle. Millions of the Web’s users have JavaScript. Most browsers support it, and few users ever turn it off. You can—and indeed should—use JavaScript to build amazing, engaging experiences on the Web.

If it’s so ubiquitous, you may well wonder why you should worry about the “no JavaScript” scenario at all. I hope the stories I shared earlier shed some light on that, but if they weren’t enough to convince you that you need a “no JavaScript” strategy, consider this: The U.K.’s GDS (Government Digital Service) ran an experiment to determine how many of its users did not receive JavaScript-based enhancements, and it discovered that number to be 1.1 percent, or 1 in every 93 users.7, 8 For an ecommerce site like Amazon, that’s 1.75 million people a month, which is a huge number.9 But that’s not the interesting bit.

First, a little about GDS’s methodology. It ran the experiment on a high-traffic page that drew from a broad audience, so the sample reflected the true picture rather than being skewed by collecting information from only a subsection of its user base. The experiment itself boiled down to three images:

  • A baseline image included via an img element
  • An img contained within a noscript element
  • An image that would be loaded via JavaScript

The noscript element, if you are unfamiliar, is meant to encapsulate content you want displayed when JavaScript is unavailable. It provides a clean way to offer an alternative experience in “no JavaScript” scenarios. When JavaScript is available, the browser ignores the contents of the noscript element entirely.
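
To picture the setup, here is a simplified sketch of how such a test might be marked up; the image paths here are placeholders (GDS’s actual inline script is quoted in the comments below):

<!-- 1. Baseline image: requested by anything that renders the HTML at all. -->
<img src="/test/base.gif" alt="" style="position: absolute; left: -9999em">

<!-- 2. Fallback image: requested only when JavaScript is unavailable or off. -->
<noscript>
  <img src="/test/without-js.gif" alt="" style="position: absolute; left: -9999em">
</noscript>

<!-- 3. Enhanced image: requested only if this inline script actually runs. -->
<script>
(function () {
  var img = document.createElement('img');
  img.src = '/test/with-js.gif';
  img.alt = '';
  img.style.position = 'absolute';
  img.style.left = '-9999em';
  document.body.appendChild(img);
})();
</script>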

With this setup in place, the expectation was that all users would get two images. Users who fell into the “no JavaScript” camp would receive images 1 and 2 (the contents of noscript are exposed only when JavaScript is not available or turned off). Users who could use JavaScript would get images 1 and 3.

What GDS hadn’t anticipated, however, was a third group: users who got image 1 but didn’t get either of the other images. In other words, they should have received the JavaScript enhancement (because noscript was not evaluated), but they didn’t (because the JavaScript injection didn’t happen). Perhaps most surprisingly, this was the group that accounted for the vast majority of the “no JavaScript” users—0.9 percent of the users (as compared to 0.2 percent who received image 2).

What could cause something like this to happen? Many things:

  • JavaScript errors introduced by the developers
  • JavaScript errors introduced by in-page third-party code (e.g., ads, sharing widgets, and the like)
  • JavaScript errors introduced by user-controlled browser add-ons
  • JavaScript being blocked by a browser add-on
  • JavaScript being blocked by a firewall or ISP (or modified, as in the earlier Comcast example)
  • A missing or incomplete JavaScript program because of network connectivity issues (the “train goes into a tunnel” scenario)
  • Delayed JavaScript download because of slow network download speed
  • A missing or incomplete JavaScript program because of a CDN outage
  • Not enough RAM to load and execute the JavaScript10

A BlackBerry device attempting to browse to the Obama for America campaign site in 2012: it ran out of RAM trying to load 4.2MB of HTML, CSS, and JavaScript, displaying only “HTTP Error 413: Request Entity Too Large. The page you requested could not be loaded. Please try loading a different page.” Photo credit: Brad Frost

That’s a ton of potential issues that can affect whether a user gets your JavaScript-based experience. I’m not bringing them up to scare you off using JavaScript; I just want to make sure you realize how many factors can affect whether users get it. In truth, most users will get your enhancements. Just don’t put all your eggs in the JavaScript basket. Diversify the ways you deliver your content and experiences. It reduces risk and ensures your site will support the broadest number of users. It pays to hope for the best and plan for the worst.
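
To give one small example of what that diversification can look like, an enhancement can be layered on top of a link that already works on its own; the URL, markup, and class name below are only placeholders:

<!-- A pagination link that works with nothing more than HTML and the server. -->
<a href="/articles?page=2" class="load-more">Load more articles</a>

<script>
(function () {
  var link = document.querySelector('.load-more');
  if (!link || !window.XMLHttpRequest) { return; } // no enhancement; the link still works

  link.addEventListener('click', function (event) {
    event.preventDefault(); // take over only now that we know we can
    var xhr = new XMLHttpRequest();
    xhr.open('GET', link.href);
    xhr.onload = function () {
      if (xhr.status === 200) {
        // Assumes the server can return an HTML fragment for this request.
        link.insertAdjacentHTML('beforebegin', xhr.responseText);
      } else {
        window.location = link.href; // anything unexpected: fall back to normal navigation
      }
    };
    xhr.onerror = function () {
      window.location = link.href; // network failure: fall back to normal navigation
    };
    xhr.send();
  });
})();
</script>

If the script never arrives, throws an error, or is blocked, the link behaves like any other link; if it runs, the same request is simply handled in place.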

21 Reader Comments

  1. “Just don’t put all your eggs in the JavaScript basket.”

    As web application developers, this is our only option. Yes, content publishers can produce something quite valuable in HTML since content production and distribution are paramount, but rendering the equivalent of a desktop application on the server is just out of the question. Separating content from presentation concerns is still a great idea everywhere though.

  2. You made this case really well but it’s almost a shame it needs to be made at all anymore.

    “Unfortunately, traditional software developers who are relatively new to the Web have not come to terms with this [lack of control] yet.”

    When I first started out in the industry, there was some derision amongst Developers for the ‘Print Designer turned Web Designer’—largely because they designed for an environment they simply didn’t have. It looks like a parallel can be made for certain ‘traditional’ Developers turned Web Developers, both in terms of status and mental models.

  3. The best thing about the GDS experiment was not the results and conclusions, but that they actually showed the code they used.

    https://github.com/alphagov/frontend/pull/452/files

    The generated markup for the script-enabled test is also still on the http://www.gov.uk home page. It is an **inline script** with the following code.

    (function(){
    var a=document.createElement("img");
    a.src="https://assets.digital.cabinet-office.gov.uk/frontend/homepage/no-cache/with-js-8337212354871836e6763a41e615916c89bac5b3f1f0adf60ba43c7c806e1015.gif";
    a.alt="";
    a.role="presentation"
    a.style.position="absolute";
    a.style.left="-9999em";
    a.style.height="0px";
    a.style.width="0px";
    document.getElementById("wrapper")
    .appendChild(a)
    })();

    For some reason 0.9% of browser visits didn’t successfully run this code (or utilize the noscript markup).

    Because it is a simple **inline script with no dependencies** I think it is fair to conclude that:
    – it probably doesn’t have any errors
    – it won’t fail due to a library dependency
    – it won’t fail due to a CDN outage
    – it won’t fail due to a delayed or incomplete script download
    – it won’t be broken by third party code

    (There is a slim possibility that in the fraction of a second between the base img request and the script img request the user abandons the page or the network.)

    I don’t think it’s safe to draw a strong conclusion from this experiment, but the explanation I find most likely is plugins or proxies that block all JS (including inline).

    Another (slightly different) warning about depending on JS comes from the AngularJS docs.

    The latest stable version of Angular supports IE9:
    https://docs.angularjs.org/guide/ie

    But if you visit the docs in IE9 it won’t work (hasn’t worked for at least a year).

    Why?

    Well, the main factor is that no-one visits the docs site in IE9.

    But the specific cause of the failure is that the *site-specific scripts* throw before configuring Angular.

    And the actual error in the site-specific scripts is that they call `console.log()`, which only exists in IE9 **after** you open the console. No wonder the docs maintainers never found the error.
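
    A guard along these lines would have avoided that particular failure (the function name and message are just illustrative):

    function safeLog(message) {
      // window.console does not exist in IE9 until the developer tools are open,
      // so check for it on every call instead of assuming it is there.
      if (window.console && window.console.log) {
        window.console.log(message);
      }
    }

    safeLog("bootstrapping the docs app"); // harmless everywhere, including IE9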

    So the warning is: just because the framework you use supports all the browsers you are interested in doesn’t guarantee your site works in all those browsers (even if JS is enabled and loading).

  4. @Matt Motherway

    It’s about knowing your context.

    When writing a complex application, you can demand more from your users than if you’re writing a news site or blog. On a “read only” site, like most company websites and news sites, you should not need to rely on JS to read text. However, in a more interactive environment where you might trigger business processes, pass data around and talk to a database – things are a bit different, since you can, to a degree, control the execution environment (“Always use Chrome latest when at work”).

    …and of course, then there are the blurry, grey areas that are hard to define.

  5. “I can’t imagine that browsing the Internet with JavaScript disabled is a good experience for the user.”

    Indeed. And that would be because web developers insist on building things (even reading **plain-text magazine articles**) to be a bad experience that way.

    In many cases, I should put “building” in air-quotes.

  6. A trivial misclosed html tag can make your site inoperable too. This is an indictment of bad development practices, not a technology. Most companies cannot afford to greatly increase their development efforts for 0.9 percent of potential users.

    I agree JavaScript can be overused for simple sites, but the time is gone when we have to worry about it not being available at all. Worry about experience and quality.. not fundamental tech.

  7. @JamieT “Worry about experience and quality.. not fundamental tech.”

    Progressive enhancement leads to quality and good experience.

  8. @Michiel, it’s simply one way. It also leads to more complexity and significantly increased development and testing effort. I have no problem with PE in concept. If I had unlimited time and money I would support every device and user choice ever made. I guess it’s up to you to decide how to use finite resources. For us, supporting the 99.x% of users running JavaScript comes first.

  9. @JamieT, sure it takes more time. The result is a website better prepared for the world wide web though. I’d like to compare it to a wooden versus a stone house. A wooden house is cheaper, faster and easier to build than the same house made of stone. The stone version is more durable; it will not be blown away by a tornado, nor will it crumble as easily during an earthquake. The wooden version will rot away without maintenance, the stone version will be covered by plants and what not, but will still function as a house.

    Starting in the stone age and building up to a light and fast wood version just makes sense. It’s not only JavaScript that can fail to load (for whatever reason), but also images, fonts, CSS, or any other content type you might load. You have to be very optimistic to think that all of those will always load.

    I wouldn’t want to live in a house that collapsed if a fundamental technology—say electricity—failed.

  10. “I can’t imagine that browsing the Internet with JavaScript disabled is a good experience for the user.”

    Ultimately, isn’t that up to us to decide? If the user is reading an article or other piece of primarily-text information (like, for example, this page), the user experience ought to be just fine – and if it isn’t, that’s the developer’s fault, not the user’s. There are some applications where JavaScript can be reasonably expected to be necessary for a good user experience, but reading the news isn’t one of them. Try reading this very site with JavaScript turned off – there are slight changes in the UX and some things take a couple more clicks, but the only thing that’s really lost by disabling JS here is the ads.

    It may be rare for users to intentionally disable JavaScript, but just the same, you have no control over whether or not users are executing JS, so that’s a dependency you introduce at your own peril. If your fancy web app absolutely, positively, simply just can’t work without JavaScript, then that’s fine. But when you’re building it, you should ask yourself whether the thing you’re trying to accomplish absolutely NEEDS to be done in a way where even basic functionality is totally dependent on JS. Is that dependency absolutely critical and unavoidable, or are you tacking on an unnecessary dependency?

    Don’t think of progressive enhancement as building the thing to be dependent on JS and then doing extra work to put in no-JS fallbacks for everything. Instead, try building for the no-JS case, and then putting in JS enhancements for users that do have scripting enabled.

  11. About the GDS experiment: I wouldn’t jump to a conclusion without really knowing what’s happening. There could be some totally valid scenarios where JavaScript is not the one to blame. For example, there could be a crawler visiting the page. Or the user could be on a flaky connection where the HTTP keep-alive connection resets or times out between the two image requests. Remember, this can happen even in a no-JS setup. In fact, a good JS web app can detect these failures and give the user meaningful error messages and a chance to retry while providing a good offline experience (with cached data and templates) rather than a dinosaur page. Above all, those numbers are from two years ago. That’s a lot of time in the tech industry.

    What I can say is frontend development has always been tricky. It’s not because the client code runs on an unknown machine (that’s what VMs are created for), but rather because of ambiguous specs, incorrect implementations of those specs, and flaky connections. Although we all know this, we sometimes have really high expectations of the web. Today people create native mobile apps just for a particular OS, a particular API version, etc. They already know they are just aiming for, say, 60% of the market, but that is as much budget as they have. Of course they’ll expand when they grow. The same should be true for the web too. If you don’t have that much budget, drop support for ancient browsers. If under the same budget constraints you can achieve the same user experience for a greater audience with techniques such as PE or using better tools, then great, go for it; but if you can’t, don’t get obsessed that there is 1% of users you won’t be able to support. (Remember, when this 1% is quite some number, more likely than not you will have quite some money to support them!)

    I’ve recently seen another trend (among React fans) of calling CSS the problem of the web by saying “the worst thing about CSS is cascading and stylesheets.” It seems that the next in line to blame is HTML itself. I believe these approaches are not really constructive, and the web ecosystem is overall a great place. We just have to learn from past mistakes and try to solve them in the future.

  12. @piloop: I think you’ve said it perfectly. Why is it that people always assume that just because the web is an open platform, developers can’t set restrictions? JavaScript always seems to take the hit, for reasons I can’t fathom. Why, just the other day YouTube failed to load CSS (and this was on a stable connection on a powerful computer running a fully current browser). The site was completely unusable. Should we all account for those times when CSS fails to load? Heck, no! As developers, we have the power to say that “this site will not work without JavaScript and CSS, sorry”. There’s not some law that says we have to support every possible scenario, or else we’ll be thrown in gaol. We can say to this tiny proportion of people to whom it applies, “deal with it, if you want to use my site, upgrade your browser”.

    Ultimately, it comes down to, do we even care about 1% of people? And unless you’re Google, the answer is often, “No”.

  13. Max:”Don’t think of progressive enhancement as building the thing to be dependent on JS and then doing extra work to put in no-JS fallbacks for everything. Instead, try building for the no-JS case, and then putting in JS enhancements for users that do have scripting enabled.”

    Or, build your house from the foundation up instead of from the roof down.

  14. @cmart,

    In regards to accessibility, yes, you should actually try to account for when CSS and JavaScript do not load. In fact, I find your example of YouTube not functioning correctly without its CSS an argument for them to actually do a better job of ensuring the site is functional without CSS, rather than an argument for it being okay.

  15. I’m right up there with @Arve on this one, and it’s one of the flaws I see in this article.

    I think the context plays a key role in determining if interaction is simply an enhancement versus a necessity.

    When you create a news or blog site you tend to be more focused on content which requires less input from the user.

    If on the other hand you are building an app, interaction tends to be essential to building the unique interfaces you need to receive the input from the user and accomplish a given task.

    Did Gawker f*ck up? Sure they did, but I don’t think the horror stories of a few companies mean we should go all pessimistic about JavaScript, which has been around since the dawn of the browser.

    I know the stats show very few people turn off JavaScript, and if Gawker had planned things out better, it would have saved them some embarrassment.

    I don’t think they were wrong to push for a more modern app feel for a blog, so long as the need warrants it, as this article has kind of pointed out.

    I don’t personally agree with their choice but hey I also don’t manage mega online content properties like they do. I give them props for experimenting and maybe it worked out better in the end for them.

    I think with a content-rich blog it seems like it’s a high cost to pay a team of developers to shift to a dedicated JS framework. In a blogging context there is less of a need for JS, so I agree the enhancement side of things makes sense in this context, but even that is transforming (as you’ll read below).

    In the app world, JavaScript is essential, and I don’t see that going away any time soon.

    If you don’t build a highly interactive app experience, will you disappear into oblivion? Not at all, but you also won’t have as great a user experience as your competitor who has put a lot of thought into the interactive elements and how the customer gets their data in and out of the app.

    WordPress powers around 25% of the sites on the internet (http://venturebeat.com/2015/11/08/wordpress-now-powers-25-of-the-web/). Automattic just recently pushed wordpress.com to React for its new JavaScript front-end interface, and with the merging of the WP API it is making a strong push for turning WordPress into an API so it can interface with JS frameworks.

    A beloved small sibling to WordPress is Ghost. Ghost.org runs a Node.js server stack with the JavaScript Handlebars templating language for the front end.

    I think it’s wise to pay attention to the key players and the moves they make before assuming JavaScript is simply an enhancement for interaction.

    And to top it off, in the general sense of “interaction,” it goes beyond just JavaScript. It’s an essential part of designing any system, and if I hadn’t read the article and gotten context beyond the title, I’d be a bit confused.

    If you don’t value the user flows and the essential forms of your users’ interactions with your system, you won’t be going anywhere real fast.

  16. @cmart: Why wouldn’t you plan for a no-CSS scenario? That’s what a voice-based interface is and that’s likely the future of computing.

    It’s generally a good idea to reduce your dependencies as much as possible. Solid semantic markup, links that go somewhere, and forms that submit and are processed on the server will work as long as there’s an Internet connection (and, typically, even in scenarios where the connection is lost before CSS and JavaScript are fully-downloaded). You can enhance the experience with images, CSS, JavaScript, offline, etc. to make the experience better. Just don’t assume everyone is going to get that experience.

  17. @JamieT
    “A trivial misclosed html tag can make your site inoperable too. This is an indictment of bad development practices, not a technology. ”

    Not true. Both HTML and CSS are highly fault tolerant, unlike JavaScript. Miss a closing tag, don’t enclose attributes in quotations, insert invalid attributes, incorrectly nest elements, or place raw text outside of an element and HTML still renders. Write a line of complete junk in CSS and it just skips to the next parseable line. By contrast, I don’t need to tell you the tiny syntax deviations required to make JavaScript fail. Not only that, but in the JS MVC world a JavaScript failure now equals an HTML and CSS failure, i.e., a total failure.

    “Most companies cannot afford to greatly increase their development efforts for 0.9 percent of potential users.”

    PE is not only about JavaScript support, but also about providing content to the greatest number of users. It’s about following a model that is represented in just about every technology stack: the most basic layer followed by progressively more advanced layers, e.g., HTML-CSS-JavaScript. By making JavaScript the foundation you have now made HTML, which originally required nothing else to render, dependent upon JavaScript. You have also made CSS, originally only dependent upon HTML, now also dependent upon JavaScript. You’ve taken the most brittle and fault-intolerant language of the three and made it the gatekeeper and controller; it’s like making the foundation of a building dependent upon the windows.

    Regarding costs, it is a lot cheaper to develop HTML content that is guaranteed consumable by all internet devices and then enhance for superior user experience on advanced devices than it is to start with the latest funkiness on the edge device and then find and fix the bugs and lack of support for other devices.

    I think Chris Droom (above) has a point. The JavaScript MVC supporters are behaving exactly how the Web Designers behaved towards the print Graphic Designers, with a dismissal of anything that isn’t cutting edge.
