The Illusion of Control in Web Design

We all want to build robust and engaging web experiences. We scrutinize every detail of an interaction. We spend hours getting the animation swing just right. We refactor our JavaScript to shave tiny fractions of a second off load times. We control absolutely everything we can, but the harsh reality is that we control less than we think.

Last week, two events reminded us, yet again, of how right Douglas Crockford was when he declared the web “the most hostile software engineering environment imaginable.” Both were serious enough to take down an entire site—actually hundreds of entire sites, as it turned out. And both were avoidable.

In understanding what we control (and what we don’t), we will build resilient, engaging products for our users.

What happened?

The first of these incidents involved the launch of Chrome 66. With that release, Google implemented a security patch with serious implications for folks who weren’t paying attention. You might recall that quite a few questionable SSL certificates issued by Symantec Corporation’s PKI began to surface early last year. Apparently, Symantec had subcontracted the creation of certificates without providing a whole lot of oversight. Long story short, the Chrome team decided the best course of action with respect to these potentially bogus (and security-threatening) SSL certificates was to set an “end of life” for accepting them as secure. They set Chrome 66 as the cutoff.

So, when Chrome 66 rolled out (an automatic, transparent update for pretty much everyone), suddenly any site running HTTPS on one of these certificates would no longer be considered secure. That’s a major problem if the certificate in question is for our primary domain, but it’s also a problem if it’s for a CDN we’re using. You see, my server may be running on a valid SSL certificate, but if I have my assets—images, CSS, JavaScript—hosted on a CDN that is not secure, browsers will block those resources. It’s like CSS Naked Day all over again.

To be completely honest, I wasn’t really paying attention to this until Michael Spellacy looped me in on Twitter. Two hundred of his employer’s sites were instantly reduced to plain old semantic HTML. No CSS. No images. No JavaScript.

The second incident was actually quite similar in that it also involved SSL, and specifically the expiration of an SSL certificate being used by jQuery’s CDN. If a site relied on that CDN to serve an HTTPS-hosted version of jQuery, their users wouldn’t have received it. And if that site was dependent on jQuery to be usable … well, ouch!

For what it’s worth, this isn’t the first time incidents like these have occurred. Only a few short years ago, Sky Broadband’s parental filter dramatically miscategorized the jQuery CDN as a source of malware. With that designation in place, they spent the better part of a day blocking all requests for resources on that domain, affecting nearly all of their customers.

It can be easy to shrug off news like this. Surely we’d make smarter implementation decisions if we were in charge. We’d certainly have included a local copy of jQuery like the good Boilerplate tells us to. The thing is, even with that extra bit of protection in place, we’re falling for one of the most attractive fallacies when it comes to building for the web: that we have control.
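For reference, that extra bit of protection is a single line: an inline script placed immediately after the CDN script tag that checks whether jQuery actually arrived and, if it didn’t, writes in a locally hosted copy. A minimal sketch (the local path is, of course, a placeholder):

// Inline script, placed directly after the CDN script tag.
// If the CDN copy of jQuery never loaded, fall back to a local copy.
window.jQuery || document.write('<script src="/js/vendor/jquery.min.js"><\/script>');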

Lost in transit?

There are some things we do control on the web, but they may be fewer than you think. As solo devs or team leads, we have considerable control over the HTML, CSS, and JavaScript code that ultimately constructs our sites. Same goes for the tools we use and the hosting solutions we’ve chosen. Of course, that control lessens on large teams or when others are calling the shots, though in those situations we still have an awareness of the coding conventions, tooling, and hosting environment we’re working with. Once our carefully crafted code leaves our servers, however, all bets are off.

First off, we don’t—at least in the vast majority of cases—control the network our code traverses to reach our users. Ideally our code takes an optimized path so that it reaches its destination quickly, yet any one of the servers along that path can read and manipulate the code. If you’ve heard of “man-in-the-middle” attacks, this is how they happen.

For example, certain providers have no qualms about injecting their own advertising into your pages. Gross, right? HTTPS is one way to stop this from happening (and to prevent servers from being able to snoop on our traffic), but some providers have even found a way around that. Sigh.

Lost in translation?

Assuming no one touches our code in transit, the next thing standing between our users and our code is the browser. These applications are the gateways to (and gatekeepers of) the experiences we build on the web. And, even though the last decade has seen browser vendors coalesce around web standards, there are still differences to consider. Those differences are yet another factor that will make or break the experience our users have.

While every browser vendor supports the idea and ongoing development of standards, they do so at their own pace and very much in relation to their business interests. They prioritize features that help them meet their own goals and can sometimes be reluctant or slow to implement new features. Occasionally, as happened with CSS Grid, everyone gets on board rather quickly, and we can see a new spec go from draft to implementation within a single calendar year. Others, like Service Worker, can take hold quickly in a handful of browsers but take longer to roll out in others. Still others, like Pointer Events, might get implemented widely, only to be undermined by one browser’s indifference.

All of this is to say that the browser landscape is much like the Great Plains of the American Midwest: from afar it looks very even, but walking through it we’re bound to stumble into a prairie dog burrow or two. And to successfully navigate the challenges posed by the browser environment, it pays to get familiar with where those burrows lie so we don’t lose our footing. Object detection … font stacks … media queries … feature detection … these tools (and more) help us ensure our work doesn’t fall over in less-than-ideal situations.
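Feature detection, in particular, is cheap insurance: test for a capability before relying on it, and leave the baseline experience intact when the test fails. A minimal sketch, written in old-school ES5 so the test itself runs everywhere (the .site-nav hook is hypothetical):

// Enhance the navigation only if the APIs we depend on exist in this browser.
if ('querySelector' in document && 'addEventListener' in window) {
  document.addEventListener('DOMContentLoaded', function () {
    var nav = document.querySelector('.site-nav');
    if (nav) {
      nav.className += ' is-enhanced';
    }
  });
}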

Beyond standards support, it’s important to recognize that some browsers include optimizations that can affect the delivery of your code. Opera Mini and Amazon’s Silk are examples of the class of browser often referred to as proxy browsers. Proxy browsers, as their name implies, position their own proxy servers in between our domains and the end user. They use these servers to do things like optimize images, simplify markup, and jettison unsupported JavaScript in the interest of slimming the download size of our pages. Proxy browsers can be a tremendous help for users paying for downloads by the bit, especially given our penchant for increasing web page sizes year upon year.

If we don’t consider how these browsers can affect our pages, our site may simply collapse and splay its feet in the air like a fainting goat. Consider this JavaScript taken from an example I threw up on Codepen:

document.body.innerHTML += '<p>Can I count to four?</p>';
for (let i=1; i<=4; i++) {
  document.body.innerHTML += '<p>' + i + '</p>';
}
document.body.innerHTML += '<p>Success!</p>'; 

This code is designed to insert several paragraphs into the current document and, when executed, produces this:

Can I count to four?
1
2
3
4
Success!

Simple enough, right? Well, yes and no. You see, this code makes use of the let keyword, which was introduced in ECMAScript 2015 (a.k.a. ES6) to enable block-level variable scoping. It will work a treat in browsers that understand let. However, any browsers that don’t understand let will have no idea what to make of it and won’t execute any of the JavaScript—not even the parts they do understand—because they don’t know how to interpret the program. Users of Opera Mini, Internet Explorer 10, QQ, and Safari 9 would get nothing.

This is a relatively simplistic example, but it underscores the fragility of JavaScript. The UK’s GDS ran a study to determine how many of their users didn’t get JavaScript enhancements and discovered that 0.9% of their users who should have received them—in other words, their browser supported JavaScript and they had not turned it off—didn’t for some reason. Add in the 0.2% of users whose browsers did not support JavaScript or who had turned it off, and the total non-JavaScript constituency was 1.1%, or 1 in every 93 people who visit their site.

It’s worth keeping in mind that browsers must understand the entirety of our JavaScript before they can execute it. This may not be a big deal if we write all of our own JavaScript (though we all occasionally make mistakes), but it becomes a big deal when we include third-party code like JavaScript libraries, advertising code, or social media buttons. Errors in any of those codebases can cause problems for our users.
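A little defensiveness goes a long way here: check that a third-party global actually exists before leaning on it, so a failed or blocked script degrades the enhancement rather than the page. A rough sketch (the share-button enhancement is hypothetical):

// Only wire up the enhancement if the third-party library actually loaded.
if (window.jQuery) {
  jQuery(function ($) {
    $('.share-buttons').addClass('is-ready');
  });
}
// If it didn't load, the server-rendered markup still works on its own.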

Browser plugins are another form of third-party code that can negatively affect our sites. And they’re ones we don’t often consider. Back in the early ’00s, I remember spending hours trying to diagnose a site issue reported by one of my clients, only to discover that it occurred solely when a particular plugin was installed. Anger and self-doubt were wreaking havoc on me as I failed time and time again to reproduce the error my client was experiencing. It took traveling the two hours to her office and sitting down at her desk to spot the difference between her setup and mine: a third-party browser toolbar.

We don’t have the luxury of traveling to our users’ homes and offices to determine if and when a browser plugin is hobbling our creations. Instead, the best defense against the unknowns of the browsing environment is to always design our sites with a universally usable baseline.

Lost in interpretation?

Regardless of everything discussed so far, when our carefully crafted website finally reaches its destination, it has one more potential barrier to success: us. Specifically, our users. More broadly, people. Unless our product is created solely for the consumption of some other life form or machine, we’ve got to consider the ultimate loss of control when we cede it to someone else.

Over the course of my twenty years of building websites for customers, I’ve always had the plaintive voice of Clerks’ Randal Graves in the back of my head: “This job would be great if it wasn't for the f—ing customers.” I’m not happy about that. It’s an arrogant position (surely), yet an easy one to lapse into.

People are so needy. Wouldn’t it be great if we could just focus on ourselves?

No, that wouldn’t be good at all.

When we design and build for people like us, we exclude everyone who isn’t like us. And that’s most people. I’m going to put on my business hat here—Fedora? Bowler? Top hat?—and say that artificially limiting our customer base is probably not in our company’s best interest. Not only will it limit our potential revenue growth, it could actually reduce our income if we become the target of a legal complaint by an excluded party.

Our efforts to build robust experiences on the web must account for the actual people that use them (or may want to use them). That means ensuring our sites work for people who experience motor impairments, vision impairments, hearing impairments, vestibular disorders, and other things we aggregate under the heading of “accessibility.” It also means ensuring our sites work well for users in a variety of contexts: on large screens, small screens, even in-between screens. Via mouse, keyboard, stylus, finger, and even voice. In dark, windowless offices, glass-walled conference rooms, and out in the midday sun. Over blazingly fast fiber and painfully slow cellular networks. Wherever people are, however they access the web, whatever special considerations need to be made to accommodate them … we should build our products to support them.

That may seem like a tall order, but consider this: removing access barriers for one group has a far-reaching ripple effect that benefits others. The roadside curb cut is an example we often cite. It was originally designed for wheelchair access, but stroller-pushing parents, children on bicycles, and even that UPS delivery person hauling a tower of Amazon boxes down Seventh Avenue all benefit from that rather simple consideration.

Maybe you’re more of a numbers person. If so, consider designing your interface such that it’s easier to use by someone who only has use of one arm. Every year, about 26,000 people in the U.S. permanently lose the use of an upper extremity. That’s a drop in the bucket compared to an overall population of nearly 326 million people. But that’s a permanent impairment. There are two other forms of impairment to consider: temporary and situational. Breaking your arm can mean you lose use of that hand—maybe your dominant one—for a few weeks. About 13 million Americans suffer an arm injury like this every year. Holding a baby is a situational impairment in that you can put it down and regain use of your arm, but the feasibility of that may depend greatly on the baby’s temperament and sleep schedule. About 8 million Americans welcome this kind of impairment—sweet and cute as it may be—into their home each year, and this particular impairment can last for over a year. All of this is to say that designing an interface that’s usable with one hand (or via voice) can help over 21 million more Americans (about 6% of the population) effectively use your service.

Finally, and in many ways coming full circle, there’s the copy we employ. Clear, well-written, and appropriate copy is the bedrock of great experiences on the web. When we draft copy, we should do so with a good sense of how our users talk to one another. That doesn’t mean we should pepper our legalese with slang, but it does mean we should author copy that is easily understood. It should be written at an appropriate reading level, devoid of unnecessary jargon and idioms, and approachable to both native and non-native speakers alike. Nestled in the gentle embrace of our (hopefully) semantic, server-rendered HTML, the copy we write is one of the only experiences of our sites we can pretty much guarantee our users will have.

Old advice, still relevant

Recognizing all of the ways our carefully crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.

Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.
