A List Apart


Illustration by Kevin Cornell

Mo’ Pixels Mo’ Problems

Mobile devices are shipping with higher and higher PPI, and desktops and laptops are following the trend as well. There’s no avoiding it: High-pixel-density, or “Retina,” displays are now becoming mainstream—and, as you’d expect, our websites are beginning to look a little fuzzy in their backlit glory. But before we go off in the knee-jerk direction of supersizing all our sites, we must first identify the problems ahead and figure out the most responsible way forward—keeping our users in mind first and foremost.

The big problem: gigantic images


In an effort to stay ahead of the curve, many of us have begun the process of making our website designs “@2x,” quietly hoping @3x never becomes a thing. While a @2x image sounds like it would only be twice the number of kilobytes, it’s actually around three to four times larger. As you can see in my @1x vs. @2x demonstration, the end result is that photos or highly detailed compositions easily start bringing these numbers into the megabytes.

“Why is this a problem?” I hear you ask. “Shouldn’t the web be beautiful?” Most definitely. Making the web a better place is probably why we all got into this business. The problem lies in our assumption of bandwidth.

In the wealthiest parts of the world, we treat access to high-speed broadband as a civil right, but for lots of people on this planet, narrow or pay-per-gigabyte bandwidth is a daily reality. “Because it looks pretty” is not a good enough reason to send a 1MB image over 3G—or, god forbid, something like the EDGE network.

Even in our high-bandwidth paradises, you don’t have to look far for examples of constrained bandwidth. A visitor to your website might be using a high-PPI tablet or phone from the comfort of her couch, or from the middle of the Arizona desert. Likewise, those brand-new Retina MacBook Pros could be connected to the internet via Google Fiber, or tethered to a 3G hotspot in an airport. We must be careful about our assumptions regarding pixels and bandwidth.

Failed paths: JavaScript

“I’ll just use JavaScript.”
—Everyone ever

JavaScript has solved a lot of our past problems, so it’s human nature to beseech her to save us again. However, most solutions fall short and end up penalizing users with what is commonly referred to as the “double download.”

Mat Marquis explained this, but it’s worth reiterating: in their quest to make the web faster, browsers have begun prefetching all the images in a document before JavaScript has a chance to access or change them.

Because of this, solutions where high-resolution capabilities are detected and a new image source is injected actually cause the browser to fetch two images, forcing high-resolution users to wait for both sets of images to download. This double download may not seem overly penalizing for a single image, but imagine scaling it to a photo gallery with 100 images per page. Ouch.
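The pattern that causes the penalty looks something like this. (A sketch only: `pickSrc` and the “@2x” filename convention are my own inventions, not any particular library’s API.)

```javascript
// Hypothetical helper: pick an asset based on the device pixel ratio.
// The "@2x" filename convention is an assumption for illustration.
function pickSrc(src, dpr) {
  return dpr >= 2 ? src.replace(/\.jpg$/, "@2x.jpg") : src;
}

// By the time this runs, the browser has usually already prefetched
// every <img> in the document, so assigning a new src fires a second
// request: the "double download."
if (typeof document !== "undefined") {
  Array.prototype.forEach.call(document.querySelectorAll("img"), function (img) {
    img.src = pickSrc(img.src, window.devicePixelRatio || 1);
  });
}
```

The swap itself is trivial; the cost is that it happens after the prefetcher has already spent the user’s bandwidth on the @1x assets.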

Other attempts exist, such as bandwidth detection, cookie setting, server-side detection, or a mixture of all three. As much as I’d like robots to solve my problems, these solutions have a higher barrier to entry for your average web developer. The major pain point with all of them is that they introduce server/cookie dependencies, which have been historically troublesome.

We need a purely front-end solution to high resolution images.

Sound familiar? That’s because high-resolution images and responsive images actually come back to the same root problem: How do we serve different images to different devices and contexts using the same HTML tag?

The solution: good ol’ fashioned progressive enhancement

Those of us involved in CSS and Web Standards groups are well acquainted with the concept of progressive enhancement. It’s important we stick to our collective guns on this. Pixels, whether in terms of device real estate or device density, should be treated as an enhancement or feature that some browsers have and others do not. Build a strong baseline of support, then optimize as necessary. In fact, learning how to properly construct a progressively enhanced website can save you (and your clients) lots of time down the line.

Here are the rules of the road that my colleagues at Paravel and I have been following as we navigate this tangled web of high-density images:

  • Use CSS and web type whenever possible
  • Use SVG and icon fonts whenever applicable
  • Picturefill raster graphics

Let’s talk a bit about each.

CSS and web fonts

CSS3 allows us to replicate richer visual effects in the browser with very little effort, and the explosion of high-quality web fonts allows us to build sites on a basis of rich typography instead of image collages. With our current CSS capabilities, good reasons to rely on giant raster graphics for visual impact are becoming few and far between.
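For example, the kind of glossy button that once shipped as a sliced raster image can now be a few declarations. (A sketch: the class name and values are illustrative.)

```css
/* A "glossy button" that once would have been an image slice */
.button {
  color: #fff;
  padding: 0.5em 1.5em;
  border-radius: 6px;
  background: linear-gradient(#4a90d9, #2a6ebb); /* crisp at any density */
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.4);
  text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.3);
}
```

Because the browser draws these effects itself, they stay sharp on any screen and cost bytes of CSS instead of kilobytes of pixels.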

So the old rule remains true: If you can accomplish your design with HTML/CSS, do it. If you can’t accomplish your design with CSS, then perhaps the first question you need to ask yourself is, why not? After all, if we consider ourselves in the business of web design, then it’s imperative that our designs, first and foremost, work on the web—and in the most efficient manner possible.

Take a step back and embrace the raw materials of the web: HTML and CSS.

SVG and icon fonts

SVG images are XML-based vector paths originally designed as a Flash competitor. They are like Illustrator files in the browser. Not only are they resolution-independent, they tend to create extremely lightweight files (roughly determined by the number of points in the vector).

Icon fonts (like Pictos or SymbolSet) are essentially collections of vector graphics bundled up in a custom dingbat font, accessible through Unicode characters in a @font-face embedded font. Anecdotally, we at Paravel have noticed that tiny raster graphics, like buttons and icons, tend to show their awkwardness most on higher-resolution screens. Icon fonts are a great alternative to frustrating sprite sheets, and we’ve already begun using icon fonts as replacements whenever possible.
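In practice, wiring up an icon font looks something like this. (A sketch: the font name, file path, and code point are placeholders, not Pictos or SymbolSet specifics.)

```css
@font-face {
  font-family: "MyIcons"; /* placeholder icon font */
  src: url("fonts/myicons.woff") format("woff");
}

.icon-search::before {
  font-family: "MyIcons";
  /* the glyph's code point, often mapped into the Private Use Area */
  content: "\e600";
}
```

Any element with the class then renders the glyph as text, which means it scales, recolors, and stays sharp on high-density screens for free.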

Support for @font-face is great, and basic SVG embedding support is nearing ubiquity—except for ye old culprits: older versions of IE and Android. Despite this, we can easily begin using SVG today, and if necessary make concessions for older browsers as we go by using feature detection to supply a polyfill or fallback, or even using newfangled projects that automate SVG/PNG sprite sheets.
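A minimal fallback can be as small as this. (A sketch: the `createSVGRect` check is the common detection idiom, and the matching-PNG filename convention is an assumption.)

```javascript
// Common SVG-support check: can the browser create an SVG rect?
function supportsSvg(doc) {
  return !!(doc.createElementNS &&
            doc.createElementNS("http://www.w3.org/2000/svg", "svg").createSVGRect);
}

// Swap .svg sources for pre-exported .png fallbacks when support is missing.
// Assumes a PNG sits alongside every SVG with the same name.
function fallbackSrc(src, hasSvg) {
  return hasSvg ? src : src.replace(/\.svg$/, ".png");
}

if (typeof document !== "undefined" && !supportsSvg(document)) {
  Array.prototype.forEach.call(document.querySelectorAll("img"), function (img) {
    img.src = fallbackSrc(img.src, false);
  });
}
```

Capable browsers never run the swap, so they pay no penalty; only the older culprits fetch the PNG.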

There are cases where these formats fall short. Icon fonts, for instance, can only be a single color. SVGs are infinitely scalable, but scaling larger doesn’t mean more fidelity or detail. This is when you need to bring in the big guns.

Picturefill raster graphics

No stranger to this publication, the <picture> element, put forth by the W3C Responsive Images Community Group, is an elegant solution to loading large raster graphics. With <picture>, you can progressively specify which image source you want the browser to use as more pixels become available.

The <picture> element is not free from hot drama, and also has a worthy contender. The image @srcset attribute, notably put forth by Apple, is based on the proposed CSS property image-set(), designed for serving high-resolution assets as background images. Here’s a sample of the proposed syntax (presented with my own personal commentary):

<img alt="Cat Dancing" src="small-1.jpg"
     srcset="small-2.jpg 2x,    // this is pretty cool
             large-2.jpg 100w,  // meh
             large-2.jpg 100w 2x // meh@2x
">

As a complete responsive images solution, @srcset has a bothersome microsyntax and is not feature-complete (i.e. it has custom pixel-based h & w mystery units and does not support em units). But it does have some redeeming qualities: In theory, the @srcset attribute could put bandwidth determination in the hands of the browser. The browser, via user settings and/or aggregate data on the speed of all requests, could then make the best-informed decision about which resolution to request.

However, as the spec is written, @srcset is simply a set of suggestions for the browser to choose from or completely ignore at its discretion. Yielding total control to the browser makes this web author cringe a little, and I bet many of you feel the same.

Wouldn’t it be nice if there were a middle ground?

Noticing the strengths of the @srcset attribute, the Responsive Images Community Group has put forth a proposal called Florian’s Compromise, which would blend the powers of both @srcset and the <picture> element.

<picture alt="Cat Dancing">
   <source media="(min-width: 45em)" srcset="large-1.jpg 1x, large-2.jpg 2x">
   <source media="(min-width: 18em)" srcset="med-1.jpg 1x, med-2.jpg 2x">
   <source srcset="small-1.jpg 1x, small-2.jpg 2x">
   <img src="small-1.jpg" alt="Cat Dancing">
</picture>

No doubt, the <picture> syntax is more verbose, but it is extremely readable and doesn’t use the confusing “100w” shorthand syntax. Expect things to change going forward, but in the meantime, we’re currently using the div-based Picturefill solution from the Filament Group, which we find is easy to use and requires no server architecture or .htaccess files. It simply polyfills the <picture> element as if it existed today.
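For reference, the div-based markup that Picturefill polyfills against looks roughly like this. (A sketch of the Filament Group’s pattern as we use it: exact attribute names may differ between versions, and the filenames are illustrative.)

```html
<div data-picture data-alt="Cat Dancing">
  <div data-src="small-1.jpg"></div>
  <div data-src="med-1.jpg"   data-media="(min-width: 18em)"></div>
  <div data-src="large-1.jpg" data-media="(min-width: 45em)"></div>
  <div data-src="large-2.jpg" data-media="(min-width: 45em) and (min-device-pixel-ratio: 2.0)"></div>

  <!-- Fallback for browsers without JavaScript -->
  <noscript><img src="small-1.jpg" alt="Cat Dancing"></noscript>
</div>
```

The script reads the data-media queries and injects a single img with the winning source, so only one image is ever requested.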

Under the hood, our previous demonstration used two instances of the original Picturefill to swap sources as the browser resized. I’ve made some quick modifications to our demo, this time combining both @1x and @2x sources into one Picturefill demo with the newer syntax.

Experimental technique: the 1.5x hack

Another thing we’ve been doing at Paravel is playing with averages. Your mileage may vary, but we’ve noticed that high-resolution screens typically do a great job of getting the most out of the available pixels—as you can see in this @1.5x experiment version of our demo:


If you don’t have a high-resolution screen, you can increase your browser zoom to 200 percent to simulate how compression artifacts would look on one. The @1x image clearly has the worst fidelity on high-resolution screens, and the @2x image definitely has the highest fidelity. The @1.5x version, however, fares nearly as well as the @2x version, and has a payload savings of about 20 percent. Which would your users notice more: the difference in fidelity or the difference in page speed?

Ultimately, the usefulness of the @1.5x technique depends on the situation. Notably, it does penalize the @1x user, but maybe there’s an even happier middle ground for you at @1.2x or @1.3x. We currently see the “just a bit larger” method as a viable solution for getting a little more out of medium-importance images without adding another degree of complexity for our clients. If you’re in a situation where you can’t make drastic changes, this might be a great way to gain some fidelity without (relatively) overwhelming bloat.
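In markup, the hack amounts to serving a single oversized image and letting the browser scale it down. (A sketch: the filename and dimensions are illustrative.)

```html
<!-- A 900×600 export displayed at 600×400 CSS pixels: roughly @1.5x.
     Standard-density screens downsample it; high-density screens get
     extra pixels to work with, at a fraction of the @2x payload. -->
<img src="photo-900.jpg" width="600" height="400" alt="Desert landscape">
```

One asset, one request, no swapping logic: the trade-off is a modest oversend to @1x users in exchange for skipping the complexity above.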

Above all: use your brain

Recently, while redesigning Paravel’s own website, we learned to challenge our own rules. Since we have talented illustrator Reagan Ray on our team, our new site makes heavy use of SVG. But when we exported our most beloved “Three Amigos” graphic, we made a quick audit and noticed the SVG version was 410 KB. That felt heavy, so we exported a rather large 2000×691 PNG version. It weighed in at just 84 KB. We’re not UX rocket scientists, but we’re going to assume our visitors prefer images that download five times faster, so that image will be a PNG.

Just use your brain. I’m not sure our industry says this often enough. You’re smart, you make the internet, and you can make good decisions. Pay attention to your craft, weigh the good against the bad, and check your assumptions as you go.

Be flexible, too. In our industry there are no silver bullets; absolute positions, methods, and workflows tend to become outdated from week to week. As we found with our own website, firmly sticking to our own made-up rules isn’t always best for our users.

Doing right by users is the crux of front-end development—and, really, everything else on the web, too. Pixel density may change, but as the saying goes, what’s good for the user is always good for business.
