We’re not doing a good job
Page-load times in the ten-second range are still common on modern mobile networks, and that’s a fraction of how long it takes in countries with older, more limited networks. Why so slow? It’s mostly our fault: our sites are too heavy, and they’re often assembled and delivered in ways that don’t take advantage of how browsers work. According to HTTP Archive, the average website weighs 1.7 megabytes. (It’s probably heftier now, so you may want to look it up.) To make matters worse, most of the sites surveyed on HTTP Archive aren’t even responsive, but focus on one specific use case: the classic desktop computer with a large screen.
That’s awful news for responsive (and, ahem, responsible) designers who aim to support many types of devices with a single codebase, rather than focusing on one type. Truth be told, much of the flak responsive design has taken relates to the ballooning file sizes of responsive sites in the wild, like Oakley’s admittedly gorgeous Airbrake MX site, which originally launched with a whopping 80-megabyte file size (though it was later heavily optimized to be much more responsible), or the media-rich Disney homepage, which serves a 5-megabyte responsive site to any device.
Why are some responsive sites so big? Attempting to support every browser and device with a single codebase certainly can have an additive effect on file size—if we don’t take measures to prevent it. Responsive design’s very nature involves delivering code that’s ready to respond to conditions that may or may not occur, and delivering code only when and where it’s needed poses some tricky obstacles given our current tool set.
Responsible responsive designs are achievable even for the most complex and content-heavy sites, but they don’t happen on their own. Delivering fast responsive sites requires a deliberate focus on our delivery systems, because how we serve and apply our assets has an enormous impact on perceived and actual page-loading performance. In fact, how we deliver code matters more than how much our code weighs.
Delivering responsibly is hard, so this chapter will take a deep, practical dive into optimizing responsive assets for eventual delivery over the network. First, though, we’ll tour the anatomy of the loading and enhancement process to see how client-side code is requested, loaded, and rendered, and where performance and usability bottlenecks tend to happen.
Ready? Let’s take a quick look at the page-loading process.
A walk down the critical path
Understanding how browsers request and load page assets goes a long way in helping us to make responsible decisions about how we deliver code and speed up load times for our users. If you were to record the events that take place from the moment a page is requested to the moment that page is usable, you would have what’s known in the web performance community as the critical path. It’s our job as web developers to shorten that path as much as we can.
A simplified anatomy of a request
To kick off our tour de HTTP, let’s start with the foundation of everything that happens on the web: the exchange of data between a browser and a web server. Between the time our user hits go and the moment their site begins to load, the browser first queries a local Domain Name Service, or DNS, which translates the domain name into an IP address; the browser then uses that address to request the page directly from the host server (fig 3.1).
That’s the basic rundown for devices accessing the web over Wi-Fi (or an old-fashioned Ethernet cable). A device connected to a mobile network takes an extra step: the browser first sends the request to a local cell tower, which forwards the request to the DNS to start the browser-server loop. Even on a popular connection speed like 3G, that radio connection takes ages in computer terms. As a result, establishing a mobile connection to a remote server can lag behind Wi-Fi by two whole seconds or more (fig 3.2).
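To make that arithmetic concrete, here’s a toy model of the connection steps described above. The numbers are my own illustrative placeholders, not measurements from this chapter; the point is only that the mobile radio wake-up dwarfs the round trips that follow it.

```javascript
// Toy model of what must happen before the first byte of HTML arrives.
// All numbers are illustrative, not measurements.
const RTT_MS = {
  dnsLookup: 100,    // resolve the domain name to an IP address
  tcpHandshake: 100, // open a connection to the host server
  httpRequest: 100,  // send the request and await the response
};

// radioWakeMs models the extra mobile step: negotiating with the cell tower.
function connectionDelay(radioWakeMs) {
  const roundTrips = Object.values(RTT_MS).reduce((sum, ms) => sum + ms, 0);
  return radioWakeMs + roundTrips;
}

console.log(connectionDelay(0));    // Wi-Fi-style connection: 300
console.log(connectionDelay(2000)); // with a ~2s mobile radio wake-up: 2300
```

Even with generous round-trip estimates, the tower negotiation accounts for the bulk of the delay, which is why the same site feels so much slower on a cold mobile connection.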
Two seconds may not seem like a long time, but consider that users can spot—and are bothered by—performance delays as short as 300 milliseconds. That crucial two-second delay means the mobile web is inherently slower than its Wi-Fi counterpart.
Thankfully, modern LTE and 4G connections alleviate this pain dramatically, and they’re slowly growing in popularity throughout the world. We can’t rely on a connection to be fast, though, so it’s best to assume it won’t be. In either case, once a connection to the server is established, the requests for files can flow without tower connection delays.
Requests, requests, requests!
The complexities of HTML parsing (and its variations across browsers) could fill a book. Lest it be ours, I will be brief: the important thing is getting a grasp on the fundamental order of operations when a browser parses and renders HTML.
CSS, for example, works best when all styles relevant to the initial page layout are loaded and parsed before an HTML document is rendered visually on a screen.
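In markup, that ordering means referencing stylesheets in the head, ahead of any content they style. A minimal sketch (the filename is a placeholder):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Fetched and parsed before the body is rendered,
       so the first paint is already styled -->
  <link rel="stylesheet" href="site.css">
</head>
<body>
  <p>This content renders only after site.css has been applied.</p>
</body>
</html>
```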
Rendering and blocking
The assets that behave this way are CSS and JavaScript files, typically referenced via link and script elements, respectively. By default, browsers wait to render a page’s content until these assets finish loading and parsing, a behavior known as blocking (fig 3.3). By contrast, images are a non-blocking asset: the browser won’t wait for an image to load before rendering a page.
Despite its name, blocking rendering for CSS does help the user interface load consistently. If you load a page before its CSS is available, you’ll see an unstyled default page; when the CSS finishes loading and the browser applies it, the page content will reflow into the newly styled layout. This two-step process is called a flash of unstyled content, or FOUC, and it can be extremely jarring to users. So blocking page rendering until the CSS is ready is certainly desirable as long as the CSS loads in a short period of time—which isn’t always an easy goal to meet.
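The flip side is that styles not needed for the initial layout shouldn’t block at all. One widely used pattern (not from this chapter; the filename is a placeholder) loads a stylesheet without blocking rendering by requesting it for a non-matching media type, then switching it on once it arrives:

```html
<!-- Requested with low priority and without blocking render,
     because "print" doesn't match the screen; once loaded,
     the onload handler flips it to apply everywhere -->
<link rel="stylesheet" href="non-critical.css"
      media="print" onload="this.media='all'">
```

This keeps the blocking budget reserved for the small set of styles the first paint actually needs.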
The main offender behind script blocking is document.write, a JavaScript method used to inject HTML directly into the page at whatever location the browser happens to be parsing. It’s usually considered bad practice to use document.write now that better, more decoupled methods are available in JS, but document.write is still in use, particularly by scripts that embed advertisements. The biggest problem with document.write is that if it runs after a page finishes loading, it overwrites the entire document with the content it outputs. More like document.wrong, am I right? (I’m so sorry.) Unfortunately, a browser has no way of knowing whether a script it’s requesting contains a call to document.write, so the browser tends to play it safe and assume that it does. While blocking prevents a potential screen wipe, it also forces users to wait for scripts before they can access the page, even if those scripts wouldn’t have caused problems. Avoiding document.write wherever possible is the surest way to sidestep this blocking behavior.
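As a sketch of the usual alternative: instead of writing markup into the parser’s path, create the script element yourself and append it, so it can run at any time without wiping the page. The helper name loadScript is mine, not an API from this chapter, and the document object is passed in as a parameter purely so the sketch is testable outside a browser:

```javascript
// Alternative to document.write for injecting a script:
// build the element and append it to the head, so the parser
// never has to block in fear of a late document.write call.
// (loadScript is an illustrative helper name, not the book's API.)
function loadScript(doc, src) {
  const script = doc.createElement("script");
  script.src = src;
  script.async = true; // explicitly opt out of blocking behavior
  doc.head.appendChild(script);
  return script;
}

// In a browser, you would call:
//   loadScript(document, "https://example.com/ad.js");
```

Ad-embedding scripts that use this approach can load after the page is interactive without any risk of overwriting it.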
In the next chapter, we’ll cover ways to load scripts that avoid this default blocking behavior and improve perceived performance as a result.
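As a small preview, the simplest of those techniques is built into HTML itself: the defer and async attributes (the filenames here are placeholders):

```html
<!-- defer: download in parallel, execute in document order
     after parsing finishes -->
<script src="enhancements.js" defer></script>

<!-- async: download in parallel, execute as soon as it arrives,
     in no guaranteed order -->
<script src="analytics.js" async></script>
```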
13 Reader Comments
Great explanation! But you missed one thing I’d add to the post: good planning.
I’m confused. This article is rather short, and it only states that you should keep your requests and file size to a minimum and not use document.write (good pun, though). I’m wondering what will be in the next chapters, then. A mobile-first approach without loading the bigger resolutions? Using vector (fonts) instead of images? Optimizing, concatenating, and minifying until it hurts? Showing a pre-loader on mobile websites? Really curious…
Sorry for the confusion! It’s not your fault. 🙂 This particular post is just an excerpt to give a taste of the sort of subject matter that the book discusses, but I can point you to another post I wrote that follows up on this with a technical approach to loading assets quickly and responsibly. It includes some but not all of the workflow and tips discussed in the final chapters of the book:
Thanks for reading
Correct me if I’m wrong, but your DNS diagram looks like the DNS is “proxying” the content from the web server, which is not the case. The DNS merely tells the browser which IP address to use to directly request the data from the host. The web server does not communicate with the DNS. I’m sure you just wanted to simplify the diagram, but it ended up being incorrect.
IMHO, Oakley’s site isn’t that great looking. It’s OK, but I think gaudy sites like this are the next wave of nonsense that will have to be replaced by a second “Web 2.0” era. CSS changed the web for the better, and now things are getting bad again, largely due to things like Angular (great framework, but it encourages building large, crufty one-page websites that include lots of things in a single page load). I’m working on real estate lead management software, and one of the core tenets of the app is that it will load quickly on a mobile device at 3G Internet speeds. I realize most people have faster Internet than that, especially real estate agents, but what’s the point of wasting bandwidth on things that aren’t necessary?
Real Estate Lead Management Software
Nice article; I read it and now better understand website planning and performance. The two go hand in hand: you shouldn’t do any work without planning. Plan your website well and you’ll get better performance. I appreciate it. Thank you.
Designing for speed is a must these days. Sublevel is one of those cases where I deliberately set the performance bar very high (~200 ms) and tried to design around that.
That means I remove external JS files from pages that don’t need the JS code; sometimes I use inline JS because it’s faster, and inline CSS to keep the external CSS file as small as possible.
There’s something wrong with the image. The DNS will not request content (the homepage) from the host server. It just returns the host server’s IP address to the client, and then the client requests the content directly from the host server.
Thanks for the feedback, all!
@Maciej and @Shivaji: thanks for pointing that out. You’re totally right, and in hindsight I wish I had drawn the diagram a bit differently to more accurately reflect how that DNS step ties in. (It does appear as a bit of a proxy here, which of course it is not.)

In the book, this excerpt largely exists to note that there is some expected latency, and that network-related delays occur in the process of loading a site, particularly on aging mobile devices and networks. While they’re somewhat beyond our control, it’s worth remembering that initially connecting can take some time, which is all the more reason to make the delivery steps that are under our control perform as fast as possible. However, the book makes no effort to dwell on network-related steps that are somewhat beyond its scope, and instead moves on to focus on the steps that take place on the client side during page load, and the ways that our code either obstructs or helps streamline the critical path to rendering the page. So from here the book quickly moves on to code-related advice for optimizing assets and removing blocking steps on the critical path (this article covers more of that, if it’s of interest: http://www.filamentgroup.com/lab/performance-rwd.html ).

So I guess I mean to say that while a little technically inaccurate, that part is only meant as a small, non-comprehensive summary of the initial loading steps that lead to the real focus of this chapter. Here’s hoping that in the context of the larger book (or even the chapter), the network step reads as more of an intro/segue than a focus in itself. Thanks again!
In fact, how we deliver code matters more than how much our code weighs.
No no no !
Not everybody has access to affordable data plans and in my case I watch my consumption very carefully because mobile sites can cost me real money to access. My phone plan does include data, but a little bit here and a little bit there and all too quickly that allowance is gone.
Using the Oakley site in its original form (80 MB) as an example: had I accessed it in a “proper” browser (see below) after exceeding my data cap, it could have cost me at least AUD 2.40 (at a 3¢/MB excess-usage charge). All our providers charge roughly the same for plans and excess usage; they differ only in what gets unmetered or “free” access to services I don’t want, and/or in much, much more limited coverage.
Before I switched to a text browser, given a choice between a site that gave me FOUC and cost $1 and a “seamless” experience that cost me $2.40, guess which I would, and did, choose. There are still (especially) newspaper websites on my “never ever use when mobile” blacklist because they were too “heavy.” Funnily enough, I try to avoid these on the desktop as well, because of autostart videos (ads and/or video telling me about what I’m already reading). Go figure!
Such an important topic that I’ll ask you to please excuse my spam: Here is a 9 KB rich framework which is made with “speed first” strategy: http://natuive.net
Hopefully this trend picks up and the web provides the smooth UX we need.