If you’ve ever stood on a chair holding a cell phone up to get a better signal or refreshed a page that’s been hanging for 30 seconds, you already know that today’s user experiences have a gaping hole. We’re spending thousands of hours crafting interfaces that are the product of countless conversations, user tests, and analytics data piled up to our (virtual) eyeballs—only to have the experience crippled by a weird signal coming from a cell tower.
Maybe your user has switched from 3G to WiFi. Maybe her battery is low. Or maybe it’s simply dark out. Whatever the scenario, real-life factors can easily thwart your best intentions—and leave your users frustrated and angry.
The concept of considering real-world factors while designing isn’t new. Environmental design can be traced back to at least 500 BCE, when the ancient Greeks started creating houses that were heated with solar energy, and it’s based on two simple truths: The real world exists, and you can’t control it.
You can’t control all the factors when a user interacts with your design, but you can certainly plan for them simply by acknowledging that they exist. I call these design conditions. Some design conditions, like the device a person is using, stay the same through a single visit or interaction with your product. But other design conditions—like energy consumption, lighting, and signal strength—have the potential (and tendency) to change during the course of a single visit, or even from page load to page load.
Just a year ago, I wouldn’t have had much of an answer for these user experience problems because the device-level APIs needed weren’t ready for primetime yet. But today, we can start to do something to improve our users’ experiences, even under these dynamic conditions, thanks to the recent buildup of the Device API.
What is the Device API?
Some of the APIs have stayed contained within the Boot2Gecko operating system, but a lot of the work has been transferred to the W3C for standardization. That’s the work we’ll focus on today as we explore these APIs and the potential they offer for improving how our products hold up against real-world and environmental design conditions.
Battery Status and Network Information
Responsive design has saved us a lot of trouble. But it’s also brought new and exciting problems like asset management to the forefront. How do we deal with images in a way that scales to any situation, such as small screens or limited bandwidth?
If it were simply an issue of “small screens get small images,” the responsive images problem would pretty much be solved with the picture element. But this assumes a small screen should be served a smaller image to accommodate its size and potential bandwidth limitation. What we are starting to realize, though, is that the size of a display has very little relation to the amount of bandwidth available.
Under optimal conditions, everyone would have a lightning-fast connection with 100 percent battery life. The more people use mobile devices, the less likely that becomes, and the more often these conditions will affect your users’ experience. If a user is casually browsing over a fast connection, low-resolution images aren’t going to result in the best experience. On the other hand, if a user has a poor connection and minimal battery life, downloading enormous images could leave him with a dead phone.
It’s situations like this that make the Battery Status and Network Information APIs so interesting.
The Battery Status API can tell you how much battery life is left in the device (level), and whether the level is going down (discharging) or up (charging). This information isn’t only provided as a snapshot at load time, but also through events tied to the battery status. The events currently in the specification include chargingchange, chargingtimechange, dischargingtimechange, and levelchange.
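A minimal sketch of how this might look in the browser, assuming support for the W3C draft’s navigator.getBattery promise (the describeBattery helper is mine, purely for illustration):

```javascript
// Hypothetical helper: turn the battery object into a readable status.
// battery.level is a number between 0 and 1 per the W3C draft.
function describeBattery(battery) {
  var percent = Math.round(battery.level * 100);
  return percent + '% (' + (battery.charging ? 'charging' : 'discharging') + ')';
}

// Only run where the Battery Status API actually exists.
if (typeof navigator !== 'undefined' && navigator.getBattery) {
  navigator.getBattery().then(function (battery) {
    console.log(describeBattery(battery));
    // Re-evaluate whenever the status changes, not just at load time.
    ['levelchange', 'chargingchange'].forEach(function (evt) {
      battery.addEventListener(evt, function () {
        console.log(describeBattery(battery));
      });
    });
  });
}
```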
This gets a whole lot more interesting when coupled with the Network Information API, which lets you tap into bandwidth information about a device. As the draft is currently written, it returns two pieces of information: the connection speed in megabytes per second, and a boolean value indicating whether the bandwidth is being metered in any way by the ISP. This is all the information you need to filter assets and manage bandwidth in the browser. To track when a user is offline, this API can also report a bandwidth of zero.
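Here’s one way you might filter assets on that information, assuming the early draft’s navigator.connection object; the variant names and thresholds are illustrative assumptions, not part of any spec:

```javascript
// Hypothetical helper: choose an image variant from bandwidth (MB/s)
// and the metered flag. Thresholds are assumptions for the sketch.
function pickImageVariant(bandwidthMBps, metered) {
  if (metered || bandwidthMBps === 0) return 'low';  // metered, or offline
  if (bandwidthMBps < 0.5) return 'medium';          // slow connection
  return 'high';                                     // plenty of bandwidth
}

// Only run where the draft Network Information API exists.
if (typeof navigator !== 'undefined' && navigator.connection) {
  var c = navigator.connection;
  console.log('Serving ' + pickImageVariant(c.bandwidth, c.metered) + '-res images');
}
```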
Even though Network Information and Battery Status work wonders on their own, the combination of these two APIs has the potential to help you not just manage assets at initial page load, but also to modify the interface as the connection or battery status changes over time. You can even run energy tests to give a user an estimate about when her battery might die under the current conditions (like “miles to empty” in a car). You won’t be able to get specific information like “Facebook is draining your battery,” but you will know whether there’s enough energy to accomplish a certain task in your application.
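The combination could be as simple as a single decision function; every threshold below is an assumption I’ve made up for the sketch, not anything the specifications prescribe:

```javascript
// Illustrative sketch: fold battery level, charging state, and bandwidth
// into one "should we conserve resources?" decision.
function shouldConserve(level, charging, bandwidthMBps) {
  var lowPower = !charging && level < 0.2; // under 20% and draining
  var slowLink = bandwidthMBps < 0.25;     // roughly a 2 Mbit/s link
  return lowPower || slowLink;
}
```

You might call this on every battery or connection event and swap heavy assets for lighter ones whenever it returns true.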
These two APIs, and particularly the combination of them, should likely be our first resources for making designs better equipped to handle real-world scenarios. They allow us to detect performance bottlenecks and craft an experience around them (remember our image management problem?). But a couple of others really stand out from the pack as well: the Ambient Light Sensor and Proximity Sensor APIs, which take experiences a little further out of the browser.
Ambient Light Sensor
The Ambient Light Sensor API uses the light sensor of a device to tell us about its current environment. Of course, the limitation of this API is that the device needs to have a light sensor, whether it’s filtered through the camera or through another type of sensor. It doesn’t matter where the sensor is, but it does have to exist. The API functions in much the same way as Battery Status in that the light level can be captured at initial load time and also through an event called devicelight.
This API might feel a little weird because it doesn’t use a normal web value like a pixel, percentage, or em; it returns values in lux units (lx). A lux is an international measurement of light intensity, not something we typically use on the web. In fact, I’d never heard of a lux before I discovered this API, but it makes me feel super-smart when I bring it up. Because this is relatively cutting edge, device-level support for a lux value is a little hit-and-miss.
The Ambient Light Sensor API would likely improve the experience of using an e-reader, like the Kindle, because it allows access to information about the available light in a room. With this information, you can easily adjust color values, typography, or other design elements to provide a more comfortable reading experience.
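A sketch of that reading-experience adjustment, assuming a browser that fires the W3C draft’s devicelight event (early Firefox did); the 50 lx cutoff for "dim room" is my own assumption:

```javascript
// Hypothetical helper: map a lux reading to a theme name.
// 50 lx is an assumed threshold, roughly a dimly lit room.
function themeForLux(lux) {
  return lux < 50 ? 'dark' : 'light';
}

// Only run where the devicelight event can actually fire.
if (typeof window !== 'undefined') {
  window.addEventListener('devicelight', function (event) {
    // event.value is the ambient light level in lux.
    document.body.className = themeForLux(event.value);
  });
}
```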
Proximity Sensor

The Proximity Sensor API, which enables near field communication (NFC) from the browser, is probably the furthest from our reach today—not because the specification is behind, but rather because most devices don’t yet have the necessary sensors. Relatively few smartphones contain NFC technology right now, and it could be a couple more releases until we see it in something like the iPhone.
If the user’s device contains a proximity sensor, you can access it to detect nearby objects enabled with NFC information (awesome, right?). The API contains an event called ondeviceproximity, which is triggered when an object is within the range of the sensor.
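In code, listening for that event might look like the sketch below, assuming the W3C Proximity Events draft where event.value reports a sensor-dependent distance in centimeters; the 5 cm "near" threshold is an illustrative assumption:

```javascript
// Hypothetical helper: decide whether a reported distance counts as "near".
// The 5 cm threshold is an assumption, not part of the specification.
function isNear(distanceCm) {
  return distanceCm <= 5;
}

// Only run where the deviceproximity event can actually fire.
if (typeof window !== 'undefined') {
  window.addEventListener('deviceproximity', function (event) {
    console.log(isNear(event.value) ? 'Object nearby' : 'All clear');
  });
}
```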
The W3C doesn’t recommend trying to accurately measure the distance of an object due to the volatility of today’s sensors. But you can still push the limits of user experience with just a few keystrokes by removing yourself from the browser’s constricting environment and unleashing the real world of interactive objects, light sensitivity, connection information, and energy consumption into an interface.
Pushing on with environmental design
Environmental design on the web is simply the concept of taking outside factors into account—something we’re just seeing the beginning of with the Device API.
More APIs are becoming available every day that you can start integrating into your applications in creative ways, but you shouldn’t feel limited by them, either. We know an experience doesn’t have to be the same in every browser, but I would argue it doesn’t necessarily have to be the same in every room of your house, either. As connections, battery life, and other situations change, so can a user’s experience with your site.
Dealing with chaos in the browser is a full-time job for most of us—it’s why we have quality assurance testing. The key to advancing the web and creating a successful experience is to embrace this chaos rather than spend your time fighting against it. Using the madness to your advantage and constantly adding to your UX tool belt will ensure we all help move web experience in the right direction.
17 Reader Comments
Great read, thank you. This information was new to me, the problems this has the potential of solving are not. Brilliant stuff.
Your discussion of proximity sensing is incorrect because proximity is not about NFC; rather, it’s about sensing physical proximity. One of the main use cases for it is if the user lifts the phone close to their face, the device can detect that and do things based on this fact. NFC isn’t mentioned anywhere in the W3C spec.
Great article, thanks!
A few comments:
* The responsive images effort does not directly relate to bandwidth. It does not rely on the assumption that “small screen == narrow bandwidth”. Basically, we want to send out smaller images to smaller screens because that is all that is necessary.
* The “Network information API” is in great flux. It is not really implemented anywhere, and the “bandwidth measurement” part is not actually defined, and it’s not clear if, when & how browser vendors will actually implement it. Developers should not rely on that API, at least not in the near future.
* Many people, me included, would argue that due to its variability, available bandwidth is not something designers can or should take into account in their designs, for the simple fact that you cannot predict the future. Measurements of past bandwidth, even if they were implemented in browsers, would not provide any guarantee regarding the bandwidth a few hundred milliseconds from now.
Sounds interesting for sophisticated web apps. But as a designer of simple sites, I’d like to present the user-agent with high and low assets, and let it decide which to use and how.
Very interesting article. I had no idea there were APIs for all those different indicators. These will probably be standard for web development in a couple of years, seeing how fast the technology is evolving.
Great read! Is there any information on when this is going to be available to us, and which browser will be the first to implement these APIs?
Thank you Tim. The Device API is definitely the future and will have a profound impact in a relatively short period of time.
Thanks for the feedback all!
@Marcel there’s some information on the Boot2Gecko wiki: https://wiki.mozilla.org/WebAPI. They call it the “WebAPI,” but it’s pretty much the same thing. Items that have (W3C) next to them have made it over to the Device API.
Is the .killer-logo big heading meant to be half underneath the main navigation, or is that a problem with the CSS for the new layout?
I’m using Ubuntu Linux Precise Pangolin and Chrome Version 24.0.1312.57
Without discussing the merits of the overall idea, I think it’s worth correcting a number of factual inaccuracies in this article:
It’s not clear to me what “the Device API” is supposed to be. The link in the article points to the Device APIs group (aka DAP). Each of the various APIs mentioned in the article are intended to be a device API, and there is no claim made by DAP that it is covering all of the device APIs in the world (it isn’t) or that there is one big Device API To Rule Them All.
The work on device APIs started well before 2011. The group started in 2009, and it was built atop previous input. Mozilla wasn’t contributing at the time, though they did later bring some parts of the B2G APIs to the table. To be fair, though, most of B2G concerns System APIs (SysApps). It’s a different set of functionality, not meant for the browser but rather for installed applications, and handled in another group.
I would be very wary about relying on the Network Information API. I am not aware of anyone being able to implement it usefully (or describe a developer-useful alternate approach). As a result it has been close to being culled for a while, only saved by the fact that someone regularly shows up claiming that they have a new idea.
Also note that using the Battery API to estimate how long the battery might last (as is suggested) is very likely to be wrong (battery consumption patterns are rarely if ever linear).
It’s hardly a limitation of the ambient light sensor API that it needs an ambient light sensor to work… I’m also not sure why it would be weird that it would use lumen values rather than a CSS size unit. It’s also not imprecise because it’s cutting edge, but because there’s huge variance in sensors.
Proximity has nothing whatsoever to do with NFC. As its name indicates, NFC is about communication, using a field, that’s near. The point of the Proximity Sensor API is to know if there’s something close. The typical use case is knowing if your phone is close to your face so you don’t hang up with your ear. The vast majority of smartphones support this. There will be an NFC API, but it won’t bear any relationship whatsoever to this.
I see a lot of dissension in response to this post. I am tempted to be equally dismissive; however, are there practical demonstrations of this capability that you could provide?
Things like adjusting background color according to lumens I find quite off-putting. It strikes me that the relative perceived brightness of a screen should be something managed by the user’s preferences and manual intervention, not my assumptions.
Also, @Dominic Wormald, I see that .killer-logo overlap too on Mac Chrome and, regardless of intent, it’s weirding me out.
Good article Tim, appreciated.
@Elizabeth There is no doubt that a developer can do the wrong thing in adapting a design based on lumens. Like any functionality, it can be used wrongly. But it can also be used properly in order to provide enhancements that the underlying platform cannot because it can at best guess at the best changes.
This is really a nice article. Thank you very much.
One thing we need to remember: real-life data is very noisy. You almost always have to do some kind of post-processing to remove outliers and identify central tendencies. Only then can we start making design decisions based on them. I wonder whether this kind of post-processing is already done by devices today, or whether it’s up to developers to code it up themselves.
Most phones have a proximity sensor, actually. It is used to detect whether the phone is held to the ear or not. The proximity sensor is therefore usually located near the front speaker.
The question is, does the Proximity API use this when available, or does it only want to use NFC?
Robin Berjon makes a good point about bandwidth. I was thinking of using this in the future for adaptive images (given that browser vendors don’t come up with a proper solution), but it makes very little sense when you think about it.
When on a 3G-like connection and on the move, the available bandwidth may vary quickly enough to make its status at page-load too inaccurate to rely on. This goes for public Wi-Fi as well, because public Wi-Fi may well be hooked up to a 3G-connection: in trains and planes this is certainly the case, as is for Wi-Fi tethering. In fact, the bandwidth of any public (or otherwise heavily shared) connection is inherently unstable and unreliable, be it a wired or wireless connection.
And with all that taken into consideration, it’s saying nothing about latency yet. One might have heaps of bandwidth at horrible latency. Satellite connections may provide latencies of a few seconds even, but so would 2G internet connections. A fiberoptics user may be doing some heavy P2P traffic, increasing the latency while keeping most of the bandwidth.