When I was a younger, less experienced designer, I was uncomfortable showing work that wasn’t “done.” I thought that design was something I should show only in its glorious, final state.
Many designers and design processes suffer from the same isolation problem: we don’t show our work to our users until the very end—when they’re stuck with it. We treat research as a box to check off, rather than an asset to use as a design evolves. We rely on personal opinions to make decisions instead of validating them with the people using the product.
However, the more we share our work in progress, using a variety of testing methods at every stage of design, the more input we can get from the people the design is for. Multiple research methods ensure that we receive diverse feedback, and more diverse feedback helps our products better meet our users’ needs.
Learning to share
When I first came to Etsy, I was surprised to learn how much their design process focuses on iteration and testing. Designers show Etsy’s buyers and sellers early versions of new designs to get direct feedback.
This doesn’t just happen once, either—research is integrated throughout the entire design process, from small conceptual tests to working prototypes to fully functioning features. At each point in the design process, we ask specific questions to help us move forward to the next phase of work.
To answer these questions, we use different research and testing methods, tailored to the type of feedback we’re looking for. Each type of research has strengths and weaknesses. If we limited ourselves to one type of research, like usability testing, we wouldn’t catch everything. Gaps in the feedback would lead us to build something that didn’t fully align with what our users need.
Here is how we use research at Etsy at each point in the design process to solicit different types of feedback (and the surprises we encounter along the way!).
At the beginning of a project, we define what we’re going to build. To formulate a project goal, we start by looking at high-level business goals, research from past user testing, and data on current Etsy usage. The direction we pick in this phase dictates what the next few months of work will look like, so user validation is particularly important here.
To help choose our path, we create low-fidelity mockups and do concept testing with Etsy users who fit the target audience for the project. Rather than invest a lot of engineering time up front on building something we might not use, we often test clickable prototypes, which, while clunky, are cheap to create and involve zero engineering time. They’re mockups of an unbuilt interface, which also means we’re able to throw in blue sky features. Focusing on the best-case scenario of the feature lets us test whether sellers grasp the concept—realism and constraints can come later.
My team at Etsy, Shop Management, recently tested concepts for a promotional tool for sellers. We had a rough idea of what we wanted to build, but it was important to see if sellers understood the feature and its benefits before we went too far. We recruited sellers to remotely walk through our prototype, asking them:
- “What’s the purpose of this screen?”
- “Tell me about what you just did there.”
- “What’s the value of this tool?”
- “How would you use a tool like this for your shop?”
- “How would you describe this tool to a friend?”
Even though the format of these sessions is similar to how usability testing is conducted, we’re not focused on usability feedback at this early stage; we’re more concerned with solidifying a direction. There might be gaping holes or implausible scenarios; in one version of our clickable prototype, I forgot to account for the iOS keyboard! But mixing up details like that is okay when the questions we’re asking are broad. Instead of focusing on the interface, we’re asking participants about the idea. Validating our direction as early as possible sets us up for success down the road.
Once the concept has been validated, we dive into designing the new feature. Design constraints come into play, and we’re now tasked with solving some of the details that we punted on in earlier conceptual versions. As more constraints are applied and we get deeper into uncovering the specifics of what we’re creating, the research becomes more focused on the interface itself. This is where usability testing becomes our best friend.
Last year, the Shop Management team redesigned the core of Etsy’s seller tools, the Listings Manager. The redesign was much needed; the interface was showing its age, and so was the technology behind it. Many useful new features had been added to the Listings Manager over the years, but they were added as their own pages instead of being integrated into existing workflows. Nothing was optimized for mobile, despite increased traffic to Etsy from mobile devices. We needed to rearchitect the Listings Manager with sellers’ workflows and technology best practices in mind.
Redesigning such an integral part of a seller’s toolset was going to be tricky, though. Sellers are very sensitive to change because these tools are what they use every single day to run their businesses. So we conducted usability testing every two weeks to make sure our design decisions matched the way sellers wanted to work. And we used task-oriented questions during these sessions, like asking sellers to:
- perform an action that existed in the old design, like editing a listing
- find a familiar feature, like “quick edit,” in a new location
- go through a common flow, like finding a listing that had expired and renewing it
- use new design paradigms, like a gear dropdown for performing actions
We tried a few clever ways of redesigning the Listings Manager interface; for example, we added a sidebar for editing listings, so sellers wouldn’t have to go to an entirely new page. But the sidebar totally bombed—it required way too much scrolling and wasn’t as useful as we anticipated. It was painful for us to watch sellers struggle to use our prototype. Thanks to usability testing, we immediately ditched the sidebar and moved on to a more practical interface.
After weeks of iteration, we have a solidified design that works end to end. The product has bugs and missing features, needs a lot of polish, and is months away from launch—but this is when we want Etsy users to start kicking the tires and using it as a part of their normal workflows.
Usability testing is great for feedback on specific tasks and first impressions, but because it’s a simulated environment, it can’t provide the answer to every question. We might still be trying to validate whether we successfully hit the original goals we set for our project, and for that we need the feedback of people who use the product regularly.
To catch these types of issues and to vet new features on Etsy, we created beta tester groups that opt specific buyers and sellers in to early versions of our features. We call these “prototype groups,” and each one has a forum in which members can post feedback and talk to one another and to our product teams. The scale of prototype groups can range anywhere from a few dozen people to thousands; our largest prototype group to date has been 10,000 Etsy users. Having so many people use a pre-release version of a feature means that we’re able to catch edge cases, weird bugs, and gnarly user experience issues that bubble up before we release it to everyone.
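Mechanically, a prototype group can be as simple as a per-user feature flag: beta testers see the pre-release version, everyone else keeps the stable one. Here’s a minimal hypothetical sketch of that idea (the names and structure are illustrative, not Etsy’s actual system):

```python
# Hypothetical sketch of a prototype-group feature flag.
# Group names and user IDs are illustrative only.

PROTOTYPE_GROUPS = {
    "listings_manager_redesign": {101, 102, 103},  # opted-in user IDs
}

def sees_prototype(user_id, feature):
    """Return True if this user opted in to the pre-release feature."""
    return user_id in PROTOTYPE_GROUPS.get(feature, set())

def listings_manager_version(user_id):
    """Serve the new interface only to beta testers; everyone else
    keeps the stable version until full release."""
    if sees_prototype(user_id, "listings_manager_redesign"):
        return "redesign"
    return "classic"
```

Keeping the check in one place also makes it easy to widen the group gradually, or to roll everyone back to the stable version if the beta surfaces a serious problem.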
When we released the Listings Manager redesign to a prototype group, we wondered things like:
- If we repeatedly got feedback on something in usability testing, were we able to successfully fix it, or was it still an issue?
- Is it faster to edit listings in the new Listings Manager? If not, what are the biggest points of friction?
- What tasks are sellers trying to perform when they switch back to the old version of the Listings Manager? Why did they switch?
We added some new image editing tools for sellers’ listing images, but noticed that the tool icons were crowding the images interface. Our solution for this was to roll the actions up into a small dropdown. When we put these updates through usability testing, nothing noteworthy came out about the new interface, so we moved forward with it.
After sellers in the prototype group started using it, however, we saw consistent negative comments in the forums about the new dropdown. To create a new listing, sellers tend to copy an existing listing as a template, then edit attributes such as the photos and title. In the old design, editing photos was a straightforward flow, but the new dropdown added in two clicks, more reading, and extra mouse movement. We had created more friction for sellers adding new listings.
The prototype group allowed us to catch issues like this because sellers were putting our product through realistic scenarios. We spent the next six months fixing problems that came directly out of the prototype group. Having a direct line of communication with our beta-testing sellers helped us find patterns, identify problems, and vet solutions before our full release.
When a feature is fully functional and any design kinks we’ve uncovered have been smoothed over, we’ll often release it as an experiment: we’ll direct a portion of traffic to a different version of a page or a flow, then compare metrics like conversion or bounces. What people say anecdotally doesn’t always line up with what they actually do; data helps us understand how people are actually using a new interface.
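Directing a portion of traffic to each version is commonly done with deterministic hash-based bucketing, so a given visitor always lands in the same variant across visits. As a rough sketch of that technique (not a description of Etsy’s actual experiment infrastructure):

```python
import hashlib

def bucket(user_id, experiment, variant_weights):
    """Deterministically assign a user to a variant.

    Hashing the experiment name plus the user ID keeps the
    assignment stable across visits without storing any state.
    variant_weights is a list of (name, fraction) pairs summing to 1.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in variant_weights:
        cumulative += weight
        if point <= cumulative:
            return variant
    return variant_weights[-1][0]  # guard against float rounding

# Example: send 10% of traffic to the new flow, 90% to the control.
variant = bucket(12345, "onboarding_v2",
                 [("new_flow", 0.10), ("control", 0.90)])
```

Because assignment depends only on the hash, no lookup table is needed, and ramping the experiment up is just a matter of changing the weights.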
Etsy’s seller onboarding process is a great place to run experiments because new sellers don’t know how onboarding will work. We’re also able to analyze the long-term impact that onboarding has on a shop’s success. For example, our team noticed that it was taking sellers up to a month to complete the onboarding process and open up their shops, so we began a redesign project to decrease the amount of time it takes to open a shop on Etsy.
At the outset of the project, we defined a number of metrics to determine the success of the redesign, including:
- the time it took for a seller to go through the onboarding process
- the percentage of shops that completed onboarding
- key shop success metrics, such as the number of listings in a shop
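For a metric like the percentage of shops completing onboarding, deciding whether the difference between variants is real comes down to a standard two-proportion comparison. A minimal stdlib-only sketch, with purely illustrative numbers (not Etsy’s data):

```python
import math

def completion_rate_z(completed_a, total_a, completed_b, total_b):
    """Two-proportion z-score comparing completion rates.

    A large |z| (roughly above 2) suggests the difference between
    the two variants is unlikely to be random noise.
    """
    p_a = completed_a / total_a
    p_b = completed_b / total_b
    if completed_a == completed_b and total_a == total_b:
        return 0.0
    pooled = (completed_a + completed_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se

# Illustrative numbers: old onboarding vs. redesigned onboarding.
z = completion_rate_z(400, 1000,   # old: 40% of shops completed
                      460, 1000)   # new: 46% of shops completed
# A positive z here means the redesign completes more often.
```

The same comparison can be run on each metric in the list above, which is exactly how an experiment can look like a win on one metric (completion) while losing on another (listings per shop).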
We designed another version of onboarding that was solely about getting the basics of a shop—shop name, items, payments—in place. Anything optional, like a decorative banner, return policy, or an About page, could be easily added after the shop opened. We were thrilled with the new interface and simple design, but we wanted proof that this was the right direction.
It’s a good thing we tested the new onboarding against the old. When the results of the experiment came back, we saw that more people were completing the new onboarding (good!), but the number of listings per shop was down (bad!). We had over-optimized and made it too easy for new sellers to go through onboarding. The data from the experiment uncovered design flaws that we never would have found otherwise.
Share early and often
Becoming comfortable with showing unfinished design work isn’t easy. As designers, we’re tempted to exert control over our work. But by waiting until the very end, we’re assuming that our decisions are right. There’s so much that we can learn early on from the people who use our products. Ultimately, we want to be confident going into a launch that our users won’t feel surprised, confused, or ignored.
Successful product launches are a direct result of continual research throughout a project. Using a variety of methods to get feedback at different points in the process will help surface a range of issues. Addressing these issues will bring your product that much closer to meeting your users’ needs. Don’t wait until design decisions are solidified to ask what your users think. If you ask questions at the right times along the way, you’ll be surprised by what you learn.