Discovery on a Budget: Part III

Sometimes we have the luxury of large budgets and deluxe research facilities, and sometimes we’ve got nothing but a research question and the determination to answer it. Throughout the “Discovery on a Budget” series we have discussed strategies for conducting discovery research with very few resources but lots of creativity. In part 1 we discussed the importance of a clearly defined problem hypothesis and started our affordable research with user interviews. Then, in part 2, we discussed competing hypotheses and “fake-door” A/B testing when you have little to no traffic. Today we’ll conclude the series by considering the pitfalls of the most tempting and seemingly affordable research method of all: surveys. We will also answer the question “when are you done with research and ready to build something?”

A quick recap on Candor Network

Throughout this series I’ve used a budget-conscious, and fictitious, startup called Candor Network as my example. Like most startups, Candor Network started simply as an idea:

I bet end-users would be willing to pay directly for a really good social networking tool. But there are lots of big unknowns behind that idea. What exactly would “really good” mean? What are the critical features? And what would be the central motivation for users to try yet another social networking tool?

To kick off my discovery research, I created a hypothesis based on my own personal experience: that a better social network tool would be one designed with mental health in mind. But after conducting a series of interviews, I realized that people might be more interested in a social network that focused on data privacy as opposed to mental health. I captured this insight in a second, competing hypothesis. Then I launched two corresponding “fake door” landing pages for Candor Network so I could A/B test both ideas.

For the past couple of months I’ve run an A/B test between the two landing pages where half the traffic goes to version A and half to version B. In both versions there is a short, two-question survey. To start our discussion today, we will take a more in-depth look at this seemingly simple survey, and analyze the results of the A/B test.

Surveys: Proceed with caution

Surveys are probably the most used, but least useful, research tool. It is ever so tempting to say, “let’s run a quick survey” when you find yourself wondering about customer desires or user behavior. Modern web-based tools have made surveys incredibly quick, cheap, and simple to run. But as anyone who has ever tried running a “quick survey” can attest, they rarely, if ever, provide the insight you are looking for.

In the words of Erika Hall, surveys are “too easy.” They are too easy to create, too easy to disseminate, and too easy to tally. This inherent ease masks the survey’s biggest flaw as a research method: it is far, far too easy to create biased, useless survey questions. And when you run a survey littered with biased, useless questions, you either (1) realize that your results are not reliable and start all over again, or (2) proceed with the analysis and make decisions based on biased results. If you aren’t careful, a survey can be a complete waste of time, or worse, lead you in the wrong direction entirely.

However, sometimes a survey is the only method at your immediate disposal. You might be targeting a user group that is difficult to reach through other convenience- or “guerrilla”-style means (think of products that revolve around taboo or sensitive topics; it’s awfully hard to spring those conversations on random people you meet in a coffee shop!). Or you might work for a client that is reluctant to help locate research participants in any way beyond sending an email blast with a survey link. Whatever the case may be, there are times when a survey is the only step forward you can take. If you find yourself in that position, keep the following tips in mind.

Tip 1: Try to stick to questions about facts, not opinions

If you were building a website for ordering dog food and supplies, a question like “how many dogs do you own?” could provide key demographic information not available through standard analytics. It’s the sort of question that works great in a short survey. But if you need to ask “why did you decide to adopt a dog in the first place?” then you’re much better off with a user interview.

If you try asking any kind of “why” question in a survey, you will usually end up with a lot of “I don’t know” and otherwise blank responses. This is because people are, in general, not willing to write an essay on why they’ve made a particular choice (such as choosing to adopt a dog) when they’re in the middle of doing something (like ordering pet food). However, when people schedule time for a phone call, they are more than willing to talk about the “whys” behind their decisions. In short, people like to talk about their opinions, but are generally too lazy or busy to write about them. Save the “why” questions for later (and see Tip 5).

Tip 2: Avoid asking about the future

People live in the present and only dream about the future. There are a lot of things outside of our control that affect what we will buy, eat, wear, and do in the future. And sometimes the future selves we imagine are more aspirational than factual. For example, if you were to ask a random group of people how many times they plan to go to the gym next month, you might be (not so) surprised to find that their prediction is significantly higher than the number of visits they actually make. It is much better to ask “how many times did you go to the gym this week?” as an indicator of general gym attendance than to ask about any future plans.

I asked a potentially problematic, future-looking question in the Candor Network landing page survey:

How much would you be willing to pay, per year, for Candor Network?

  • Would not pay anything
  • $1
  • $5
  • $10
  • $15
  • $20
  • $25
  • $30
  • Would pay more

In this question, I’m asking participants to think about how much money they would like to spend in the future on a product that doesn’t exist yet. This question is problematic for a number of reasons, but the main issue is that people, in general, don’t know how they really feel about pricing until the exact moment they are poised to make a purchase. Relying on this question to, say, develop my income projections for an investor pitch would be unwise to say the least. (I’ll discuss what I actually plan to do with the answers to this question in the next tip.)

Tip 3: Know how you are going to analyze responses before you launch the survey

A lot of times, people will create and send out a survey without thinking through what they are going to do with the results once they are in hand. Depending on the length and type of survey, the analysis could take a significant amount of time. Also, if you were hoping to answer some specific questions with the survey data, you’ll want to make sure you’ve thought through how you’ll arrive at those answers. I recommend that while you are drafting survey questions, you also simultaneously draft an analysis plan.

In your analysis plan, think about what you are ultimately trying to learn from each survey question. How will you know when you’ve arrived at the answer? If you are doing an A/B test like I am, what statistical analysis should you run to see if there is a significant difference between the versions? You should also think about what the numbers will look like and what kinds of graphs or tables you will need to build. Ultimately, you should try to visualize what the data will look like before you gather it, and plan accordingly.

For example, when I created the two survey questions on the Candor Network landing pages, I created a short analysis plan for each. Here is what those plans looked like:

Analysis plan for question 1: “How much would you be willing to pay per year for Candor Network?”

Each response will go into one of two buckets:

  • Bucket 1: said they would not pay any money
  • Bucket 2: said they might pay some money

Everyone who answered “Would not pay anything” goes in Bucket 1. Everyone else goes in Bucket 2. I will interpret every response that falls into Bucket 2 as an indicator of general interest (and I’m not going to put any value on the specific answer selected). To see whether any difference in response between landing page A and B is statistically significant (i.e., attributable to more than just chance), I will use a chi-square test. (Side note: There are a number of different statistical tests we could use in this scenario, but I like chi-square because of its simplicity. It is a test that’s easy for non-statisticians to run and understand, and it errs on the conservative side.)
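
If you’d rather script this tallying than do it by hand, here is a minimal sketch of the plan in Python. The sample responses and variable names are hypothetical stand-ins for a real survey export; the test itself comes from scipy, whose chi2_contingency function applies Yates’ continuity correction to 2×2 tables unless you turn it off.

```python
# A sketch of the analysis plan for question 1. The responses below are
# hypothetical placeholders; in practice they would come from the survey
# export for each landing page version.
from scipy.stats import chi2_contingency

WOULD_NOT_PAY = "Would not pay anything"

version_a = [WOULD_NOT_PAY, "$5", WOULD_NOT_PAY, "$1", "$10"]
version_b = ["$20", "$5", WOULD_NOT_PAY, "$25", "$5"]

def bucket_counts(responses):
    """Return [Bucket 1, Bucket 2]: would not pay vs. might pay some money."""
    bucket_1 = sum(1 for r in responses if r == WOULD_NOT_PAY)
    return [bucket_1, len(responses) - bucket_1]

# Build the 2x2 contingency table: one row per version, one column per bucket
table = [bucket_counts(version_a), bucket_counts(version_b)]

# correction=False gives the plain (uncorrected) chi-square statistic
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.4f}, p = {p:.4f}")
```

The same skeleton works for question 2; only the bucketing rule changes, from “would not pay” versus “might pay” to counting “no” versus “yes” responses.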

Analysis plan for question 2: “Would you like to be a beta tester or participate in future research?”

This question has only two possible responses: “yes” and “no.” I will interpret every “yes” response as an indicator of general interest in the idea. Again, a chi-square test will show whether there is a significant difference between the two landing pages.

Tip 4: Never rely on a survey by itself to make important decisions

Surveys are hard to get right, and even when they are well made, the results are often approximations of what you really want to measure. However, if you pair a survey with a series of user interviews or contextual inquiries, you will have a much more thorough set of data to analyze. In the social sciences, this is called triangulation: studying the same phenomenon with multiple methods gives you a richer, more complete picture. This leads me to my final tip …

Tip 5: End every survey with an opportunity to participate in future research

There have been many times in my career when I have launched surveys with only one objective in mind: to gather the contact information of potential study participants. In cases like these, the survey questions themselves are not entirely superfluous, but they are certainly secondary to the main research objective. Shortly after the survey results have been collected, I will select and email a few respondents, inviting them to participate in a user interview or usability study. If I planned on continuing Candor Network, this is absolutely what I would do.

Finally, the results

According to Google Optimize, there were a total of 402 sessions in my experiment. Of those sessions, 222 saw version A and 180 saw version B. Within the experiment, I tracked how often the “submit” button on the survey was clicked, and Google Optimize tells me “no clear leader was found” on that measure of engagement. Roughly an equal number of people from each condition submitted the survey.

Here is a breakdown of the number of sessions and survey responses each condition received:

                   Version A:            Version B:
                   better mental health  privacy and data security  Total
Sessions           222                   180                        402
Survey responses   76                    68                         144

When we look at the actual answers to the survey questions, we start to get some more interesting results.

             Bucket 1:                Bucket 2:
             would not pay any money  might pay some money
Version A    25                       51
Version B    14                       54

Breakdown of question 1, “How much would you be willing to pay per year for Candor Network?”

Plugging these figures into my favorite chi-square calculator, I get the following values: chi-square = 2.7523, p = 0.097113. In general, bigger chi-square values indicate greater differences between the groups. And the p-value is less than 0.1, which suggests that the result is marginally significant (i.e., the result is probably not due to random chance). This gives me a modest indicator that respondents in group B, who saw the “data secure” version of the landing page, are more likely to fall into the “might pay some money” bucket.

And when we look at the breakdown and chi-square calculation of question two, we see similar results.

             No   Yes
Version A    24   52
Version B    13   55

Breakdown of question 2, “Would you like to be a beta tester or participate in future research?”

The chi-square = 2.9189, and p = .087545. Again, I have a modest indicator that respondents in group B are more likely to say yes to participating in future research. (If you’d like to learn more about how to run and interpret chi-square tests, the Interaction Design department at the University of California, San Diego has provided a great video tutorial.)
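
If you’d like to double-check these figures without a web calculator, here is a short Python sketch that reproduces them from the two tables above. As before, correction=False is needed because scipy’s chi2_contingency would otherwise apply Yates’ continuity correction and report smaller values than the ones quoted here.

```python
# Reproducing the reported chi-square values from the two 2x2 tables above.
from scipy.stats import chi2_contingency

tables = {
    "Question 1 (willingness to pay)": [[25, 51],   # Version A: Bucket 1, Bucket 2
                                        [14, 54]],  # Version B
    "Question 2 (future research)":    [[24, 52],   # Version A: no, yes
                                        [13, 55]],  # Version B
}

for label, table in tables.items():
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{label}: chi-square = {chi2:.4f}, p = {p:.4f}")

# Prints:
#   Question 1 (willingness to pay): chi-square = 2.7523, p = 0.0971
#   Question 2 (future research): chi-square = 2.9189, p = 0.0875
```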

How do we know when it’s time to move on?

I wish I could provide you with a formula for calculating the exact moment when the research is done and it’s time to move on to prototyping, but I’m afraid no such formula exists. There is no definitive way to determine how much research is enough. Every round of research teaches you something new, but you are always left with more questions. As Albert Einstein said, “the more I learn, the more I realize how much I don’t know.”

However, with experience you come to recognize certain hallmarks that indicate it’s time to move on. Erika Hall, in her book Just Enough Research, described it as feeling a “satisfying click.” She says, “[O]ne way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on a solution.” (Just Enough Research, p. 36.)

When it comes to building a product on a budget, you may also want to consider that research is relatively cheap compared to the cost of design and development. The rule I tend to follow is this: continue conducting discovery research until the questions you really want answered can only be answered by putting something in front of users. That is, wait to build something until you absolutely have to. Learn as much as you can about your target market and user base until the only way forward is to put some sketches on paper.

With Candor Network, I’m not quite there yet. There is still plenty of runway left in the research cycle. Now that I know that data privacy is a more motivating reason to consider paying for a social networking tool, I need to work out what other features will be essential. In the next round of research, I could do think-aloud studies and ask participants to give me a tour of their Facebook and other social media pages. Or I could continue with more interviews, but recruit from a different source and reach a broader demographic of participants. Regardless of the exact path I take from here, the key is to focus on what the requirements would be for the ultra-private, data-secure social network that users would value.

A few parting words

Discovery research helps us learn more about the users we want to help and the problems they need a solution for. It doesn’t have to be expensive either, and it definitely isn’t something that should be omitted from the development cycle. By starting with a problem hypothesis and conducting multiple rounds of research, we can ultimately save time and money. We can move from gut instincts and personal experiences to a tested hypothesis. And when it comes time to launch, we’ll know it’s from a solid foundation of research-backed understanding.

