
Discovery on a Budget: Part II

Welcome to the second installment of the “Discovery on a Budget” series, in which we explore how to conduct effective discovery research when there is no existing data to comb through, no stakeholders to interview, and no slush fund to draw upon. In part 1 of this series, we discussed how it is helpful to articulate what you know (and what you assume) in the form of a problem hypothesis. We also covered strategies for conducting one of the most affordable and effective research methods: user interviews. In part 2 we will discuss when it’s beneficial to introduce a second, competing problem hypothesis to test against the first. We will also discuss the benefits of launching a “fake-door” and how to conduct an A/B test when you have little to no traffic.


A quick recap

In part 1 I conducted the first round of discovery research for my budget-conscious (and fictitious!) startup, Candor Network. The original goal for Candor Network was to provide a non-addictive social media platform that users would pay for directly. I articulated that goal in the form of a problem hypothesis:

Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

Also in part 1, I took extra care to document the assumptions that went into creating this hypothesis. They were:

  • Users feel that social media sites like Facebook are addictive.
  • Users don’t like to be addicted to social media.
  • Users would be willing to pay for a non-addictive Facebook replacement.

For the first round of research, I chose to conduct user interviews because it is a research method that is adaptable, effective, and—above all—affordable. I recruited participants from Facebook, taking care to document the bias of using a convenience sampling method. I carefully crafted my interview protocol, and used a number of strategies to keep my participants talking. Now it is time to review the data and analyze the results.

Analyze the data

When we conduct discovery research, we look for data that can help us either affirm or reject the assumptions we made in our problem hypothesis. Regardless of what research method you choose, it’s critical that you set aside the time to objectively review and analyze the results.

In practice, analyzing interview data involves creating transcriptions of the interviews and then reading them many, many times. Each time you read through the transcripts, you highlight and label sentences or sections that seem relevant or important to your research question. You can use products like NVivo, HyperRESEARCH, or any other qualitative analysis tool to help facilitate this process. Or, if you are on a pretty strict budget, you can simply use Google Sheets to keep track of relevant sections in one column and labels in another.

Screenshot of a spreadsheet with quotes about Facebook usage
Screenshot of my interview analysis in Google Sheets
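If you go the spreadsheet route, it only takes a few lines of code to turn that sheet into the “x out of eleven” tallies used below. Here’s a minimal sketch in Python, assuming the sheet is exported as a CSV with hypothetical participant, quote, and label columns:

```python
# Tally how many quotes and how many distinct participants map to each label.
# The file name and column names are assumptions; adjust them to match your sheet.
import csv
from collections import Counter

quote_counts = Counter()
participants_per_label = {}

with open("interview-analysis.csv", newline="") as f:
    for row in csv.DictReader(f):
        label = row["label"].strip().lower()
        quote_counts[label] += 1
        participants_per_label.setdefault(label, set()).add(row["participant"])

for label, count in quote_counts.most_common():
    people = len(participants_per_label[label])
    print(f"{label}: {count} quotes from {people} participants")
```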

For my project, I specifically looked for data that would show whether my participants felt Facebook was addictive, whether they considered that a bad thing, and whether they’d be willing to pay for an alternative. Here’s how that analysis played out:

Assumption 1: Users feel that social media sites like Facebook are addictive

Facebook has a weird, hypnotizing effect on my brain. I keep scrolling and scrolling and then I like wake up and think, ‘where have I been? why am I spending my time on this?’

interview participant

Overwhelmingly, my data affirms this assumption. All of my participants (eleven out of eleven) mentioned Facebook being addictive in some way.

Assumption 2: Users don’t like to be addicted to social media

I know a lot of people who spend a lot of time on Facebook, but I think I manage it pretty well.

interview participant

This assumption turned out to be a little trickier to affirm or reject. While all of my participants described Facebook as addictive, many of them (eight out of eleven) expressed that “it wasn’t so bad” or that they felt like they were less addicted than the average Facebook user.

Assumption 3: Users would be willing to pay for a non-addictive Facebook replacement

No, I wouldn’t pay for that. I mean, why would I pay for something I don’t think I should use so much anyway?

interview participant

Unfortunately for my project, I can’t readily affirm this assumption. Four participants told me they would flat-out never pay for a social media service, four participants said they would be interested in trying a paid-for “non-addictive Facebook,” and three participants said they would only try it if it became really popular and everyone else was using it.

One unexpected result: “It’s super creepy”

I don’t like that they are really targeting me with those ads. It’s super creepy.

interview participant

In reviewing the interview transcripts, I came across one unexpected theme. More than 80% of the interviewees (nine out of eleven) said they found Facebook “creepy” because of the targeted advertising and the collection of personal data. Also, most of those participants (seven out of nine) went on to say that they would pay for a “non-creepy Facebook.” This is particularly remarkable because I never asked the participants how they felt about targeted advertising or the use of personal data. It always came up in the conversation organically.

Whenever we start a new project, our initial ideas revolve around our own personal experiences and discomforts. I started Candor Network because I personally feel that social media is designed to be addicting, and that this is a major flaw with many of the most popular services. However, while I can affirm my first assumption, I had unclear results on the second and have to consider rejecting the third. Also, I encountered a new user experience that I previously didn’t think of or account for: that the way social media tools collect and use personal data for advertising can be disconcerting and “creepy.” As is so often the case, the data analysis showed that there are a variety of other experiences, expectations, and needs that must be accounted for if the project is to be successful.

Refining the hypothesis

Graphic showing a process with Create Hypothesis, leading to Test, leading to Analyze, leading back to Create Hypothesis
Discovery research cycle: Create Hypothesis, Test, Analyze, and repeat

Each time we go through the discovery research process, we start with a hypothesis, test it by gathering data, analyze the data, and arrive at a new understanding of the problem. In theory, it may be possible to take one trip through the cycle and either completely affirm or completely reject our hypothesis and assumptions. However, like with Candor Network, it is more often the case that we get a mixture of results: some assumptions can be affirmed while others are rejected, and some completely new insights come to light.

One option is to continue working with a single hypothesis, and simply refine it to account for the results of each round of research. This is especially helpful when the research mostly affirms your assumptions, but there is additional context and nuance you need to account for. However, if you find that your research results are pulling you in a new direction entirely, it can be useful to create a second, competing hypothesis.

In my example, the interview research brought to light a new concern about social media I previously hadn’t considered: the “creepy” collection of personal data. I am left wondering, Would potential customers be more attracted to the idea of a social media platform built to prevent addiction, or one built for data privacy? To answer this question, I articulated a new, competing hypothesis:

Because their business model relies on advertising, social media tools like Facebook are designed to gather lots of behavior data. They then utilize this behavior data to create super-targeted ads. Users are unhappy with this, and would rather use a social media tool that does not rely on the commodification of their data to make money. They would be willing to pay for a social media service that did not track their use and behavior.

I now have two hypotheses to test against one another: one focused on social media addiction, the other focused on behavior tracking and data collection.

At this point, it would be perfectly acceptable to conduct another round of interviews. We would need to change our interview protocol and find more participants, but it would still be an effective (and cheap) method to use. However, for this article I wanted to introduce a new method for you to consider, and to illustrate that a technique like A/B testing is not just for the “big guys” on the web. So I chose to conduct an A/B test utilizing two “fake-doors.”

A low-cost comparative test: fake-door A/B testing

A “fake-door” test is simply a marketing page, ad, button, or other asset that promotes a product that has yet to be made. Fake-door testing (or “ghetto testing”) is Zynga’s go-to method for testing ideas. They create a five-word summary of any new game they are considering, make a few ads, and put it up on various high-trafficked websites. Data is then collected to track how often users click on each of the fake-door “probes,” and only those games that attract a certain number of “conversions” on the fake-door are built.

One of the many benefits of conducting a fake-door test is that it allows you to measure interest in a product before you begin to develop it. This makes it a great method for low-budget projects, because it can help you decide whether a project is worth investing in before you spend anything.

However, for my project, I wasn’t just interested in measuring potential customer interest in a single product idea. I wanted to continue evaluating my original hypothesis on non-addictive social media as well as start investigating the second hypothesis on a social media platform that doesn’t record behavior data. Specifically, I wanted to see which theoretical social media platform is more attractive. So I created two fake-door landing pages—one for each hypothesis—and used Google Optimize to conduct an A/B test.

Two screenshots of a Candor Network landing page with different copy
Versions A (right) and B (left) of the Candor Network landing page

Version A of the Candor Network landing page advertises the product I originally envisioned and described in my first problem hypothesis. It advertises a social network “built with mental health in mind.” Version B reflects the second problem hypothesis and my interview participants’ concerns around the “creepy” commodification of user data. It advertises a social network that “doesn’t track, use, solicit, or sell your data.” In all other respects, the landing pages are identical, and both will receive 50% of the traffic.
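Google Optimize handles the traffic split and the conversion tracking for me, but there is nothing magical happening under the hood. If you’d rather not add another third-party script, here’s a rough sketch of how you could roll the same 50/50 fake-door split yourself; the Flask routes, template names, and cookie are all hypothetical:

```python
# A hand-rolled 50/50 split: assign each visitor a variant, remember it in a
# cookie, and log a conversion when they click the fake-door call to action.
import random
from flask import Flask, request, make_response, render_template

app = Flask(__name__)

@app.route("/")
def landing():
    # Reuse the visitor's variant if they have one; otherwise flip a coin.
    variant = request.cookies.get("variant") or random.choice(["A", "B"])
    page = render_template(f"landing_{variant.lower()}.html")  # assumed templates
    resp = make_response(page)
    resp.set_cookie("variant", variant, max_age=60 * 60 * 24 * 42)  # ~6 weeks
    app.logger.info("visit variant=%s", variant)
    return resp

@app.route("/signup", methods=["POST"])
def signup():
    # Clicking the fake-door button counts as a conversion for that variant.
    variant = request.cookies.get("variant", "unknown")
    app.logger.info("conversion variant=%s", variant)
    return "Thanks for your interest! Candor Network isn't live yet."
```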

Running an A/B test with little to no site traffic

One of the major caveats when running an A/B test is that you need to have a certain number of people participate to achieve any kind of statistically significant result. This wouldn’t be a problem if we worked at a large company with an existing customer base, as it would be relatively straightforward to find ways to direct some of the existing traffic to the test. If you’re working on a new or low-trafficked site, however, conducting an A/B test can be tricky. Here are a few strategies I recommend:

Tip 1: Make sure your “A” is very different from your “B”

Figuring out how much traffic you need to achieve statistical significance in a quantitative study is an inexact science. If we were conducting a high-stakes experiment at a more established business, we would run multiple rounds of pre-tests to estimate the effect size of the experiment (using a measure like Cohen’s d), and then use a power analysis to calculate how many people need to participate in the actual test. This approach is rigorous and helps avoid sample pollution and sampling bias, but it requires a lot of resources upfront (time, money, and a large pool of potential participants) that we may not have access to.

In general, however, you can use this rule of thumb: the bigger the difference between the variations, the fewer participants you need to see a significant result. In other words, if your A and B are very different from each other, you will need fewer participants.
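To get a feel for why that rule of thumb holds, here’s a back-of-the-envelope calculation using the standard normal-approximation formula for comparing two conversion rates. The baseline and lift numbers are made up purely for illustration:

```python
# Rough sample size per variant for detecting a difference between two
# conversion rates at 5% significance and 80% power. Numbers are illustrative.
from scipy.stats import norm

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided test
    z_beta = norm.ppf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return (z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2

print(round(sample_size_per_variant(0.05, 0.07)))  # small difference: ~2,200 per variant
print(round(sample_size_per_variant(0.05, 0.15)))  # big difference: ~140 per variant
```

The same baseline with a bigger lift needs an order of magnitude fewer visitors, which is exactly why a dramatic difference between A and B is your friend on a low-traffic site.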

Tip 2: Run the test for a longer amount of time

When I worked at Weather Underground, we would always start an A/B test on a Sunday and end it a full week later on the following Sunday. That way we could be sure we captured both weekday and weekend users. Because Weather Underground is a high-trafficked site, this always resulted in having more than enough participants to see a statistically significant result.

If you’re working on a new or low-trafficked site, however, you’ll need to run your test for longer than a week to achieve the number of test participants required. I recommend budgeting enough time so that your study can run a full six weeks. Six weeks will provide enough time to not only capture results from all your usual website traffic, but also any newcomers you can recruit through other means.

Tip 3: Beg and borrow traffic from someone else

I’ve got a pretty low number of followers on social media, so if I tweet or post about Candor Network, only a few people will see it. However, I know a few people and organizations that have a huge number of followers. For example, @alistapart has roughly 148k followers on Twitter, and A List Apart’s publisher, Jeffrey Zeldman (@zeldman), has 358k followers. I have asked them both to share the link for Candor Network with their followers.

A screenshot of a Tweet from Jeffrey Zeldman promoting Meg's experiment
A helpful tweet from @zeldman

Of course, this method of advertising doesn’t cost any money, but it does cost social capital. I’m sure A List Apart and Mr. Zeldman wouldn’t appreciate it if I asked them to tweet things on my behalf on a regular basis. I recommend you use this method sparingly.

Tip 4: Beware! There is always a risk of no results.

Before you create an A/B test for your new product idea, there is one major risk you need to assess: there is a chance that your experiment won’t produce any statistically significant results at all. Even if you use all of the tips I’ve outlined above and manage to get a large number of participants in your test, there is a chance that you won’t be able to “declare a winner.” This isn’t just a risk for companies that have low traffic, it is an inherent risk when running any kind of quantitative study. Sometimes there simply isn’t a clear effect on participant behavior.
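When the test does wrap up, checking whether you can declare a winner is a straightforward two-proportion comparison. Here’s a minimal sketch; the conversion and visitor counts are invented:

```python
# Compare conversion rates for variants A and B with a two-proportion z-test.
# The counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [34, 52]   # fake-door clicks for A and B
visitors = [410, 395]    # landing page visits for A and B

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: the difference is significant; declare a winner.")
else:
    print(f"p = {p_value:.3f}: no significant difference; no winner this round.")
```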

Tune in next time for the last installment

In the third and final installment of the “Discovery on a Budget” series, I’ll describe how I designed the incredibly short survey on the Candor Network landing page and discuss the results of my fake-door A/B test. I will also make another revision to my problem hypothesis and will discuss how to know when you’re ready to leave the discovery process (at least for now) and embark on the next phase of design: ideating over possible solutions.
