Inside Your Users' Minds: The Cultural Probe
Issue № 234

Theoretically, usability testing is a great way of finding out what is wrong with the products and services we design. We sit the users down in the lab and ask them to perform certain tasks, to “tell us what you think—give voice to your stream of consciousness.” And on the whole, it works.

I’m no mind reader (apart from when I am playing poker) but sometimes I do get the impression that users tend to say one thing when they mean another. However, they are not to blame—context is. In the lab, you shine the lamp into their eyes and ask lots of questions until you reach the finale:

“Would you use this site/service/product/software?” you say.

“Oh yes,” they reply.

You smile at them, all pink and happy. Some of the nice users even add a flourish.

“I will look at it this evening when I get home.”

And the real charmers will throw in a knee-trembling claim:

“I will tell my girlfriend/husband/dog, she/he has been looking for exactly this sort of thing.”

They pocket their cash, say their goodbyes, and head back out into the world. And the minute they step out the door, real life intrudes on their thoughts. Should they’ve peas or beans for tea? Catch the bus or the tube? They’ve stopped thinking about you and your site/service/product/software, even before they turn the corner. By the time they get home, they’ve forgotten all about you, your lab, and their promises—even when sitting next to their girlfriend/husband/dog. They don’t think about your site/service/product/software. They watch Ugly Betty instead. They might, in the worst-case scenario, never think about it or you again.

This is, in part, due to the exam-style conditions users are under. The desire to perform well and the need to please can lead to answers that simply aren’t true—especially when there is a cash incentive. We can’t help being nice to people who give us money. Money aside, the artificiality of getting users to step through a series of tasks in a laboratory can lead to behavior that is different from how they would behave if they were in their natural “habitat” doing the same tasks of their own accord. Therefore, wouldn’t it be better if we could probe inside their minds and study user behavior and motivations in a more natural context?

Aha! Enter ethnography.

Ethnography: stalking your user (legally)

Traditionally, ethnography is performed by observing users often in the workplace, sometimes over a long period of time, in order to build up a pattern of user behavior—and it can be a tedious process.

An ethnographer and I once stood on the observation deck of a steel-rolling mill watching the mill operator put in his mid-morning bacon butty order over the intercom. “Do you want ketchup?” came the reply. The ethnographer turned to me and said, “My job is to ask, ‘What is ketchup?’” I laughed out loud in an unethnographic manner, but he had a point. It was a case of, as my old boss used to say, “I don’t know what I don’t know.”

At the end of a field study, ethnographers might tell you what users do (eat tomato sauce in the workplace) but observation alone will not tell you how they feel (they hate the level of automation, but love ketchup). This can be problematic because feelings and satisfaction are high on the usability consultant’s list of what is important and needs to be measured. Also, observation alone does not tell you what is really important to users and what is mundane. Probing inside users’ heads does.

Cultural probes are a “quick and dirty” way of looking into users’ thoughts. They allow you to capture what types of knowledge and aspects of users’ jobs are important as well as how they feel about them. Probes go beyond classical user study techniques which focus on either what people say (questionnaire and interviews) or what they do (observation studies).

What’s a cultural probe?

One of the simplest cultural probes is a diary: a pocket-sized notebook in which, for a week or so, users jot down specifics about when, how, and why they interact with a website or service. Online blogs and photo diaries on Flickr work just as well—with the added advantage that you, the usability consultant, can see what is happening to users in real time and you don’t have to spend a lot of time afterwards converting the information you get into digital form.

Alternatively, giving users a dictaphone to talk into instead of writing/typing notes can allow you to capture all types of potentially useful information and head off the excuse that the user didn’t have a pen handy or the dog was sick on the diary (oh yes, it can be like asking for homework).

Typical questions in a cultural probe are based on the basic interrogatives: what, when, where, why, who, and how. And just as important as “How are users interacting?” is “How are users feeling?” Ratings can be useful for feelings so that interactions can be measured by emotion. Knowing why or why not users interact with something may help designers tune the final version. Also, users often use products and software in ways designers didn’t foresee them being used, and this becomes more obvious outside the laboratory when users have space to record what they do.

A few tips to keep in mind when you launch a cultural probe:

  1. Open-ended questions are a great way of encouraging users to write extra information down. Questions such as “What would you do differently in this type of situation?” uncover all sorts of thoughts that may lead to new solutions. Identifying whether an event or situation came up unexpectedly or whether it was triggered by something else (“it was my cat’s birthday”) is useful too.
  2. Giving users the choice of how they record their thoughts and feelings—text, photos, and drawings—is a good strategy so that they can decide on the best way of communicating for them and don’t feel hampered or self-conscious.
  3. The diary in whatever form can be as small or large as desired. However, size does play a part in how it is perceived by users. In paper form, space for 40 entries can be overwhelming whereas 10 entries might encourage them to complete the book. The digital equivalent sends out less of a psychological message as users have as little or as much space as they need.

Most users are shy to begin with but once they are alone with their thoughts and a platform on which to express themselves many are very forthcoming. They record all sorts of issues they simply would not have been prompted to think about when completing a questionnaire or performing a task in the lab, conscious of the two-way mirror. Diaries and blogs introduce a sense of intimacy that encourages users to tell you what is truly on their minds.

When to probe

Using probes during the initial stages of a new project can help you generate design solutions that answer users’ needs. During one such study I handed out diaries to ten users and asked them to describe incidents, over the ten days that followed, when they felt that their mobile phones had let them down. I asked them to describe a solution—even a magical one—to their situation which would guarantee them a successful outcome to the problems they had. The users sketched out all sorts of solutions: a stylus to take notes on their mobile during a call, a mobile which could text a fax, a mobile which could open Word documents or texts whilst in the middle of a phone call.

By understanding exactly how testers used their mobile phones on a daily basis—and specifically, how they used them when they needed to respond quickly to someone else—it was easier for me to see a general trend. Users wanted their mobiles to be more like miniature networked computers, not just telephones.

Analyzing the results: emergent behavior

We may be individuals, but we are all human, and it follows that we have similar sorts of concerns and irritations. Even if your user group seems to contain the most diverse bunch of people you have ever seen, common themes will emerge. They always do.

By analyzing the results of your cultural probe, you can build up a pattern of how users behave: what they love and hate, what motivates them to do what they do and why. Solutions based on this knowledge can help you to give users what they need, rather than what they say they want. Results can also help identify unrecognized needs and invent new products—and create happy users who feel as if you can read their minds.

About the Author

Ruth Stalker-Firth

Ruth Stalker-Firth has been designing and implementing software for users in the UK and Switzerland for over 12 years. She has lectured on human-computer interaction at Lancaster and Westminster Universities, UK. She lives and works in London.

35 Reader Comments

  1. I must confess, at uni I found HCI stuff incredibly tedious, yet now that it’s of use to my work, my interest is sparked. A nice, light midweek read. Thanks

  2. Since the question you ask already influences the user I think it is better not to ask at all. Instead you should monitor the user while using your product/website/whatever. You can see so much when just analyzing the real world usage of your product.

  3. It is very important to get feedback when users are testing a new product. By monitoring you get only results for the usability, but you will never know what the user is thinking about the product, and this could be something completely different.

  4. The discussion above shows that the article did not make clear whether it’s about market research or “only” about usability. If you want to figure out what new products to launch of course you have to ask questions. But if the product is really new and the person you ask does not know such a product yet, the answer might not be accurate. As the author said the person is influenced just by asking this question in a specific environment. This is ok – as long as you are aware of that and maybe ask the same question in different environments.

  7. The article starts with ‘Theoretically, usability testing…’ so for me it is about usability and not market research. But as humans, we interpret things in different ways based on our experiences thus far – user or designer/author or reader – so something which is clear to me might not be to someone else (leaving aside the fact that we ‘read’ differently on screen). And this carries over into our work.

    Analyzing user behaviour without talking to users (either shadowing/videoing them or electronically: click capture/web metrics/eye tracking/etc) can be a nice way of working but you need to be careful that you don’t introduce your own interpretative bias. There has been much research into ethnomethodologically informed ethnography which tries to avoid introducing bias when interpreting the results of a study which is normally done by following a prescribed method. We are all biased so why not ask your user about his biases?

    The best thing about usability testing is that clients can hear a user saying something which is obvious to us but not to them and which they might not fully understand if you just told them ‘how it is’. The best bit about cultural probes is that you get to see user motivation and context, or situatedness, which affects all things!

    Rather like when you find a HCI lecture series tedious – this can be because of the time of day, the mood you are in, the lecturer (heaven forbid!) and whether you are motivated to learn about HCI or not. I used to ask my students what they thought of HCI at the start of my courses and some would say, “Rubbish, I hate it all,” but if I introduced certain topics within a context which interested them, by the end of a lecture I could have the same students uttering things like ‘That was really interesting.’ and motivated to turn up next time even if it was Monday morning 9am. Context is key. And so is feedback, so thanks for your comments.


  8. …how about this one: “Should they’ve peas or beans for tea? Catch the bus or the tube?”

    I’ve never seen the contraction “they’ve” used that-a-way before, but I love it! And why not, Professor? Exactly what rule of English grammar does the construction break?
    Speaking of “break”: I’ve seen little evidence of the slightest concern for usability in the Nokia and Motorola mobiles that Cingular has palmed off on me — just in handling phone calls, address books and speed dials. Forget about “surfing the web” on these doorstops.

    If I were to keep a diary, I’m afraid that it’d be full of [expletive-deleted]s about breaking the devices in question over the designers’ heads.

  9. What about the problem of observation with ethnography – if I know I am being observed I will act quite differently to when I am alone? For example, if you were watching what I do with my computer at work I’m sure you’d see me do a very industrious 8 hours of non-stop work. If I am not being observed, I will do a lot more web browsing, personal emailing, phone calls, etc – I see it as a similar issue as the “life intruding” that your users get when they walk out the door.

    Even the act of keeping a diary forces you to think more about what you are doing, which is behaviour unlike a typical user. I think some diet gurus say to keep a food diary as once you’re observing how much you’re eating you change your behaviour.

    I think the problem of trying to please the ethnographer would probably be quite similar to trying to please the usability examiner.

    Just a thought.

  10. One big problem that’s evident from the last-mentioned scenario in the article —mobile phone users wanting a small networked PC in their palms— is that culture and (lack of) expertise impose limits on imagination and accuracy.

    Culture: Most end-users testing a new type of cell phone will have significant experience using other similar devices. People are used to the UIs that ship on current models. Therefore it’s unlikely they’ll comment on design flaws that are present in both the tested phone as well as their previously used devices.

    (lack of) Expertise: Since people are not articulate about or consciously preoccupied with the many minor frustrations they have to cope with using a device like this, test subjects are unlikely to give feedback that the basic functionality of the phone is flawed. It’s common for people to think that if making a call is hard, it’s their fault. If they’re typing something with the keypad and it takes a bit longer than they’d like for the letters to appear on the screen, it’ll disturb them, but not on a conscious level.

    That’s why you’ll always get the kind of feedback that tells you they want a more advanced device, offering more functionality, offering more features. While actually making the basics work better and stripping out the clutter of unused ‘power features’ would make most users happier in the end.

    (Please note that my definition of culture in this context specifically encompasses the way the people have grown accustomed to today’s mediocre interfaces —in its literal definition: of average quality— and the acceptance of difficulty and the minor frustrations that are part of contemporary mediocre user experiences)

    Only people with domain expertise in usability can articulate these problems, but they’re also not the best test subjects precisely because of the mind-luggage that comes with their domain expertise. Again culture rearing its ugly head.

    This type of research should not be used as the foundation of usability studies. Whilst it might bring some flaws to light, the results are culturally biased and therefore mostly useless. We must accept that mediocre usability has become part of our culture and thereby the expectations of our test subjects. The only way to advance beyond the current level of quality is by eliminating the influence of this culture of mediocrity in our decision-making process.

    I propose the following method. Designers must embed themselves in the field of their target market, becoming their own persona. Besides starting to ‘own’ the problem they set out to solve, this allows them to become intimate with every bit of the culture of their local peers and will allow them to better judge the kind of feedback they get back from the people around them. This part of the process is art, not science.

    Usability testing should be confined to gathering measurable data (how long does it take to perform this task? which parts does the user look at when trying to figure this out?), this raw data should finally be aggregated into conclusions and translated into words by people articulate in the domain. This part is science and should be treated as such.

    (this should really be a blog post, not a comment, but since my blog isn’t up yet..)


  11. Thanks for a very interesting article. Can I translate your article into Polish and publish it at my weblog? I will come back here and check your answer. Keep up the good work. Greetings

  12. Dirk, thanks for your long comment. I really feel the same regarding the need for scientific research when doing usability testing. So let us know once your blog is up.

  13. Was there anything specific to web design in this article? Not that anything was wrong, all the points are important. They are important for web designers and everyone else. But I’m used to more in-depth articles on ALA than this one. Greetings

  14. Thanks for a very interesting article. I really enjoyed reading all of your articles. It’s interesting to read ideas and observations from someone else’s point of view… makes you think more. Keep up the good work. Greetings

  15. Sorry I’m late:

    bq. “Would you use this site/service/product/software?” you say.

    Then, the entire test’s already flawed. Don’t you know “Don’t listen to users?” I’m sorry (probably not), but one might test with really few users, but not with the wrong methodology. And I may say that I suspected exactly that when reading this article …
