The article starts with ‘Theoretically, usability testing…’, so for me it is about usability rather than market research. But as humans we interpret things differently depending on our experience so far - as user or designer, author or reader - so something that is clear to me might not be clear to someone else (leaving aside the fact that we ‘read’ differently on screen). And this carries over into our work.
Analyzing user behaviour without talking to users (whether by shadowing or videoing them, or electronically via click capture, web metrics, eye tracking, etc.) can be a good way of working, but you need to be careful not to introduce your own interpretative bias. There has been much research into ethnomethodologically informed ethnography, which tries to avoid introducing bias when interpreting the results of a study, normally by following a prescribed method. We are all biased, so why not ask your users about their biases?
The best thing about usability testing is that clients can hear a user saying something that is obvious to us but not to them, and which they might not fully grasp if you simply told them ‘how it is’. The best bit about cultural probes is that you get to see user motivation and context, or situatedness, which affects everything!
Rather like when you find an HCI lecture series tedious - this can be down to the time of day, the mood you are in, the lecturer (heaven forbid!), and whether or not you are motivated to learn about HCI. I used to ask my students at the start of my courses what they thought of HCI, and some would say, ‘Rubbish, I hate it all,’ but if I introduced certain topics within a context that interested them, by the end of a lecture I could have those same students saying things like ‘That was really interesting’ and motivated to turn up next time, even if it was 9am on a Monday morning. Context is key. And so is feedback, so thanks for your comments.