Unmoderated testing has many benefits, but it is not well suited to this kind of measurement. We're trying to create a management metric here: something you can communicate to management with confidence.
We find unmoderated testing can introduce quite a bit of noise into the data: people who aren't really the right target audience, even though they may claim to be; people who aren't committed to the testing and just rush through it; and 'professional' test participants, those who take a lot of tests. You have to be very careful.
In unmoderated remote testing, it is very difficult to verify whether someone has actually completed a task successfully, and that is a major disadvantage from a design and continuous-improvement point of view. In a 2015 study, Measuring Usability found that while 93% of participants said they had completed a set of tasks successfully, only 33% of those tasks were verified as actual successes.
Also, you still need to analyze the videos and results, because the most important thing you do is figure out what's not working and how to fix it. You will need an expert to watch the participants carefully to see where they're stumbling and where they're having trouble.