Measuring user experience against expectations has proven valuable for me, and it can for you too. In this post, I would like to share a technique I use in usability tests to get more value out of participants’ feedback. It works in both lab and remote usability testing setups. Regular readers will know that I have a lot of faith in remote user testing and the flexibility it offers anyone to quickly and cost-effectively learn from real users of their website. I have been performing remote user testing for several years at an enterprise level, and its insights have never failed me. Of course, any form of qualitative research requires significant resources to analyse.
Personally, the biggest challenge I faced when performing research was never the resources, but categorising and quantifying the findings. Sure, I always end up spending a lot of time analysing the videos, but in the end I need to know how often certain issues occurred. And since categorising is so subjective, I always have to think twice about how to prioritise the findings and decide which ones need my attention most. Now I’ve found a way.
Quantifying the Qualitative
Last year I bought and read Thomas Tullis and William Albert’s book ‘Measuring the User Experience’ (Morgan Kaufmann). Hidden away on two pages was the method I have found the most valuable in my daily remote user testing activities.
The method is to add two simple questions to a task and then plot the answers on a grid. Tullis and Albert state that the tester should ask an ‘expectation’ question before, and an ‘experience’ question after, each task. Participants answer both questions on a scale of 1 to 7, where 1 is very easy and 7 is very difficult.
By asking the question beforehand, testers get a sense of how confident participants feel about completing a task, and can compare that with the answer to the follow-up question, which reveals how the participant actually experienced completing the task. It is a clear and simple method for discovering whether expectations of features, functionality, and design are actually being met, and if not, how many people are affected.
Logging the Answers
When applying this technique, you only need to do a few things to make sure you can create the graph in the end:
- Add the ‘expectation’ question before each task you want to track on your graph.
- For each task from step 1, add the ‘experience’ question after the task.
- When analysing the test results, log the answers in a spreadsheet, per user, per tracked task (a sketch of one way to aggregate them follows this list).
- Plot the answers on a graph, and voilà.
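As a minimal sketch of steps 3 and 4, assuming you export your spreadsheet to a hypothetical answers.csv with the columns participant, task, expectation, and experience, you could aggregate the per-task means in Python:

```python
# Minimal sketch: aggregate logged answers into per-task means.
# Assumes a hypothetical answers.csv with the columns:
# participant,task,expectation,experience (ratings on the 1-7 scale).
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"expectation": 0, "experience": 0, "n": 0})

with open("answers.csv", newline="") as f:
    for row in csv.DictReader(f):
        task = totals[row["task"]]
        task["expectation"] += int(row["expectation"])
        task["experience"] += int(row["experience"])
        task["n"] += 1

for name, t in totals.items():
    print(f"{name}: expectation {t['expectation'] / t['n']:.1f}, "
          f"experience {t['experience'] / t['n']:.1f}")
```

The resulting per-task means are exactly the values you will plot on the graph.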
Here is an example of a task with the questions before and after:
- On a scale from 1 to 7, where 1 is very easy, and 7 is very hard, how easy or difficult do you expect it to be to locate the calculation tool on the website?
- Find the calculation tool on the website.
- On a scale from 1 to 7, where 1 is very easy, and 7 is very hard, how easy or difficult did you find it to locate the calculation tool on the website?
You can replace the task with any task you want.
The next thing you need to do is log the answers in a spreadsheet while analysing the remote user testing videos (or lab videos). For a single tracked task, such as finding the calculation tool, an illustrative log might look like this:
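Participant | Expectation (before) | Experience (after)
P1          | 2                    | 5
P2          | 3                    | 4
P3          | 2                    | 5

(The numbers above are made up for illustration; in a real log you would have one row per participant and a pair of columns per tracked task.)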
Visualise the Answers
Once we have logged all the answers, we can start plotting the results on a graph.
As you can see, I have plotted the answers (not the ones from the example log above) on the graph and split it into four quadrants. Tullis and Albert use this same method in their book to allow for a quick visual distinction between acceptable results and results that require more attention.
In the example above you can see the three tasks I applied the method to in my test. Participants had to find and use a tool, and then use that tool to continue their journey on the site to a product detail page. The graph tells me that the findability of the tool is sub-par. It needs looking at, because participants, when asked, expected finding the tool to be quite easy (a mean of 2.5 on the 7-point scale), but scored it 4.8 after actually experiencing it.
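If you would rather build the quadrant graph in code than in a spreadsheet, here is a minimal matplotlib sketch. The ‘find the tool’ point uses the means mentioned above; the other two tasks’ scores are made-up placeholders:

```python
# Minimal sketch: quadrant chart of mean expectation vs. mean experience
# per task (scale: 1 = very easy, 7 = very hard).
import matplotlib.pyplot as plt

tasks = {
    "Find the tool": (2.5, 4.8),             # means from the test above
    "Use the tool": (3.0, 3.4),              # illustrative placeholder
    "Continue to product page": (3.5, 3.1),  # illustrative placeholder
}

fig, ax = plt.subplots()
midpoint = 4  # centre of the 1-7 scale, splitting the chart into quadrants
ax.axvline(midpoint, color="grey", linewidth=1)
ax.axhline(midpoint, color="grey", linewidth=1)

for name, (expectation, experience) in tasks.items():
    ax.scatter(expectation, experience)
    ax.annotate(name, (expectation, experience),
                xytext=(5, 5), textcoords="offset points")

ax.set_xlim(1, 7)
ax.set_ylim(1, 7)
ax.set_xlabel("Expectation (1 = very easy, 7 = very hard)")
ax.set_ylabel("Experience (1 = very easy, 7 = very hard)")
ax.set_title("Expectation vs. experience per task")
plt.show()
```

With this scale, tasks that land in the upper-left quadrant (expected easy, experienced hard) are the ones that deserve attention first.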
All in all, this is a simple research method to add on top of a remote user test. I apply it to 90% of the tests I perform, and it helps me report to stakeholders which tasks need the most attention. Recording how many participants struggle with a task is one step on the journey to optimisation, but making sure we meet user expectations is another, and possibly an even more powerful one. Use this technique to discover what your participants expect, and how they actually experience it. Feel free to share your experiences in the comments.
Thanks to Thomas Tullis and William Albert for writing about the technique in their book ‘Measuring the User Experience’ (Morgan Kaufmann).
NOTE: Since this post was written, the 2nd edition of the book has been released. You can read more about Measuring the User Experience, 2nd edition, on Amazon.