Measuring User Experience versus Expectations

Measuring user experience versus expectations has proven valuable for me, and it can for you too. In this post, I would like to share a technique I use in usability tests to get more value out of participant feedback. It works in both lab and remote usability testing setups. Regular readers will know that I have a lot of faith in remote user testing and the flexibility it offers anyone to quickly, and cost effectively, learn from real users of a website. I have been performing remote user testing for several years at an enterprise level, and its insights have never failed me. Of course, any form of qualitative research requires intense resources to analyse.

Personally, one of the biggest challenges I faced when performing research was never the resources, but categorising and quantifying the findings. Sure, I will always end up spending a lot of time analysing the videos, but in the end I need to know how often certain issues occurred. Since categorising is very subjective, I always have to think twice about how to prioritise the findings and decide which ones need my attention most. Now, I’ve found out how.

Quantifying the Qualitative

Last year I bought and read Thomas Tullis and William Albert’s book ‘Measuring the User Experience’ (Morgan Kaufmann). Hidden away on two pages was the method that I have found the most valuable in my daily remote user testing activities.

The method taught me to add two simple questions to a task, and then to plot the answers on a grid. Tullis and Albert state that the tester should ask an ‘expectation’ question before, and an ‘experience’ question after, each task. The participant answers each question on a scale of 1 to 7, where 1 is difficult and 7 is easy.

By asking the question beforehand, testers get a sense of how confident participants feel about completing a task, and can then compare that with the answer to the follow-up question, which reveals how the participant actually experienced completing it. It is a clear and simple method for discovering whether expectations of features, functionality, and design are actually being met, and if not, how many people are affected.

Logging the Answers

When applying this technique, you only need to do a few things to make sure you can create the graph in the end.

  1. Add the ‘expectation’ question before each task you want to track on your graph.
  2. For the tasks in step 1, add the ‘experience’ question after the task.
  3. When analysing the test results, log the answers in a spreadsheet, per user, per tracked task.
  4. Plot the answers on a graph and voilà.
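As a rough sketch of step 3, the log can be kept as one row per participant per tracked task. The participant answers below are made up for illustration, using this post’s scale (1 = very easy, 7 = very hard):

```python
import csv
from io import StringIO

# Hypothetical answers: one row per participant, per tracked task.
rows = [
    # (user, task, expectation, experience)
    ("P1", "Find calculation tool", 2, 5),
    ("P2", "Find calculation tool", 3, 4),
    ("P3", "Find calculation tool", 2, 5),
]

# Write the log the same way you would fill a spreadsheet:
# a header row, then one answer pair per line.
buffer = StringIO()
writer = csv.writer(buffer)
writer.writerow(["user", "task", "expectation", "experience"])
writer.writerows(rows)

log = buffer.getvalue()
print(log)
```

Swap `StringIO` for a real file handle to produce a CSV you can open in any spreadsheet tool and chart from there.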

Here is an example of a task with the questions before and after:

  1. On a scale from 1 to 7, where 1 is very easy, and 7 is very hard, how easy or difficult do you expect it to be to locate the calculation tool on the website?
  2. Find the calculation tool on the website.
  3. On a scale from 1 to 7, where 1 is very easy, and 7 is very hard, how easy or difficult did you find it to be to locate the calculation tool on the website?

You can replace the task with any task you want.
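One way to keep the wording identical across tasks is to generate both questions from a single template. The helper names below are my own, not from the book:

```python
# Scale wording as used in the example questions above.
SCALE = "On a scale from 1 to 7, where 1 is very easy, and 7 is very hard"

def expectation_question(task_description: str) -> str:
    # Asked before the participant attempts the task.
    return (f"{SCALE}, how easy or difficult do you expect it to be "
            f"to {task_description}?")

def experience_question(task_description: str) -> str:
    # Asked after the participant has attempted the task.
    return (f"{SCALE}, how easy or difficult did you find it to be "
            f"to {task_description}?")

print(expectation_question("locate the calculation tool on the website"))
print(experience_question("locate the calculation tool on the website"))
```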

The next thing you need to do is log the answers in a spreadsheet while analysing the remote user testing videos (or lab videos). Here is an example:

User Expectation versus Experience Log

Visualise the Answers

Once we have logged all the answers, we can then start plotting the results in a graph.

User Expectations vs Experience Graph

As you can see, I have plotted the answers (not from the log shown above) onto the graph and split it into four quadrants. Tullis and Albert use the same approach in their book to allow a quick visual distinction between acceptable results and results that require more attention.

In the example above you can see the three tasks I applied the method to in my test. Participants had to find and use a tool, and then use it to continue their journey on the site to a product detail page. The graph tells me that the findability of the tool is sub-par. It needs looking at, because participants, when asked, expected finding the tool to be quite easy (2.5 on the 7-point scale), but scored it 4.8 after having actually experienced finding it.
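To make that comparison concrete, here is a minimal sketch of the averaging behind those numbers. The individual scores are invented to reproduce the 2.5 / 4.8 averages, on this post’s scale (1 = very easy, 7 = very hard):

```python
# Hypothetical per-participant answers for the "find the tool" task.
expectation_scores = [2, 3, 2, 3, 2.5]
experience_scores = [5, 4.5, 5, 4.5, 5]

def mean(scores):
    return sum(scores) / len(scores)

expected = mean(expectation_scores)      # 2.5 -> expected to be easy
experienced = mean(experience_scores)    # 4.8 -> actually felt hard

# A positive gap means the task felt harder than participants expected,
# i.e. it lands in the "needs attention" quadrants of the graph.
gap = experienced - expected
print(f"expected {expected:.1f}, experienced {experienced:.1f}, gap {gap:+.1f}")
```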


All in all, this is a simple research method to add on top of a remote user test. I apply it to 90% of the tests I perform, and it helps me report to stakeholders which tasks need the most attention. Recording how many participants struggle with a task is one step on the journey to optimisation, but making sure we meet user expectations is another, possibly even more powerful, one. Use this technique to discover what your participants expect, and how they actually experience your site. Feel free to share your experiences in the comments.

Thanks to Thomas Tullis and William Albert for writing about the technique in their book ‘Measuring the User Experience’ (Morgan Kaufmann).

Order ‘Measuring the User Experience’ today

NOTE: Since the writing of this post, the 2nd edition of the book has been released. You can read more about Measuring the User Experience 2nd edition on Amazon.


About Matthew Niederberger

Matthew works as Conversion Optimisation Manager at Ziggo BV. In his free time he enjoys family life as well as digging into online user research material, whilst frequently generating some of his own, which he freely shares here on this blog.


  1. Imran Forbes says:

    This is a new method to me, thanks for sharing your reading and lessons learnt.

  2. Tony says:

    I noticed you flipped the meaning of the scale from the book example (1 = hard, 7 = easy) to your example (7= hard, 1 = easy). Any particular reason, or is it just a typo?

    • Well spotted Tony! This was a typo that I discovered after I had performed the first test. I decided to leave it in my charts and test templates since it made no difference to the results/insights, just a silly moment of mine… lapse of concentration I guess 🙂 #onlyhuman

  3. Great, insightful technique. We’ve been using expectation analysis for years. It’s been our go-to tool for highlighting priorities and uncovering “stories” within user testing results. My favorite are tasks/features that stakeholders thought would perform much better, but were at or slightly below expectations. Those tend to have good objective results that disguise the otherwise poor outcomes vs. expectations. That ‘blinking light’ metric can lead to a lot of interesting findings once you dig deeper. Granted, we use Magnitude Estimation instead of Likert scales (briefly described by Tullis&Albert), but the expectation concept itself rocks.

  4. Holger Indervoort says:

    Very interesting article. Thanks a lot. One question: aren’t there many users who set their expectations very high, so that tasks will most of the time land in the left quadrants and almost never on the right?

  5. Hi Holger, I think that is partially true. I think that most humans are optimistic by nature, but the presumption is that participants will fill it in based on 2 possible criteria:
    1. they have viewed the website (not interacted with it) and based on the initial impression will score the question accordingly
    2. they will score the question based on previous experiences elsewhere on the internet that show some level of comparison to the task at hand

    Either way, it is not foolproof. The proverbial proof of the pudding lies in the comparison between the expectation and the eventual experience. If there is a strong offset between the two, you have something worth looking at…

    Like the character V said in the movie ‘V for Vendetta’… There is no certainty, only opportunity.

  6. lingzhi wang says:

    I have used a similar methodology to evaluate the gap between expectation and design, but I only explore how participants expect the UI flow and features to behave, including the reasons, of course. That way I can learn which feature or UI flow does not match participants’ expectations, and whether it falls short of or exceeds them.

    thank you for sharing this method.

    I think this method is well suited to prioritising the urgency of design change requests.

  7. Kristin says:

    I am a big fan of the book you mention here and I also recently made a trial run with this technique. It didn’t work very well for me (or maybe it did?) – probably due to the fact that I didn’t have enough participants. It is one of those techniques that just makes sense and which gives usable information easily. Thanks for some interesting articles.. keep them coming :o)

    • Number of participants can be a real knuckle dragger at times. Have you had a look at ZURB’s Verify app? You can get 100 participants for only $100. When they complete the regular ZURB test, you can forward them on to a survey and let them answer your questions there. Good luck!

  8. Sanket Jadhav says:

    I agree that this is a good way to visualise and quantify findings; however, we (researchers) must always be cautious about ‘false negatives/positives’. From my experience of having run quite a few usability studies, I have noticed that a lot of the time users do not realise they haven’t succeeded at a task and end up responding very positively post-task. If the moderator / note-taker isn’t careful to note this, the results of this visualisation may be misleading…
