
9. 11. 2020

6 min read

User testing best practices 2 / 2

User testing best practices for product managers and UX researchers. Five users are enough for most tech-product user testing. Involve all your important stakeholders in testing sessions as observers.

Jan Koscelansky


In the first chapter of this user testing blog post, you can find my reasoning on why live 30-minute video calls are the best method for easy and effective user research, as well as the best internal and external sources for recruiting your testers. But now let’s move on to a more controversial topic: what is the required number of testers - 10? 100? 1,000?

How many users do I need to involve?

The first answer that comes to mind is “as many as possible!”... Well, it is not so straightforward. First of all, it would be very costly, and moreover, increasing the number of testers will not bring the additional benefits one would expect.

There are many studies that suggest around 5 testers as the optimal number for tech-product user research. That’s right! Only 5 users are enough to uncover 85% of issues or user insights. Look at the famous graph below, made by UX research guru Jakob Nielsen.

[Graph: Jakob Nielsen’s curve of the share of usability problems found vs. the number of test users]

Let’s assume that there are 10 major value-adding problems hidden in our product (or 10 important insights in the customer journey that we have to understand). Nielsen’s research measured how many of the existing problems or insights get uncovered as the number of testers increases.

Do we want to uncover all 100% of the problems or insights? OK, but then we need to test with at least 15 users. Recruiting 15 users can be pretty demanding, and the subsequent execution, evaluation, and processing of those tests will take at least a few weeks.

But what will happen if we involve “only” 5 users? Well, we will collect 85% of the existing insights, which is still a pretty high number, right? It is not 100%, but to uncover the maximum of insights we would need to run 3x more tests and thus spend 200% more resources. And the additional value would be only about 17% more uncovered insights. A pretty clear inefficiency, huh?

[Image: formula of testing inefficiency]
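To put the same cost-benefit argument into numbers, here is a minimal Python sketch. It assumes the formula commonly cited alongside Nielsen’s graph - the share of problems found by n testers is 1 - (1 - L)^n, where L (roughly 31%) is the chance that a single tester surfaces any given problem - so its output is an approximation and may differ slightly from the exact figures above.

```python
# A minimal sketch of the cost-benefit math above, assuming the formula
# commonly cited alongside Nielsen's graph: coverage(n) = 1 - (1 - L)**n,
# where L (~0.31) is the chance that one tester uncovers a given problem.

L = 0.31  # assumed per-tester discovery rate

def coverage(n: int, l: float = L) -> float:
    """Expected share of existing problems uncovered by n testers."""
    return 1 - (1 - l) ** n

small_round = coverage(5)    # ~0.84
big_round = coverage(15)     # ~0.996

extra_effort = (15 - 5) / 5                               # 2.0 -> 200% more sessions
extra_insight = (big_round - small_round) / small_round   # ~0.18 -> ~18% more insights

print(f"5 testers:  {small_round:.0%} of problems found")
print(f"15 testers: {big_round:.0%} of problems found")
print(f"Extra effort: {extra_effort:.0%}, extra insights: {extra_insight:.0%}")
```

Running this prints roughly 84% coverage for 5 testers versus nearly 100% for 15 - about 200% more sessions for less than 20% more insights, which is the same diminishing-returns point made above.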

Of course, there are cases where you need to be a perfectionist and get all 100% of problems - for example in life-dependent medical products or highly critical business services. But in standard situations and typical commercial products, we should be totally fine with an 85% discovery rate.

Actually, the best approach, from my experience, is to be iterative and do a few smaller test rounds rather than one large, overwhelming testing exercise. It is already a pretty tough PM job to process those 85% of insights - analyze them, specify them, prioritize them, and implement them. So having even more issues to work on will only grow the product backlog and delay testing of those new improvements.


If we are able to tame our perfectionism and make do with the insights from 4-5 testers, we can focus on fewer things, work in shorter cycles, and repeat the testing more often with new versions of our continuously improving product.


But is 5 really enough?

There is one more concern related to the number of testers. Phrased as a quote, it would sound something like this: “OK, I am fine with uncovering only 85% of all insights, but how can I be sure that these insights are the most important ones, and why should I act upon only 5 user voices?”

Well, first of all, we are doing qualitative research here, not quantitative. So we are looking for insights and themes, not hard numbers or statistical significance. Our goal is to get in touch with real users in order to get a better understanding of where to focus our improvement efforts.

If we see that 3 out of our 5 testers struggled with some product area, that is enough to claim that we should focus on that part of the product. It would be useless to hypothesize at this point whether another 5 users might have completely different problems. Well, they wouldn’t! It would be a huge statistical exception if we found different or more critical problems after testing with another group of users (from the same user segment).
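To make that “statistical exception” claim a bit more concrete, here is a rough back-of-the-envelope check that reuses the same assumed per-tester discovery rate (L ≈ 31%) from Nielsen’s formula: the chance that a reasonably common problem slips past an entire group of five testers is already quite small, so a second group would mostly re-discover the same issues.

```python
# Rough back-of-the-envelope check, reusing the assumed per-tester
# discovery rate L ~= 0.31 from Nielsen's formula. The probability that
# a given problem is missed by ALL n testers is (1 - L) ** n.

L = 0.31

for n in (3, 5, 10):
    missed = (1 - L) ** n
    print(f"Chance a typical problem slips past {n} testers: {missed:.0%}")
# Prints roughly: 33% for 3 testers, 16% for 5, 2% for 10
```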

However, the best way to address these kinds of concerns is to involve all the important decision-makers in the product testing sessions as observers. Actually, failing to persuade all the relevant stakeholders to join the user testing sessions was perhaps one of my most crucial failures.

You can do your best to explain to your business stakeholders that users are struggling with some of your features and that you should rethink them completely. But if they join the testing sessions and see that repeated user frustration firsthand, it gives them a completely different perspective.


If your stakeholders see, after 5 testers, that the majority of the discovered issues keep repeating, they will fully agree that there is no reason to continue with another user and that you should instead quickly get in front of a whiteboard and start working on the identified issues.


So if it is at all possible, try to get all your important decision-makers into the room with the testers, so that they can experience it together with you. But explain precisely what it means to be an observer. They really cannot interact with the testers or jump into the testing process. There should always be only 1 person leading the live session and communicating with the testers, to avoid any confusion. We should make it as easy as possible for testers to behave as they would in a “normal” situation.

It is usually super hard for CEOs or other non-product people to just sit there and listen to users struggling with their product. There is always a strong urge to jump in and explain to the testers how to actually use their beloved product in the desired manner. But this is not a sales session, nor are we here to do a product demo. We are here to sit quietly, observe user behavior, and feel their pain. The most we can do is ask a few questions to get a better understanding of the users’ intentions and needs.


Do I need to test with users if I know my product so well?

The beginning of Nielsen’s graph is also very interesting: it shows that zero testers will bring you zero insights. Duh!

I have heard so many times from various business people that they don’t need to spend time on user testing because they know their customers and products very well, and there are so many things to build or fix anyway, even without user testing.

The problem is that these business people and industry experts are not typical users of their products. In this case, their knowledge is actually a disadvantage: because they are too deep in the topic, they interact with the product in a very different way than a typical user would. It is great to use their knowledge when analyzing user testing outcomes, but skipping user testing altogether and relying only on internal ideas is a very risky approach.

Even 1 tester can show you 25% of the problems and give you enough food for thought. However, involving a 2nd and 3rd tester will allow you to start seeing the patterns and main themes of the users’ struggles. And this is the main goal of any user testing. We are not trying to get precise orders from users on what to do with our product. We only need to collect insights about the areas to focus on and then dig deeper into those areas internally.


However, all of this is just theory. If I were to give you only one piece of advice, I would recommend being flexible and adjusting the number of testers in “real time” based on the collected outcomes. This means stopping the testing the moment you feel you have gained enough insights and, on the other hand, increasing the tester group if you still haven’t gotten the critical “aha moment” you expected.

This may sound a bit vague, but I can assure you that you will naturally feel those aha moments without any struggle.
