I want to compare an existing function in an existing solution (search filters not being seen and/or used on our corporate intranet) with one or two prototypes.
What’s the best way to do this? How do I compare, measure, and verify whether something is better, clearer, or easier to find without telling the test participant what I’ll be testing? The problem, or rather problems, are that the users know they are being tested, and that I’m comparing a fully working existing solution with limited prototypes. So, how would you go about this?
- They know that they are being tested.
I think this is the wrong way to frame it: the users aren't being tested, the designs are. This is an important shift to get users speaking openly rather than trying to avoid or cover up mistakes because they feel they are being assessed.
- I'm comparing a fully working existing solution with limited prototypes (thus the scope is limited), and hence they know they are being tested.
Whilst it's more work, how about building a prototype that represents the existing system, at the same fidelity and with the same limited scope as the other prototypes? That way you're comparing like for like. I think comparing across fidelities is a bad idea, as different fidelities evoke different responses and feedback from users.
Answered by dougajmcdonald on November 21, 2021
I don't think it's a problem that users know they are being tested; that's often the case in usability testing. You need to find a task that is as close as possible to a real use case in which users would benefit from using the filters (without mentioning filters).
Your prototypes need to allow the users to move down 2 or 3 different paths, not only the one using filters.
To avoid the obvious discrepancy in quality with the current website, you could reproduce, in a low-fidelity prototype, the way filters currently work on the live site. It might also limit the impact of familiarity bias.
For a good setup, you should use 3 groups of users: group 1 tests prototype A, then B, then C; group 2 tests B, then A, then C; group 3 tests C, then B, then A. This way, they can compare the different solutions and give you more detailed feedback, and varying the order of the prototypes lets you take into account possible priming effects from one prototype to another.
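The counterbalanced setup described above can be sketched as a small helper. This is only an illustration: the function name, the round-robin assignment, and the participant labels are my own, and the three presentation orders are taken from the answer.

```python
# Rotate participants through the three presentation orders suggested
# above, so each order is used by roughly the same number of people.
# This counterbalances order/priming effects across the groups.
ORDERS = [("A", "B", "C"), ("B", "A", "C"), ("C", "B", "A")]

def assign_groups(participants):
    """Assign each participant to one prototype ordering, round-robin."""
    groups = {order: [] for order in ORDERS}
    for i, person in enumerate(participants):
        groups[ORDERS[i % len(ORDERS)]].append(person)
    return groups

plan = assign_groups(["p1", "p2", "p3", "p4", "p5", "p6"])
# p1 and p4 see A then B then C; p2 and p5 see B, A, C; p3 and p6 see C, B, A.
```

With six participants, each ordering is seen by two people, which keeps the comparison between orders balanced.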
Answered by celinelenoble on November 21, 2021
As I understand it, your main worry is that you will collect unrealistic data from running these tests because the employees already know about them, and hence that you won't be able to go ahead later with the right prototype to develop.
How do you know that this function hasn't been used in the past by any of the users? Do you have a report with metrics that proves this? If so, I think you can go ahead with no problem.
Your question: "How do I compare, measure, and verify if something is better or easier to find without telling the test person what I'll be testing?"
I would run the test with small teams. You could speak with the team lead to get a list of up to 5 people to interview (10-15 minutes) with specific open questions about the intranet system, and then move to their desk just to observe how they use the system (10-15 minutes).
During the interview or shadowing, do not mention which specific features need to be improved; instead, say that the study covers the whole system.
During the shadowing, do not help them complete any tasks; just observe how they use the system.
If everybody knows about the test, then I would mostly involve the employees who interact most with the filtering function in their daily tasks, as their feedback will be more valuable than others'. Your aim is to improve that function, so having them as test users is helpful.
Comparing past data with the data from these sessions could then be enough to understand which solution is best.
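The past-versus-current comparison the answer suggests can be as simple as comparing filter-usage rates before and after. A minimal sketch, assuming you can count sessions in which the filter was used; the function name and all numbers are purely illustrative:

```python
def usage_rate(sessions_with_filter, total_sessions):
    """Share of sessions in which the search filter was used."""
    return sessions_with_filter / total_sessions if total_sessions else 0.0

# Made-up figures: live-system analytics vs. prototype test sessions.
before = usage_rate(12, 400)  # e.g. 12 of 400 intranet sessions used filters
after = usage_rate(9, 60)     # e.g. 9 of 60 prototype sessions used filters
improved = after > before
```

In practice you would want enough sessions per condition for the difference to be meaningful, not just a raw rate comparison.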
Answered by Mari on November 21, 2021