Practical Use Cases of User Experience Research – Part 3
Why Conduct UXR?
Use cases typically describe how users interact with a system to determine how to optimize that system from the user's point of view. User experience research is predicated on that idea: empathizing with users will vastly improve products and save resources in the long run.
There are a multitude of ways to apply user experience research to the product development process, with applications of multiple research methodologies possible at every phase of the design cycle. During early product development, user input steers a product’s concept, branding, and structure. The following use cases are valuable assessment tools that are often used during the early phases of the design thinking process as well as to inform later iterations of the product.
When developers need to choose between two versions of a product, feature, or design element, it is prudent to determine the most effective design via A/B testing. Two similar user groups are presented with distinct variations of the variable being tested; the version that performs best against predefined metrics is the winner!
For example, an online clothing store has rebranded and the design team is considering changing the label on the checkout button from "Pay Now" to "Treat Myself." Two versions of the platform are released to small user samples: Version A ("Pay Now") and Version B ("Treat Myself"). In the comparison, 18% more users completed purchases with Version B than with Version A, validating the design update.
This form of testing requires a product team to produce test instances or high-fidelity prototypes for users to try. The ideal participant sample for this use case depends on the variable being assessed and the study design, with samples ranging anywhere from 5 to 1,000+ users.
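Before declaring a winner, it is worth checking that the difference between variants is statistically meaningful rather than noise. Here is a minimal sketch of that check using a two-proportion z-test; the conversion counts and sample sizes are hypothetical, not figures from the study above.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical numbers: 500 users saw each checkout-button variant
p_a, p_b, z = two_proportion_ztest(110, 500, 150, 500)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}")
# |z| > 1.96 indicates significance at the 95% confidence level
```

With larger samples the same calculation detects smaller differences, which is one reason A/B tests scale up to thousands of users.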
Another method of determining user preferences is directly asking your users what they like! It may seem like a no-brainer, but teams often make assumptions about user desires instead of gathering concrete evidence. Preference Testing quantifies user sentiment by presenting a single user group with variations of the same design and asking those users to choose their favorite.
Continuing our online clothing store example, the design team has created two different sets of icons (A and B) for the rebrand – users are shown images of the site with Icon Group A vs. Icon Group B and asked to choose which set of icons they prefer. If there are more than two options, users are asked to rank the options based on their preferences.
This form of testing requires a product team to have actionable design concepts to present to users. The ideal participant sample for this use case is at least 10 users.
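When more than two options are ranked, the rankings need to be aggregated into a single result. One common approach is a Borda count, sketched below with hypothetical ballots over three icon sets; this is an illustration, not the only valid aggregation method.

```python
from collections import Counter

# Hypothetical rankings: each list is one user's order of
# preference, favorite first, over three icon sets.
rankings = [
    ["B", "A", "C"],
    ["B", "C", "A"],
    ["A", "B", "C"],
    ["B", "A", "C"],
]

# Borda count: an option earns (n_options - position - 1)
# points per ballot, so a first-place vote is worth the most.
scores = Counter()
for ballot in rankings:
    n = len(ballot)
    for position, option in enumerate(ballot):
        scores[option] += n - position - 1

winner, _ = scores.most_common(1)[0]
print(dict(scores), "winner:", winner)
```

A simple tally of first-place votes can hide a broadly liked second choice; the Borda count rewards options that rank consistently well across all users.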
The way information is categorized and presented to users can make or break a design; asking users to structure the information early in the design process mitigates costly re-engineering of the platform later. When developers need to determine or refine the way information is structured and delivered within the system, Card Sorting sessions provide insight into how users naturally structure information when perceiving the product.
These insights allow your team to alleviate disparities between the existing information structure and users’ organic mental structuring of the information. For example, users browsing our online clothing store report that they did not find the expected items in a particular category, such as “Skirts,” so a small group (3-10 users) is asked to complete a Card Sorting session wherein they organize inventory items into general categories.
This testing reveals that the majority of users place “Skorts” (skirts with built-in shorts) and “Play Sets” (skirt and matching top) in the “Skirt” category, alerting designers and engineers to the fact that those items are currently grouped with “Athleisure” and “Dresses” respectively and inspiring the team to restructure how inventory is categorized.
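Card sort results are often analyzed by counting how frequently participants place each pair of items in the same pile; pairs with high agreement belong together in the information architecture. A minimal sketch, using hypothetical session data loosely based on the example above:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sessions: each dict maps a participant's category
# names to the inventory items they placed in that category.
sorts = [
    {"Skirts": ["skort", "play set", "mini skirt"], "Dresses": ["sundress"]},
    {"Skirts": ["skort", "mini skirt"], "Sets": ["play set"], "Dresses": ["sundress"]},
    {"Skirts": ["skort", "play set", "mini skirt", "sundress"]},
]

# Count how often each pair of items lands in the same pile.
pair_counts = Counter()
for sort in sorts:
    for pile in sort.values():
        for a, b in combinations(sorted(pile), 2):
            pair_counts[(a, b)] += 1

# Report agreement per pair, most-agreed pairs first.
for pair, count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(pair, f"{count}/{len(sorts)} sessions")
```

Here "skort" and "mini skirt" co-occur in every session, signaling they should share a category; this pairwise matrix is also the usual input to cluster analysis or a dendrogram for larger card sorts.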
You can’t alleviate user pain points without knowing what those pain points are, and only the users can reveal them. Why make faulty assumptions about how users will interact with the platform when a user researcher can observe users and report nuanced findings?
From the moment a system is viable enough for beta testing, users should be exposed to it. Usability Testing provides valuable insights for even the most refined products: guided by a Moderator, 5-15 users are observed and recorded as they operate the system, then report their feedback via a System Usability Scale survey (a simple questionnaire that gauges the general usability of the product) and/or an In-Depth Interview about their experience.
Users interact with the system organically and may be guided by study prompts delivered by the Moderator. Afterward, users complete a 10-item survey about their experience. The Moderator takes live notes of the users’ interactions as observed, then transcribes the recorded session and annotates the users’ comments and interactions with the system to isolate trends in user feedback. These trends are compared and contrasted with the survey responses to produce research insights and recommendations that provide a nuanced understanding of the user experience.
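The 10-item survey mentioned above, the System Usability Scale, has a standard scoring rule: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch with a hypothetical response set:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for 10 items rated 1-5.

    Odd-numbered items are positively worded (higher is better),
    even-numbered items negatively worded (lower is better), so
    their contributions are flipped before scaling to 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical responses from one participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Scores above roughly 68 are conventionally read as above-average usability, which gives teams a quick benchmark to track across iterations.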
For example, moderated usability testing of the online clothing store app revealed that users who required a tutorial to operate the platform were confused by the information architecture of the application and struggled to navigate the user interface due to confusing iconography. These insights led to meaningful design changes and inspired further studies, like those outlined above, to investigate user behavior and desires more deeply.
Unmoderated testing is also a popular option: users record their testing sessions themselves, to be annotated by the Moderator at a later date. This forgoes live notes but saves time, as participants and researchers don't have to coordinate schedules to complete testing. It is important that users recruited for unmoderated testing are already familiar with usability testing as a practice.
Ready to Listen to Users?
Conducting user experience research is a way for companies to ensure their efforts and decisions resonate with their users. In REEA Global's experience, companies that take the time to understand their users tend to grow faster and further than those that dive into business initiatives based on what they think is important. Don't assume what's important; listen to your users' voices instead.