Website Usability Testing Methods: 48-Point Guide To Getting More Value

Johnathan Dane

Website usability testing methods are an important consideration for any business with a presence or product online. You’ve got something to sell and people to sell it to -- but in this wide, wide world of digital communication, sometimes it’s hard to tell if people like you or your product. This is where website usability testing comes into play, providing insight into how your customers/users/visitors are interacting with you and your business.
So, what is website usability testing? It’s a technique used to determine how easy it is for your users to use your product (e.g. an app or a website). There are different stages in your product development when you should be evaluating users. There are also different methods you can use to gather information about your product’s usability. These methods involve different means of gathering data, different environments to evaluate users, and different perspectives on how to interpret your data.

Just pressing buttons... - image source

Why Are Usability Testing Methods Important?

We all can come up with hunches and opinions about how our users will interact with our products--but without testing these hunches and opinions, we won’t know what’s actually happening. Knowing how your users interact with your product is incredibly valuable.
With this data, you can identify speed bumps, hurdles, snags, or missing features they’re looking for. These snags can cause your users to get frustrated, confused, or enraged to the point of transforming into enormous, radioactive green monsters.
On the other hand, usability testing can help identify things you’re doing right. You can see areas where users have great, streamlined experiences--so you can learn which elements or environments make your product easy to use. Then, you can prioritize designs and features in other areas of your product.

Data wins again. - image source

Analytics vs. Usability Testing

There are so many data-gathering tools that tell you what your users are doing with your product, but these metrics only reflect actions. Actions are often executed by users who are frustrated or even rage clicking -- rapidly and repeatedly clicking on elements looking for results.
Just because an action was taken doesn’t mean you have a happy user who is having a great time. Frustrated users may even stay on your product, clicking around, visiting pages, but these actions won’t tell you if the environment you’re providing is easy to use or pointing them toward what they’re looking for. Some users can be gluttons for punishment.

Usability Testing Categories

Testing isn’t a one-time thing, and it’s also not always executed in the same way or the same context. There are multiple stages in developing, managing, and optimizing your product. You should implement testing to explore, assess, and compare.
Here’s a breakdown of these three stages.

  1. Explorative
    Typically used in early product development to determine the effectiveness of a more raw design or prototype, explorative testing focuses on users’ thought processes and conceptual understanding of the purpose of your product.
  2. Assessment
Used in the middle of your product’s development, assessment testing focuses more on overall usability. Evaluate real-time trials of the technology to determine satisfaction, effectiveness, and ease of use.
  3. Comparative
You’ll observe users comparing two or more products or designs, which helps distinguish strengths, weaknesses, and opportunities. This is also a great opportunity to quantify comparisons: time users completing the same task in each product, track the number of steps identical tasks take, and note pros, cons, and missed opportunities for additional features.

These are three common stages or settings in which to conduct your usability tests -- but as you can probably tell, to get the most out of your product, testing should be continuous.

“You’re killing me, smalls.” - image source

When it comes to practical testing, you’ll be examining people -- not just numbers, movements, or statements. With today’s technology, you don’t even need to meet them face-to-face, although there are pros and cons to both physical and virtual testing methods...

  • Physical (in-person)
    You’ll meet your current or potential users face to face, and observe them as they go through a set of predefined tasks. You’ll need a killer, well-thought-out script or list of questions for the best possible in-person testing session. Stacey, a UX designer working on a fitness product, went to a mall in San Francisco, where she approached shoppers leaving yoga and sports stores to gather information for the fitness app she was working on.
  • Virtual (via screen shares)
Typically, this is done with screen sharing or session recording; just as with the physical method, you observe your subjects performing predefined tasks within your product while you gather insights about their experience and observe their reactions. This article details how to use GoToMeeting to evaluate visitors who are currently using your website.

Ok, so now you know what categories of testing you can conduct, and how to evaluate users in-person or virtually. You’re probably asking, “How can I get started right away on the cheap?” I am so glad you asked...

Budget-Friendly, Fast Feedback Methods

There are a bunch of ways you can quickly gather usability data. You can try:

  • 5-Second Usability Tests
  • Click Tests
  • Open-Ended Question Tests
  • Navigation Tests
  • Preference Tests
  • Surveys (e.g. SurveyMonkey)
  • Hallway Tests (try in person, or via social networks, Skype, or Google Hangouts)
  • Gathering support emails
  • Soliciting feedback via social media

A Hotjar heatmap we used here at KB on our Resources page

At KlientBoost, I routinely implement heatmaps and scroll maps to evaluate how our visitors are interacting with our pages.
For our Resources page, we wanted to know which resources were getting more click action. We found that our guides and webinars were much more popular than our interviews and mentions, so we de-emphasized those and began working on additional guides and webinars for future marketing offers.

5 Examples of the Type of Feedback to Collect

Now, you might be imagining a scenario where you painstakingly set up a few usability tests and then find yourself staring a test subject in the face, not knowing what to ask. Don’t worry. Here are a few great topics and open-ended questions to get you started:

  • Can users easily describe what your product offers?
  • What information/features do your users expect (or need)?
  • Do your users find value in their experience?
  • What works, what’s annoying, and where could your site be more helpful?
  • What’s the one thing your users couldn’t live without?

Jessica took all of the data she gathered and identified similar pain points and experiences. With all of these together, she aligned the commonalities and was able to prioritize where testing should occur to alleviate these issues.

9 Super-Duper Practical Usability Testing Tips

If you’re new to all of this, you’ll love these. Here are some really practical tips to get you started gathering killer data and forming testing hypotheses:

  1. Define goals for your study. Have an understanding of the business priorities and users’ priorities. Your goals will shape the direction of your questioning and the actions you’ll observe. There’s no room for the “ready-fire-aim” mentality here -- have a plan.
  2. Make sure you’re gathering the right users. You should know who your ideal user is or your primary persona. There’s no point in optimizing for people who aren’t relevant to your core audience.
  3. As a moderator, have a well-prepared script, but keep the conversation loose. Structure your questions to keep your subjects talking about what they value and what they need. Shape every question into an open-ended question.
  4. Use segmentation. Segment your in-person users based on their LTV (lifetime value) metrics.
  5. Provide a set of actions for your subjects to perform. These should be based on the primary, secondary, and tertiary functions of your product.
  6. User recording tools allow you to segment your audience based on traffic sources, demographics, behavior, and actions. For context, it’s best to group your recorded sessions into your familiar persona groups.
  7. Avoid corrupting your results by interrupting your users or leading them. For physical testing, do your best to be an impartial moderator with extremely limited interaction and direction.
  8. When observing your users, try using timers. You should also identify friction points (or pain points) by noting moments of confusion. Look for gestures and pauses, as well as exploratory scrolls and clicks.
  9. Users who know they’re being observed often try to perform for you. They overthink the tasks at hand, and this can muddy your data and skew your results. For best results, try your best to simulate a comfortable, natural environment for them to interact with your product.

If you follow these tips, you’ll fall in love with testing just like we did.

Testing gives mad feels, yo. - image source

Analyzing Your Results

Ok, you’ve got tons and tons and tons of results--what now? There are two main ways to turn your qualitative results into actionable insights:

  • Numeric: You can quantify the different parameters to find commonalities.
  • Depth: You can delve into how frustrated or comfortable your observed users were.

You’ll need to compare interactions and experiences across diverse users, so a numeric scale can be a huge help. You’ll need varying parameters to measure the differences between all of your recorded sessions or observations.
These criteria should be based upon your needs and priorities. In other words, you should establish a hierarchy of actions or functions of your product that would affect a user’s experience while simultaneously affecting your product’s viability.
Then, with your priorities in order, you can structure a pattern where you can easily identify commonalities and similar experiences to identify your pain points or poor user experiences. If high-level math and algorithms (or whatever) aren’t your thing, it’s cool. We’ve got good people for that…nerds.
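One way to put sessions on a common numeric scale is to weight each observed friction point by how badly it hurts the experience, then rank sessions by their total score. Here’s a toy sketch -- the event names and weights are made up for illustration, not a standard:

```python
# Hypothetical weights: how much each kind of friction hurts the experience
WEIGHTS = {"confusion": 3, "rage_click": 5, "back_track": 2, "task_fail": 8}

def friction_score(observed_events):
    """Sum the weighted friction events from one recorded session,
    so sessions can be compared on a single numeric scale."""
    return sum(WEIGHTS.get(event, 0) for event in observed_events)

# Events noted while reviewing three recorded sessions (hypothetical)
sessions = {
    "user_01": ["confusion", "rage_click", "confusion"],
    "user_02": ["back_track"],
    "user_03": ["task_fail", "confusion", "confusion"],
}

# Rank sessions worst-first to prioritize where testing should focus
ranking = sorted(sessions, key=lambda u: friction_score(sessions[u]), reverse=True)
print(ranking)  # ['user_03', 'user_01', 'user_02']
```

Whatever scheme you use, the point is the same: convert messy observations into comparable numbers so the worst experiences rise to the top of your to-do list.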

An example of the System Usability Scale - image source

You should try this really great numeric scale that’s been used for over 30 years: the SUS (System Usability Scale). It’s a Likert scale invented by John Brooke back in 1986. It’s basically ten statements that subjects rate on a scale from one (strongly disagree) to five (strongly agree).
With all of these responses in a numeric scale, you’ll be able to see trends much easier than reading review after review. Here’s an example of how to use Brooke’s SUS.
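Scoring a SUS questionnaire is mechanical enough to automate. The scoring rule is Brooke’s standard one: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is multiplied by 2.5 to land on a 0–100 scale. A minimal Python sketch (the function name is just illustrative):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses, each an integer from 1 to 5.

    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The 0-40 total is then
    multiplied by 2.5 to map onto a 0-100 scale."""
    if len(responses) != 10 or any(r < 1 or r > 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Example: one fairly positive session
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Run this over every completed questionnaire and you get one comparable number per subject -- as a rough benchmark, scores around 68 are generally considered average.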

The glorious Unbounce dashboard

Here’s a super-simple example of some data to analyze (above). The Unbounce dashboard omits tons and tons of other metrics like: time on site, bounce rate, duration of visit, etc. It also isolates a simple analysis of how many visitors we’ve had, how many of them converted, and the comparison of how one variant compared to the other. Unbounce does all the hard work for us, so we can easily see which variant is more successful.
We’re constantly A/B testing on our landing pages in Unbounce, a popular landing page builder with built-in A/B testing. Here’s the result of a test we ran in Unbounce on a landing page for our case studies. Our design team hypothesized that visitors might convert better if the form was designed differently (to emphasize our call to action and messaging, and make the form more clickable overall). This alternate design improved the conversion rate by 61%. Which is pretty cool…
That’s an example of what you’re trying to achieve. You need to coordinate a way to isolate important commonalities to interpret successes and failures.
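The core arithmetic behind a comparison like that is simple: each variant’s conversion rate is conversions divided by visitors, and the relative uplift is the difference between the two rates divided by the baseline rate. A quick sketch with hypothetical numbers (these are not the actual figures from our test):

```python
def relative_uplift(visitors_a, conversions_a, visitors_b, conversions_b):
    """Return each variant's conversion rate and variant B's
    relative uplift over variant A, all as fractions."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    return rate_a, rate_b, (rate_b - rate_a) / rate_a

# Hypothetical traffic and conversions for two landing page variants
rate_a, rate_b, uplift = relative_uplift(1000, 50, 1000, 80)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  uplift: {uplift:.0%}")
```

Before declaring a winner, remember that small samples can make a big-looking uplift meaningless -- most A/B testing tools (Unbounce included) report a confidence level for exactly this reason.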

Numbers, numbers, numbers, OMG - image source

We’ve been talking about fitting experiences into a numeric system, so let’s shift into a more human approach...
In terms of measuring the depth of your results, you should be conducting interviews with your in-person test subjects, or contacting your session recording subjects for further analysis. These conversations should lead your subjects into discussing moments when they became confused, frustrated, or distracted.
This is critical, because they may not have been confused--but still found the experience frustrating. Visitors are much more than numbers and their experiences are complex, so this approach can yield really great insights into providing them with an optimized experience.
User research can sometimes seem cold or robotic -- quantifying and analyzing experiences and interactions, so it’s best to try and use these as a basis to establish a feeling or qualitative impression. Make it a human experience, and help your subjects communicate in that way. Be real.

Wrap Up on Website Usability Testing Methods

This was a pretty quick look at usability testing, and I hope you fell in love with the idea of gauging your users’ experiences. Now that you’ve got some tricks up your sleeve, bring ’em to your next marketing meeting and impress your teammates. You can lead the way to gauge usability really fast, on the cheap, or in an in-depth way (like a boss) to gain some game-changing insights.
What are some ways you and your team have gathered this kind of data? Any wins you wanna brag about? Let us know.

Chapter 2:
CRO Testing

What You’ll Learn: Now that you’ve learned what to research, it’s time to start running tests to increase your conversion rates.

Chapter 3:
CRO Strategies

What You’ll Learn: You understand CRO, you know how to accurately test, now what tried & true strategies can you put to the test?

Chapter 4:
CRO Tips and Tricks

What You’ll Learn: Sometimes one new idea is all you need to get a new test to perform better. Here are some tips and tricks to take you further.