Using click testing to validate website designs

Exploring new ways to improve the sutton.gov.uk website

Luke Piper
3 min read · Sep 13, 2021

What’s the problem?

Over the last 12 weeks, the sutton.gov.uk redevelopment team has designed and tested hundreds of concepts for a new council website. The best ideas have been shortlisted and now the challenge is figuring out which design is the most effective and should be prioritised in beta.

One of the methods we’re using to validate effectiveness is first-click testing. This activity captures where on an interface a user first clicks or taps when completing a task. It’s a useful way to understand whether users find a website clear and navigable.

Why is click testing important?

Usability research shows that when a user’s first click leads down the right path, overall task success tends to be much higher (around 87%) than when the first click leads down the wrong path (around 46%).

These statistics echo the feedback we’re getting about the current sutton.gov.uk website. We’re hearing that the design lacks clarity and often leads users into the wrong areas of the website. The result is a frustrating experience, with users preferring to call or email the council instead.

With this in mind, the design of a new Sutton website needs to be thoroughly click tested to ensure users can successfully find what they need.

How do we conduct a click test?

The image below shows two variations of a homepage for the new website. Each variant uses a slightly different set of components and navigation structure. We want to know which variant users find clearest to navigate.

Side-by-side comparison of two variations of a homepage for the new Sutton council website

To test this, we gave users a simple instruction: indicate where on the interface they would go to renew a parking permit. The image below shows the responses received as a heatmap.

A click map showing where users would go to renew a parking permit
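
A heatmap like this is essentially just binned click coordinates. As a rough sketch of the idea (the coordinates and cell size below are invented for illustration, not data from our study):

```python
from collections import Counter

# Hypothetical first-click coordinates in pixels, one per participant
first_clicks = [(612, 138), (608, 150), (215, 472), (224, 469), (230, 481)]

CELL = 50  # group clicks into 50x50 pixel cells

# Count how many first clicks land in each grid cell; dense cells
# become the "hot" areas on the rendered heatmap
heatmap = Counter((x // CELL, y // CELL) for x, y in first_clicks)

for (col, row), count in heatmap.most_common():
    print(f"cell ({col}, {row}): {count} click(s)")
```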

Once we’ve received a significant number of completions, the next step is to analyse the data. Here we’re looking at three things to confirm whether a design is effective: distribution, time, and confidence.

Distribution

Did users click or tap in places that might lead to a successful task completion? In the parking permit heatmap above, we see that Variant A generated lots of clicks in areas unrelated to the task. Variant B, however, received a more accurate click distribution.

A noisy click scatter might signal that a design is causing uncertainty.

Time

How quickly did users decide where to click or tap? In the example above, the average click time for Variant A was 10 seconds whereas for Variant B it was 7 seconds.

A long decision time suggests users are having to think harder to find what they need.

Confidence

How confident do users feel that their click or tap would lead to completing the task? To determine this, we asked users to rate how they felt on a confidence scale. In the parking permit example, users tended to feel more confident in their decisions when using Variant B.

Lower levels of confidence suggest the labelling or design of an interface is unclear.
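
To make the three checks concrete, here’s a minimal sketch of how the numbers could be pulled together. The click records, target region, and figures below are hypothetical placeholders, not our actual data or tooling:

```python
from statistics import mean, median

# Hypothetical click records: one dict per participant response.
# x/y are first-click coordinates, seconds is time to first click,
# confidence is a self-reported rating from 1 (low) to 5 (high).
responses = {
    "Variant A": [
        {"x": 612, "y": 140, "seconds": 12.0, "confidence": 2},
        {"x": 210, "y": 478, "seconds": 9.5, "confidence": 3},
    ],
    "Variant B": [
        {"x": 224, "y": 470, "seconds": 6.8, "confidence": 4},
        {"x": 231, "y": 476, "seconds": 7.1, "confidence": 5},
    ],
}

# Assumed bounding box of the link that actually leads to the task
TARGET = {"left": 200, "top": 450, "right": 320, "bottom": 500}

def on_target(click):
    """Did the first click land inside the correct region?"""
    return (TARGET["left"] <= click["x"] <= TARGET["right"]
            and TARGET["top"] <= click["y"] <= TARGET["bottom"])

for variant, records in responses.items():
    hit_rate = sum(on_target(r) for r in records) / len(records)  # distribution
    decision_time = median(r["seconds"] for r in records)         # time
    avg_confidence = mean(r["confidence"] for r in records)       # confidence
    print(f"{variant}: {hit_rate:.0%} on target, "
          f"{decision_time:.1f}s median decision time, "
          f"confidence {avg_confidence:.1f}/5")
```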

Going forward

In this example, we’re confident that Variant B generally outperformed Variant A for usability. We now need to test how our shortlist of design variants performs when users complete other tasks (e.g. where would you go to pay a council tax bill?). Depending on the insights gained, we might then amend any poorly performing elements of a design to see whether this makes a difference to usability.

Once we’ve tested all our design variants across a broad range of tasks, we can use the evidence to make an informed decision on which website design to focus on in beta.
