Over the past 14 years, we’ve collected case studies and UX metrics from real-life projects for 5 different editions of our UX Metrics and ROI report. This collection of data allows us to look back over a variety of design projects since 2006 and see how the impact of UX design has changed.
About the Case Studies and Metrics
Since 2006, our UX Metrics and ROI report (previously titled Return on Investment for Usability) has served as a showcase of real-life UX-benchmarking projects. Over 5 editions, we’ve collected a total of 99 case studies. Each case study details the product, its design problem, design changes, and at least one pair of UX-metric values (before and after the design change).
For each metric pair, we calculated an improvement score — an estimate of the relative magnitude of each design project’s impact on the metric, expressed as a percentage. For example, imagine that the conversion rate on an ecommerce site was 2% before a redesign. After the redesign, the conversion rate was 5%. In this case, the ratio between the two values of the metric (after and before) would be 5/2 = 2.5, for an improvement of (5-2)/2 = 1.5 = 150% in the conversion metric.
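The arithmetic above can be captured in a small helper (a minimal sketch; the function name `improvement_score` is ours, not the report's):

```python
def improvement_score(before: float, after: float) -> float:
    """Relative improvement of a UX metric, as a fraction (1.5 == 150%)."""
    return (after - before) / before

# Example from the text: conversion rate goes from 2% to 5%.
print(improvement_score(2, 5))  # -> 1.5, i.e. a 150% improvement
```

Note that this sketch assumes a higher metric value is better; a metric such as time on task, where lower is better, would need the ratio flipped.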
We calculated improvement scores for all the collected case studies and compared the improvement scores from 2006–2008 against the new improvement scores collected in 2020.
Substantial Decrease in Average UX Impact
Even by 2008, when we published the 3rd edition of this report, UX improvements had shrunk compared with 2006. That trend has continued through to 2020.
On average, UX improvements have substantially decreased since 2006–2008: from 247% to 75% (a 69% decrease). This difference is statistically significant (p = 0.01) — we can be quite confident that average improvement scores are lower now than they were 12–14 years ago.
These numbers are averages across many different design projects. We found an immense amount of variability in our data — the 95% confidence interval for the 2020 data is quite wide, from 8% to 104%. (Because the collection of improvement scores from 2006 to 2008 doesn’t have a normal distribution, we can’t provide a confidence interval for that dataset.)
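The report doesn't say how its statistics were computed. One common way to estimate a confidence interval for skewed, non-normal data like these improvement scores is a percentile bootstrap, sketched below with made-up scores (the numbers are illustrative, not the report's data):

```python
import random

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean.
    Works even when the data are not normally distributed."""
    rng = random.Random(seed)
    # Resample with replacement many times and record each resample's mean.
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical improvement scores (fractions, so 0.75 == 75%):
sample = [0.08, 0.13, 0.5, 0.75, 1.04, 1.57, 2.5]
low, high = bootstrap_ci(sample)
```

With skewed data like these, the resulting interval is typically asymmetric around the sample mean, which is exactly why a normal-theory interval would be misleading here.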
There’s a substantial difference between (for example) an 8% increase in a conversion rate and a 104% increase. We weren’t surprised to see such a wide range because the outcome of any benchmarked design project depends heavily on the following factors:
- The existing quality of the experience: A product with many big UX problems has lots of opportunities for big improvements.
- The expertise and talent of the team: The better the UX team, the more likely it is to make the right design choices (based on research and experience). However, even an excellent UX professional won’t get every design choice right every time.
- The quantity and quality of the changes: A large project with many changes may be more likely to have a big impact on metrics than a small one. However, this is not always a hard rule: when there are many changes, there are also many opportunities to go wrong. Moreover, some of the case studies collected for the report show big metric impacts as a result of small, smart design changes.
- The precision and sensitivity of the methods and metrics: Some research methods are more sensitive than others. Part of this difference comes from the different available sample sizes for each method. For example, A/B testing on a main page of a site can capture thousands of data points easily, enabling researchers to detect even a slight change of a few percentage points. But with quantitative usability testing, it is a lot more expensive to get such a large sample size.
- How difficult it is to change the measured behavior or preference: The more resistant your users are to taking a specific action, the more challenging it will be to change their behavior. For example, consider the proportion of users making a big purchase on a luxury ecommerce site. Even a big improvement in design might cause only a small change in how much people want to give you money.
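The sensitivity point in the list above can be made concrete with a standard two-proportion sample-size approximation (our illustration; the report doesn't prescribe this calculation):

```python
from math import sqrt, ceil

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect a change in a
    conversion rate from p1 to p2 (two-sided alpha = 0.05, 80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A large jump (2% -> 5%) needs only a few hundred visitors per variant...
print(n_per_group(0.02, 0.05))
# ...but a subtle one (2% -> 2.5%) needs many thousands,
# which is cheap for an A/B test on a busy page but very
# expensive for a moderated quantitative usability study.
print(n_per_group(0.02, 0.025))
```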
Bear in mind that your individual design project may have a different impact depending on these factors. It’s possible your improvement will be close to the average of 75% but, as the wide confidence interval shows, there is an immense amount of variability. In our data, half of the values were between 13% and 157%.
This doesn’t mean that we can expect 50% of all design projects to have an improvement score within that range. It’s possible your own impact could be a 500% or 5,000% improvement, but our data suggests that an improvement score that high is unusual. Similarly, you could see a low improvement score of a few percent, but that would also be rare. In most cases, you can avoid a negative score: even though it’s certainly possible to make the design worse, as long as you test it well before implementation, you will not release a degraded user experience.
Smaller UX Improvements Are Good
At first glance, it might look like the decreasing trend in the average improvement resulting from a UX-focused redesign means that UX professionals have gotten worse over the years. We believe that the opposite is true. Over the past 14 years, the UX-design community has grown substantially — and so has our collective knowledge and experience.
This decrease in average improvement scores doesn’t mean we’re doing a bad job; it shows that, as an industry, we’ve done an excellent job over the past 10+ years. At the beginning of the human factors and usability movement, just about every product had substantial room for improvement.
“In these early years, design was truly abominable — think splash screens, search that couldn’t find anything, bloated graphics everywhere. The only good thing about these early designs was that they were so bad that it was easy for usability people to be heroes: just run the smallest study and you would inevitably find several immense opportunities for improvement. Finding and fixing UX problems during the dot-com bubble was like shooting fish in a barrel — every design was so bad!”
Jakob Nielsen, Principal of Nielsen Norman Group
We’ve addressed many of the biggest UX problems across many different designs. In some cases, those were individual fixes for specific problems. But as an industry, our collective knowledge has grown as well — we now have a rich set of best practices and design patterns. Each individual designer can build on the existing work of the designers that have gone before.
So, are all the world’s UX problems now fixed, and are all designs perfect? Certainly not. There’s still substantial room for improvement for the majority of experiences. (Luckily, that means UX professionals have great job security.) This finding simply shows that we’ve done a good job of improving experiences overall and addressing the most glaring problems.
Does this mean that UX is less important or impactful today? Also no. We believe that even though the magnitude of these design changes has decreased, they are no less important. Because experiences overall are getting better, user expectations have gotten much, much higher. Today, if you put a website considered adequate in 2006 in front of a user, she would refuse to use it.
We believe that those expectations will continue to rise in lockstep with the average quality of experiences. (Again, this is good for UX professionals’ job security!) As a consequence, even small improvements in the UX may be worth the expense. Also, remember that your competitors are likely improving their experiences as well — a UX arms race.
Future Predictions
If these trends persist, designs will continue to improve on average, and — as a result — UX interventions will continue to have small effects. However, we believe that those smaller changes will be valuable because user expectations and standards will grow ever higher as well.
As an industry, we’re moving into a period of refinement. If we think of building our experiences like building a house, the 2000s were a time when the focus was largely on the foundation, the walls, the roof — we were focused on just making sure the structure didn’t collapse. In 2020, many teams are finding that the structure of their experiences is sound and they can now turn their attention to fine-tuning: painting the walls, installing skylights, interior décor. Each individual change may not have as massive an impact as it once did. But the accumulation of these small refinements will produce more sophisticated, enjoyable experiences.
For more information on these case studies and more analysis of the data (including analysis of how each category of UX metric improvements has changed over time), check out the full report: UX Metrics and ROI.