Why is A/B testing analysis important?
By now, you’re likely familiar with the basics of A/B testing and its use.
To summarize, A/B testing pits two or more variations of a landing page, marketing asset, or individual element (like a headline) against each other to help you figure out which one performs best.
These tests help you gather critical data that you can then use to inform your CRO efforts.
Analyzing your A/B test results is important for four major reasons:
- To see whether your hypothesis held up
- To figure out which variant was most successful
- To understand why the test generated its results
- To make informed decisions
Basically, you want to determine if what you did worked, why it worked, and what findings you can use going forward. Think of A/B testing analysis as a debrief for your experiments, where you arrive at your conclusions.
You evaluate and assess your data based on key metrics and benchmarks to determine what elements are working, what elements need to be changed, and to learn more about how your pages are performing.
In some cases, an A/B test has a clear, obvious winner—if variant A outperforms variant B and the only difference between them is the headline, the headline is almost certainly what made the difference.
In other cases, digging into the data can help you optimize without overhauling everything.
But what metrics and KPIs should you be tracking?
The 10 best A/B testing success metrics and KPIs to track
Figuring out what metrics and KPIs to track in your A/B testing efforts is sometimes extremely easy… and sometimes it’s not.
The best way to figure out what you should be tracking is to map out your user journey from start to finish. Every change you make as part of an A/B test impacts how a customer interacts with your landing page. Mapping that user journey, even roughly, will help you determine what metrics will measure your success.
Here are 10 of the best A/B testing metrics and KPIs you can track right now.
1. Conversion rate
The conversion rate of a given landing page is probably the most obvious metric you can track in your A/B testing—and definitely worth tracking as part of your CRO efforts.
Tracking your conversion rate is pretty simple, too—there’s an easy formula to follow:
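The formula is simply your conversions divided by your total visitors, multiplied by 100. If you’d rather let a script do the math, here’s a minimal Python sketch (the numbers are placeholders):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate = (conversions / total visitors) * 100."""
    return conversions / visitors * 100

# e.g., 43 sign-ups from 1,000 visitors
print(conversion_rate(43, 1000))  # 4.3
```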
All you need to do is determine what constitutes a conversion, whether it’s a sign-up, demo request, or a sale, and you’re off to the races. Easy, right?
Generally speaking, a higher conversion rate means more business, which is why it’s so central to conversion rate optimization.
So what is a good conversion rate, then?
It depends on the industry you serve. Across all industries, the median conversion rate is 4.3%. Conversion rate will vary dramatically depending on your offerings, the industry you work in, and your goals. Some businesses have a much longer sales cycle than others, particularly in SaaS.
And it’s worth reiterating the point here—a conversion could just be a completed action on a piece of content, like a form submission.
For businesses in the SaaS space, to continue the example, a strong conversion may just be getting users in at the top of the funnel, keeping them engaged with content as they work their way toward a final sale. By comparison, an ecommerce site’s conversion rate may be a direct measure of completed purchases.
Zooming out from the landing page experience, conversion rate is a massively important stat that can help you uncover fascinating data about your users and their journey.
2. Bounce rate
In contrast to your conversion rate, bounce rate is the percentage of visitors who enter a landing page but leave without taking additional actions—they bounce right out of there. These exits are known as single-page sessions.
You can calculate your bounce rate with this easy formula:
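Bounce rate is your single-page sessions divided by your total sessions, multiplied by 100. A quick Python sketch (with placeholder numbers):

```python
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Bounce rate = (single-page sessions / total sessions) * 100."""
    return single_page_sessions / total_sessions * 100

# e.g., 320 of 1,000 sessions ended without a second action
print(bounce_rate(320, 1000))  # 32.0
```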
Checking the bounce rate on your landing pages is a great way to gauge visitor interest. If you have a high bounce rate, then it could mean that you’re not grabbing attention the way you’d hoped—you might need to rejig your headline and offer, for example.
Alternatively, a higher bounce rate could also mean that there are user experience issues on your page, and that they’re causing confusion—for example, the layout doesn’t guide a visitor through your content and to your offer.
Either way, bounce rate is a great indicator of the overall quality of your page, the content on it, and its UX.
Many of the activities and processes you engage in as part of CRO will have a knock-on effect of reducing bounce rate, simply because you’re making more engaging, more relevant, and more useful content that users will want to stick around and interact with.
3. Click-through rate (CTR)
Click-through rate is a metric every marketer should be tracking on any CTA they have in the wild. Click-through rate specifically measures the percentage of clicks on a specific link, as compared to the number of times the link was shown (AKA the number of impressions).
Here’s how you calculate your click-through rate:
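CTR is clicks divided by impressions, multiplied by 100. In Python (placeholder numbers again):

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR = (clicks / impressions) * 100."""
    return clicks / impressions * 100

# e.g., 25 clicks on a CTA shown 1,000 times
print(click_through_rate(25, 1000))  # 2.5
```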
And like the conversion rate and bounce rate of a landing page, CTR is one of the bread-and-butter stats that everyone should be watching to track performance.
A low CTR could mean that you need to spruce up your calls to action or that the elements you have on your landing page aren’t working together as intended.
After all, every part of your landing page impacts your conversion rate, including your clickable CTAs. Building optimized CTAs that get the click is key to continued success.
4. Scroll depth
Now we’re getting into the deeper stuff (pun definitely intended).
Scroll depth refers to how far down a web page a user scrolls. It’s a relatively straightforward metric, but tracking it is another story.
The good news—tools exist that integrate with your landing pages to actively track scroll depth. You can add scroll depth tags to your Google Analytics tracking or use a dedicated heatmap tool like Hotjar.
Because it’s so closely linked to bounce rate and other time-on-page metrics, scroll depth will always be an important metric for A/B testing and landing pages. It can provide a lot of information about user behavior.
Good scroll depth typically falls between 60 and 80% of the way down a page. Basically, anything over 50% is a strong showing.
That said, it’s important to contextualize scroll depth by comparing it with time on page or session duration (the amount of time a user spends on your landing page or website).
- A high scroll depth percentage coupled with a short session duration could indicate visitors are glancing at your page without engaging.
- On the other hand, a low scroll depth but high session duration could suggest that users are focusing on the above-the-fold section and not engaging beyond it.
Most tools you use to track scroll depth will automatically run the numbers for you, but if you ever need to run some fast numbers on your own, here’s the formula for calculating scroll depth yourself:
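Scroll depth is the distance a visitor scrolled divided by the total height of the page, multiplied by 100. Here’s a rough sketch in Python (assuming you’ve pulled the pixel values from your analytics tool):

```python
def scroll_depth(pixels_scrolled: float, page_height: float) -> float:
    """Scroll depth = (distance scrolled / total page height) * 100."""
    return pixels_scrolled / page_height * 100

# e.g., a visitor scrolls 2,400px down a 4,000px-tall page
print(scroll_depth(2400, 4000))  # 60.0
```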
Scroll depth is a great metric to track to determine the effectiveness of a page’s content.
5. Abandonment rate
At first glance, abandonment rate just looks like another term for bounce rate. But while bounce rate measures page exits, abandonment rate measures the percentage of users who quit a task before completing it.
In other words, abandonment rate tells you how often users bounce when they’re partway through converting.
This metric is most commonly tracked on ecommerce sites where users add items to their shopping cart but leave without purchasing.
There are a few common contributors to abandonment rates:
- Confusing checkout or conversion process
- Unexpected shipping costs
- Lack of payment options
- Security concerns
- Window shopping
Regardless of the reason, abandonment rate is another important metric to help drive insights into user behavior.
Generally speaking, the greater the abandonment rate, the more likely it is that there are elements on your page causing friction. Tracking your form abandonment rate can help you determine which of your A/B testing variants is more likely to contribute to a sale.
Here’s a quick way to calculate it for forms in particular:
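Form abandonment rate is the number of visitors who start a form but don’t finish it, divided by everyone who started, multiplied by 100. In Python (placeholder numbers):

```python
def form_abandonment_rate(forms_started: int, forms_completed: int) -> float:
    """Abandonment rate = ((starts - completions) / starts) * 100."""
    return (forms_started - forms_completed) / forms_started * 100

# e.g., 200 visitors start your form, but only 120 submit it
print(form_abandonment_rate(200, 120))  # 40.0
```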
6. Retention rate
In the context of A/B testing metrics, retention rate is the percentage of users who revisit a landing page or website after a period of time.
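There’s no single canonical formula here, but one common approach is to divide returning visitors by total visitors over a window you define. A quick sketch, assuming you’ve already segmented returning visitors in your analytics:

```python
def retention_rate(returning_visitors: int, total_visitors: int) -> float:
    """Retention rate = (returning visitors / total visitors) * 100,
    measured over a time window you define (e.g., 30 days)."""
    return returning_visitors / total_visitors * 100

# e.g., 180 of 1,000 visitors came back within 30 days
print(retention_rate(180, 1000))  # 18.0
```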
This is another user behavior metric to keep an eye on because it provides another indicator of how effective a page is. Comparing retention rates between page variants can help you identify which version will likely get return traffic and engagement. Comparing retention rates across audience segments can offer great insight into which types of audiences are the most likely to convert.
This is a great way to optimize a marketing campaign for long-term success, and you can experiment with different offers to try and encourage repeat or return business.
7. Session duration or average time on page
Session duration tracks how long visitors spend on your entire website at a time, which can span multiple different pages. Average time on page tracks how long visitors spend on a single page on your website.
Both are great metrics to track in your A/B testing.
Why? Well, think of it this way:
Imagine you’re throwing a party. You want your guests to stick around and have a great time, right? It’s not a great feeling when someone drops by just to make an appearance and then pulls an Irish Goodbye after less than an hour.
Session duration and time on page in A/B testing is kind of like that. It tells you how long visitors (party guests) are spending on your pages (your party).
Tracking session duration and time on page in your A/B tests can help you figure out which version of your page or offer is more appealing and interesting to your audience. It’s not a complete picture of why they’re hanging around, but the longer visitors stay on your pages, the more likely it is you’re doing something right.
These metrics, in conjunction with others that focus on user experience, can help you figure out more information about what’s working on your pages.
8. Average order value
Average order value, or AOV, is a metric that tracks how much money a customer spends on a purchase through your pages (just like it says on the tin). This metric is typically tied closest to ecommerce pages, and it’s an important one.
Imagine you’re testing out two different pages with different deals for your customers. Version A has some really incredible savings for your customers, and version B offers free shipping on all orders.
Tracking AOV lets you see exactly which version is better for your bottom line.
If you’re not factoring in AOV, you could end up declaring the wrong page a winner. Suppose version B has a 2x higher “add to cart” rate. Without AOV in the mix, this could look like a no-brainer A/B test. Version B all the way.
But what if the AOV on version A is 5x higher than version B? If users spend more money on version A, you’ve got a great insight into their habits and what offers will work going forward.
At the end of the day, tracking average order value and comparing it between your page variants lets you optimize your offers and could lead to better profit margins.
Here’s a quick way to calculate AOV on the back of a napkin:
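AOV is simply total revenue divided by the total number of orders. In Python (placeholder numbers):

```python
def average_order_value(total_revenue: float, total_orders: int) -> float:
    """AOV = total revenue / number of orders."""
    return total_revenue / total_orders

# e.g., $12,500 in revenue across 250 orders
print(average_order_value(12_500, 250))  # 50.0
```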
9. Churn rate
Churn refers to the number of customers who discontinue a service, typically in the context of ongoing subscriptions or service offerings.
So how does churn rate get measured with A/B testing?
It’s not quite as straightforward as some of these other metrics and requires a bit more thought and prediction.
For one, your marketing efforts for these customers are more like remarketing. You’re targeting existing customers that you think are likely to churn—that is, discontinue services—with offers designed to keep them on board and engaged.
A/B testing to reduce churn means tracking which offers work best to keep existing customers engaged and subscribed.
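For reference, churn rate is commonly calculated as the customers lost during a period divided by the customers you had at the start of it. A quick sketch:

```python
def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Churn rate = (customers lost in a period / customers at start) * 100."""
    return customers_lost / customers_at_start * 100

# e.g., losing 25 of 500 subscribers in a month
print(churn_rate(500, 25))  # 5.0
```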
10. Revenue
Revenue is probably the most important metric you can track when A/B testing.
Tracking revenue lets you directly measure your hypotheses against your bottom line. You’ll often see this metric tracked in conjunction with things like AOV, your overall conversion rate, and abandonment rate.
Basically, A/B tests that track revenue let you see, with greater clarity, what impact your CRO efforts have on profit and whether they truly benefit your business.
Digging into revenue can help you uncover buying patterns and what messages are most likely to resonate, but it’s important to be cautious when attributing revenue changes to specific A/B tests. That’s because the revenue data from an A/B test that runs for a month isn’t necessarily applicable to all your pages at all times.
It’s important to carefully analyze your A/B testing data to make informed decisions that lead to long-term success, not flash-in-the-pan bursts of business.
Best practices for analyzing A/B testing results with key metrics
Now that you’re more familiar with some of the key metrics used in A/B testing, it’s time to put them to work.
Before you run any A/B test, define the specific metrics you want to measure and establish a clear hypothesis. Follow your standard procedure for running an A/B test and run it until completion.
From there, it’s time to analyze your data. Here are five best practices to help inform your analysis.
Check for statistical significance and winning variants
Any experiment you run should have a clear end date. When you hit that endpoint, the first thing you should do is check whether the findings from your test are statistically significant.
Statistical significance means your results are unlikely to be the product of pure chance. In other words, your test results aren’t a random fluke—they’re the result of your experiment.
The tools you use to run A/B tests, like Unbounce’s A/B testing tool, will automatically determine if your results get the seal of approval.
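If you ever want to sanity-check the math yourself, a two-proportion z-test is one common way to compare conversion rates between two variants. Here’s a minimal sketch using the statsmodels library (the visitor and conversion counts are made up):

```python
from statsmodels.stats.proportion import proportions_ztest

# Conversions and visitors for variants A and B (placeholder numbers)
conversions = [48, 73]
visitors = [1000, 1000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

# p < 0.05 is the conventional threshold for statistical significance
if p_value < 0.05:
    print(f"Significant difference between variants (p = {p_value:.4f})")
else:
    print(f"No significant difference (p = {p_value:.4f})")
```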
From there, you can then look to see if there was a winning variant in your test. If one of the pages you had running clearly came out on top, make a note and record the data. If your champion page didn’t win, don’t worry—you can still gather plenty of data from the test.
Compare results with key metrics and KPIs
That list of metrics and KPIs we ran through up above? Yeah, we’re coming back to them!
Most of the tools you use to track marketing performance should already report on some or all of these metrics, so you shouldn’t have to worry about gathering all that data.
The metrics you measure against will be determined by the nature of your test, which means you should select them during your planning process to make sure they align with your goals and objectives.
Whatever A/B testing metrics you decide on, make sure they are specific, relevant, and measurable. That way, you can ensure you’re making informed, meaningful decisions about your A/B testing efforts.
Determine internal and external influence
As you’re analyzing your data, you’ve got to stay critical. Take stock of any internal or external variable that may have influenced or affected your results.
For example, let’s jump back to what we discussed about tracking revenue as a key metric.
As important a measurement as this is, it can also be tricky to analyze effectively. If you run a test on sales around what is traditionally a busy season for your business, or if something in the news or pop culture has drastically influenced the buying habits of your customers, you need to note it.
Basically, if there’s a chance your A/B testing efforts have been influenced by these factors, it could skew what the data is telling you.
Take action based on your A/B test analysis
Record your findings at the end of your analysis and determine if the test worked.