What is A/B testing? A step-by-step guide with ideas & best practices

In simple terms, A/B testing is when you test a new version of an asset against the current version to see which performs better.

Marketers run A/B tests to experiment with different messaging, page layouts, calls to action, color schemes, and more.

There are plenty of benefits to testing these elements rather than simply rolling out the changes right away. Mainly, you get to reduce the risk of rolling out a change that ultimately hurts performance, then having to roll it all back. You also get to isolate specific elements on the page to more deeply understand what actually moves the needle.

TABLE OF CONTENTS

  1. What is A/B testing?
  2. A/B testing terminology
  3. Why should you run A/B tests?
  4. What can you A/B test?
  5. How do you run an A/B test?
  6. A/B testing metrics
  7. How to analyze A/B test results
  8. Common A/B testing mistakes to avoid
  9. A/B testing tools
  10. A/B testing FAQs

In this guide, we’re taking a comprehensive look at all things A/B testing.

We’ll cover the different types of A/B tests, how to effectively plan and launch a test, how to evaluate if a test was successful, which metrics you should focus on, common mistakes we’ve seen over the years, and more.

Let’s get to it!

What is A/B testing?

A/B testing (sometimes called “split testing”) is a type of experiment in which you create two or more variants of a piece of content—like a landing page, an email, or an ad—and show ’em to different segments of your audience to see which one performs the best.

Essentially, A/B testing lets you play scientist—and make decisions based on data about how people actually behave when they hit your page.

How does A/B testing work?

In a typical A/B test, traffic is randomly assigned to each page variant based upon a predetermined weighting. For example, if you are running a test with two landing page variants, you might split the traffic 50/50 or 60/40. To maintain the integrity of the test, visitors will always see the same variant, even if they return later.
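If you’re curious what that looks like under the hood, here’s a rough sketch (in Python) of how a testing tool might handle weighted, “sticky” variant assignment. It’s purely illustrative: the visitor IDs and weights are made up, and this isn’t necessarily how any specific tool (Unbounce included) implements it.

```python
import hashlib

def assign_variant(visitor_id: str, weights: dict[str, float]) -> str:
    """Assign a visitor to a variant based on traffic weights.

    Hashing the visitor ID (instead of picking randomly on every visit)
    means the same visitor always sees the same variant, even if they
    come back later.
    """
    # Map the visitor ID to a stable number between 0 and 1
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF

    # Walk through the variants until the cumulative weight covers the bucket
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if bucket <= cumulative:
            return variant
    return variant  # fallback for floating-point rounding

# A 60/40 split between the original page ("A") and one challenger ("B")
print(assign_variant("visitor-123", {"A": 0.6, "B": 0.4}))
```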

The main factor in deciding how much weight to give each variant is timing: whether you’re launching multiple new variants at the same time, or testing new ideas against an established page.

A/B testing variants

PRO TIP: Keep in mind you need to drive a certain amount of traffic through test pages before the results are statistically significant. You can find calculators online or use landing page builders and conversion rate optimization tools, like Unbounce, that have A/B testing and confidence percentages built-in.

If you’re starting a new marketing campaign and have several ideas about which direction to take, you can create a variant for each idea.

In this scenario, you’d most likely assign equal weight to each variant you wanna test. For two variants, that’d be 50/50. For three, it’d be 33/33/34. And so on. You want to treat them equally and pick a champion as soon as possible. As you have no conversion data on any of the pages, begin your experiment from a position of equality.

If you already have a campaign that you want to try some new ideas out on, it’s usually best to give your new variants a smaller percentage of traffic than the existing champion to mitigate the risk inherent with introducing new ideas.

Admittedly, this will be slower. It’s not recommended that you try to accelerate an A/B test by favoring new variants though, as they’re not guaranteed to perform well. (Remember, A/B testing is all about mitigating risk. Test wisely!)

Multivariate testing vs A/B testing vs split testing vs A/B/n testing: What’s the difference?

The short answer is this:

  • A/B testing is a simple “Version 1 vs Version 2” test.
  • A/B/n testing is the same, but for three or more variants (n = variable).
  • Split testing is also typically a “Version 1 vs Version 2” test but for larger changes.
  • Multivariate testing tests multiple elements on a page at the same time.

Of course, there’s more nuance to it than that.

All four testing types may sound fairly similar on the surface, but it’s worth understanding the subtle differences before you start building and testing your hypotheses.

A/B testing

A/B testing is the most common form of testing used in digital marketing. Two versions of a landing page or element are compared to determine which one performs better in terms of a specific goal, like click-through rates or conversion rates.

A/B/n testing

A/B/n testing is the same as A/B testing, just with more than two variants. The “n” represents a variable number of variants (i.e. A/B/C/D/E/F, etc.). You can test more than two variants at a time. Just be careful not to go overboard, or you’ll be left trying to decipher the performance data version of The Da Vinci Code.

Split testing

Split testing is similar to A/B testing in that it involves comparing two versions of a page or element, but the term is more commonly used to describe larger changes or complete redesigns that require more detailed analysis. In truth, you can use the terms A/B testing and split testing interchangeably.

Multivariate testing

On the other hand, multivariate testing involves testing multiple variations of multiple elements on a page at the same time.

For example, in a basic A/B test you may leave everything the same between your variants except for the heading. This way you can isolate just one element for your test, and any changes in performance are likely connected to that element. In a multivariate test, you may be testing the heading, the CTA button, the form length and the background color all simultaneously.

Multivariate tests can definitely work, but the same risks that we mentioned with A/B/n testing apply here as well. Test too many things at once and you may not know which element actually led to any positive performance changes.

A/B testing terminology 101

Before we get into how you run an A/B test, it’s important to learn some fundamental testing terminology:

What is a “variant”?

“Variant” is the term for any new versions of a landing page, ad, or email you include in your A/B test. It’s the version where you apply the change you’re experimenting with—your “variable.” Although you’ll have at least two variants in your A/B test, you can conduct these experiments with as many different variants as you like. (But note that it’ll increase the time your test takes to achieve statistical significance.)

What is a “control”?

In the context of A/B testing, the “control” variant refers to the original or existing version of a webpage, email, or other marketing material that you are testing. This is the version that is currently in use before any changes are made. It serves as a benchmark against which the “challenger” or “variant B”—the modified version where one or more elements have been changed—is compared.

At the beginning of any A/B test, your control variant is also your “champion”.

What is a “champion”?

You can think about A/B testing like gladiatorial combat. Two (or more) variants enter, but only one variant leaves. This winner (the version with the best conversion performance, typically) is crowned the “champion” variant.

When you start an A/B test, your original version is your champion by default, since it’s the only version for which you already have performance data. Once the test concludes, you might find that one of your “challenger” variants has performed better than the original—which makes it your new champion.

What is a “challenger”?

When starting an A/B test, you create new variants to challenge your existing champion page. These are called “challenger” variants. If a challenger outperforms all other variants, it becomes your new champion. If it doesn’t, you can throw it in the scrap heap of failed marketing ideas.

What is “statistical significance”?

“Statistical significance” is a crucial concept in A/B testing that determines the reliability and validity of the results obtained from the test. It’s the key to understanding if the changes you’re testing genuinely affected outcomes, or if any differences in the data were just down to variance.

There isn’t necessarily one “correct” formula for calculating statistical significance, but one common approach uses Pearson’s chi-squared test to determine how likely it is that the difference between variants is real rather than random chance.
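If you’d like to see what that math looks like in practice, here’s a minimal sketch using Python’s SciPy library. The visitor and conversion counts are hypothetical, purely for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [conversions, non-conversions] for each variant
control    = [120, 2880]  # variant A: 120 conversions from 3,000 visitors
challenger = [160, 2840]  # variant B: 160 conversions from 3,000 visitors

# Pearson's chi-squared test on the 2x2 table of outcomes
chi2, p_value, dof, expected = chi2_contingency([control, challenger])

# A p-value below 0.05 roughly corresponds to 95% confidence that the
# difference between variants isn't just random variance
print(f"p-value: {p_value:.4f}")
print("Significant at 95% confidence" if p_value < 0.05 else "Not significant yet")
```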

Luckily, to spare you the high school math flashbacks (SIN vs COS vs TAN, anyone?), Unbounce’s A/B testing tools have confidence percentages built-in, so you don’t have to do all of the math yourself every time.

There’s always going to be some margin of error no matter which approach you take, so treat these formulas as directionally accurate rather than gospel.

Why should you run A/B tests?

Beyond sheer curiosity, here are four great reasons why you should A.B.T. your landing pages (always be testing, that is).

Reason #1: Improve conversion rates

First, the obvious answer.

A/B testing can lead to higher conversion rates.

In fact, that’s typically the reason most marketers start experimenting with A/B tests. You have a landing page that converts at 2% and you want it to convert at… more than that.

Especially when you consider that the average conversion rate is just 4.3% (across all industries, of course, but the premise still stands).

With A/B testing, you can experiment with every single element on the page to constantly chase a higher and higher conversion rate. The higher your conversion rate, the greater the return on investment in most cases, which leads us to the next reason.

Reason #2: Greater ROI from all traffic sources

Why would you want higher conversion rates?

To squeeze more juice out of the orange.

You’re already running marketing campaigns (or soon will be) and in most cases, the more conversions you can generate, the better. This is because your conversion events are ultimately part of larger conversion funnels or sales funnels.

One conversion event leads to the next, and so on.

For example, if you’re already planning to spend $10K on a PPC campaign to drive traffic to one of your core landing pages—why not actively test the page to make sure you’re maximizing the $10K investment?

After all, your business goals are likely tied to the actual returns (i.e. dollars) that the conversions lead to.

Higher conversion rates = more conversions = more ROI.

Reason #3: De-risk design layout and messaging updates

We’ve all been there.

You have a vision for what your landing page could look like, and you’re incredibly confident that conversions are going to skyrocket when you implement the changes. You’re so confident that you just hop straight into the editor, make the changes and go live.

…and then you look at Google Analytics the following week…

Your conversion rate dropped from 5% to just 2% on the page.

“Undo, undo, undo!”

Instead of going “full send” on your changes with your fingers crossed, you can significantly reduce the risk by running a simple split test first.

Best case scenario, you were spot on and the new version absolutely crushes the old one. And if things start going off the rails instead, de-risking the change means you can quickly pull the plug.

Reason #4: Better understand your customers & visitors

One of the more underrated benefits of A/B testing is the insight into user behavior you can pull from it.

You likely have some assumptions about what would motivate your ideal-fit users to actually convert on your pages, but the best way to test those assumptions is quite simple:

Test your assumptions with real users.

Run A/B tests to experiment with product messaging, value propositions, or just overall page layouts.

For example, you may find that mobile users are more likely to convert if your form is above the mobile “fold” rather than asking them to scroll. Or you may find that a higher percentage of users convert if you add a personalized element like the name of their city to your primary H1.

PRO TIP: You can use dynamic text replacement inside Unbounce to personalize your landing pages at scale. This way, instead of needing to create dozens of unique pages for each city, you can create one central page and dynamically update the city name.

What can you A/B test?

Most marketing departments rely on a mixture of experience, gut instinct, and personal opinion when it comes to deciding what will work better for their customers. It sometimes works out, but often doesn’t. When you start A/B testing, you should be prepared to throw all the boardroom conjecture out the window: the data (properly interpreted, anyway) doesn’t lie. It’s worth telling your boss this.

There are a number of elements that you can focus on in your testing. The variations and content that go into the test are up to you, but which one works best (whether you like it or not) is up to your customers.

Some of the elements you should consider split testing are:

Headlines

Your main headline is usually a succinct rendering of your core value proposition. In other words, it sums up why anyone would want your product or service.

When it comes to testing, consider playing around with the emotional resonance of the wording. You might try a headline that evokes urgency, or one that fosters curiosity. Similarly, experimenting with the length of the headline can impact performance—while shorter headlines are generally punchier, a longer headline can convey more information and potentially draw readers in more effectively. And don’t overlook the potential impact of font style and size—sometimes a change in typography can refresh the entire feel of a page.

Here are some other approaches you can try when testing your headline:

  • Try a longer versus shorter headline
  • Express negative or positive emotions
  • Ask a question in your headline
  • Make a testimonial part of your headline
  • Try different value propositions

Calls to action (CTA)

On a landing page or web page, your call to action is a button that represents your page’s conversion goal. You can test the CTA copy, the design of the button, and its color to see what works best. Try making the button bigger, for example, or make it green for go, blue for link color, or orange or red for an emotional reaction.

You can also explore different verb usages to incite action. (For instance, “Join” might have a different impact compared to “Discover”.) Remember, though, the copy should speak to the value of your offer—the benefit someone will get from clicking.

Hero sections

A hero shot is the main photo or image that appears above the fold on a landing page or web page. Ideally, it shows your product or service being used in a real-life context, but how do you know what hero shot will convert for which landing page? Do you go with the smiling couple? Or maybe a close-up of the product itself? Experiment and find out.

You might test different imagery styles—such as photographic or illustration—to see which one resonates more with your audience. Similarly, experimenting with the size and orientation of the image can help shape visitors’ focus. Play around with the color schemes to evoke different emotions and set a specific tone.

PRO TIP: Just like your headline and supporting copy, the hero shot is subject to message match. If your ad mentions mattresses, but your landing page’s hero shot shows a rocking chair, then you’ve likely got a mismatch.

Lead forms

Depending on your business, you might need more than just a first name and an email—but the number of fields can be a decisive factor in user engagement.

You might test a form with only essential fields against one with additional, optional fields to gauge your visitors’ willingness to provide more information. Additionally, experimenting with different types of fields—such as dropdowns or open fields—can offer insights into user preferences and potentially increase form submissions.

If you have a particularly strong need for data, try running a test with different form lengths. This way, you can make an informed decision about what abandonment rate is acceptable when weighed against the extra data produced.

Copy

Time to put on your “content creator” hat.

For the copy of your campaign (whether on a landing page or in an email), you might consider testing different writing styles. For example, a conversational tone might resonate better with your audience than a formal tone. It could also be beneficial to experiment with the inclusion of bullet points or numbered lists to enhance readability and engagement.

Often the biggest factor is long copy versus short copy. Shorter is usually better, but for certain products and markets, detail is important in the decision-making process. You can also try reordering features and benefits, or making your language more or less literal.

There are lots of opinions on what works and what doesn’t, but why not test it and see for yourself?

PRO TIP: If you’re an Unbounce user, you can use Smart Copy to generate conversion-optimized copy automatically with AI. Blank pages are truly a thing of the past.

Layout

The layout of your landing page or email can completely change the visitor experience. You might try a layout that emphasizes visual elements over text—or vice versa—to see which is more effective.

Will a CTA on the left outperform one placed on the right? And does that testimonial video do better if you put it at the bottom of the page or the top? Good question. Sometimes changing the layout of a page can have major effects on your conversions.

Experimenting with navigation can also impact performance. Perhaps a sticky navigation bar works better, or maybe a sidebar navigation is more user-friendly. The goal should be to create a layout that is both aesthetically pleasing and facilitates a seamless user journey.

PRO TIP: If you want to experiment with layout, move one thing at a time and keep all other elements on the page the same. Otherwise, it’ll be difficult to isolate the changes that work.

Virtually anything else you can think of

The reality is, you can test anything you want.

The most obvious (and usually highest-impact) elements are headlines, CTA buttons, and the others covered above, but the list of possibilities doesn’t end there.

Want to test 10 testimonials on-screen vs just one big one?

Do it.

Maybe you’re real confident that a “dark mode” approach will work better?

Test it.

If you can think it, you can test it. After all, the worst thing that can happen is… the challenger variant loses and you stick with the control. No sweat. Even a failed test will give you more context into what your ideal users do or don’t seem to value.

If you’re interested, we’ve built out a massive checklist of A/B test ideas to help inspire your next test—big or small. Download the guide for free today.

How do you run an A/B test?

Cool, so now you know the basics of A/B testing. But how exactly do you go about setting up and running an A/B test to improve your campaign performance?

Here’s the step-by-step process of running an A/B test, from the initial stages of identifying your goals and formulating hypotheses, to creating variants and analyzing the results.

Step 1: Identify your key metric and goal

Before you start A/B testing your campaign, you should get super clear on the outcome you’re hoping to achieve. For example, you might wanna increase your ad clickthrough rate or reduce your landing page bounce rate. (Whatever metric you wanna influence, though, remember that the ultimate aim of A/B testing is to increase your campaign conversion rate.)

A clearly-defined goal will help you shape the hypothesis of your A/B test. Say you’re getting lots of traffic to your landing page, but visitors aren’t clicking on your CTA—and you wanna change that. Already, you’ve narrowed down the number of variables you might test. Could you improve CTA clicks by making the button bigger, or increasing the color contrast? Could you make the CTA copy more engaging?

Once you’ve got your testing goal, forming a hypothesis is a whole lot easier.

Step 2: Form your hypothesis

The next step is to formulate a hypothesis for you to test. Your hypothesis should be a clear statement that predicts a potential outcome related to a single variable. It’s essential that you only change one element at a time so that any differences in performance can be clearly attributed to that specific variable.

For example, if you wanna improve the clickthrough rate on your landing page CTA, your test hypothesis might be: “Increasing the color contrast of my CTA button will help catch visitors’ attention and improve my landing page clickthrough rate”. The hypothesis identifies just one variable to test, and it makes a prediction that we can definitively answer through experimentation.

Make sure that your hypothesis is based on some preliminary research or data analysis so that it’s grounded in reality. (We already know high-contrast CTA buttons get more clicks, for instance.) Whatever you test, you still wanna be reasonably confident that it’ll be effective for your audience.

Step 3: Create your variants

Creating variants means developing at least one new version of the content or element you wanna test, alongside your control version. In a standard A/B test, you’ll have two variants: variant A and variant B.

“Variant A” is typically your control variant—the original version of whatever you’re testing. Since you already know how this version is performing, it becomes your baseline for any results. This is your “champion” by default. It’s the one to beat.

“Variant B” should incorporate whatever change to your variable you’ve hypothesized will improve performance. If your hypothesis is that a different color CTA button will get more clicks, this is the variant where you’ll make that change.

Although most A/B tests have just two variants, you can test additional variants (variant C, variant D) simultaneously. But be aware that more variants mean it’ll take longer to achieve statistical significance—and if you introduce any additional variables to the test (like a different page headline), it can become almost impossible to say why one version is outperforming another.

Step 4: Run your test

Once you’ve got your variants, you’re ready to run your A/B test.

During this phase, you’ll divide your audience into two groups (or more, if you’ve got more than two variants) and expose one group to variant A and the other to variant B. (Ideally, the groups should be assigned completely at random to avoid any bias that might influence the results.)

It’s essential that you run your test for long enough to reach statistical significance. (There’s that term again.) Essentially, you need to make sure you’ve exposed each variant to enough people to be confident that the results are valid.

The duration of your test can depend on things like your type of business, the size of your audience, and the specific element being tested. Be sure to calculate your A/B test size and duration to ensure your findings are accurate.
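For a rough sense of the math behind those calculators, here’s a sketch of the standard two-proportion sample size formula in Python. The baseline conversion rate and the lift you’re hoping to detect are example numbers, so swap in your own.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate (two-sided test, two variants)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: 4% baseline conversion rate, hoping to detect a 20% relative lift
visitors_needed = sample_size_per_variant(0.04, 0.20)
print(f"~{visitors_needed:,} visitors per variant")
# Divide by your average daily traffic per variant to estimate test duration
```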

Step 5: Analyze your results

After you’ve got a large enough sample size, it’s time to analyze the data you’ve gathered. This means scrutinizing the metrics relevant to your variable—clickthrough rate, bounce rate, conversion rate—to determine which variant performed better. The winner becomes your new “champion” variant.

Say, for example, you’re testing a new CTA button color on your landing page to see if it gets more clicks. You’d wanna compare the button’s clickthrough rate across your page variants and see which one is getting more visitor engagement.

Depending on what you’re testing, you might need to use analytical tools to dig into the data and extract actionable insights. This step is critical—it not only helps you identify the winning variant, but can also provide valuable information you can leverage in future marketing campaigns.

Step 6: Implement the winning version

The final step of your A/B test is to implement your learnings across your campaign. With these new insights, you can confidently roll out your “champion” variant and expect higher overall performance. Nice.

But the process doesn’t stop here. You should keep monitoring the performance of your changes to make sure they’re getting you the expected results. You should also already be thinking about what to test next, looking for new ways to improve your performance.

Which brings us to the final step:

Step 7: Run another A/B test (and then another)

After you’ve implemented the winning version and wrapped up your initial test, the best thing you can do is simple:

Run another test.

Quite literally speaking, you can and should always be testing something. There’s no reason not to. It doesn’t matter if your last 10 tests all fell flat on their faces or all crushed your loftiest expectations—just keep testing.

Test your H1.

Test your buttons.

Test your form length.

Test your hero images.

Test the testimonials you’re using.

Test your section order and overall layout.

Just keep going. Take what you learn from one test on one page and apply it to another. Then take what you learn from that test and apply it to the next, and so on.

Optimization is a mindset. Never stop testing.

A/B testing metrics to measure

First, the metrics you’ll already be familiar with.

More often than not, conversion rate will be the ultimate metric you’re looking to improve when you A/B test.

It may be more indirect at times (i.e. a test focused on improving a leading indicator that will likely result in more conversions) but the end goal will remain the same:

Get more conversions.

Conversion rate metrics

Conversion rate can then be split into three primary categories, depending on what the desired action is on the page:

1. Form submission rate

# of visitors that submit a form / # of page visitors

For lead capture pages with a form directly on the page, the conversion action you’re optimizing toward will be the number of form submissions. If you can improve the rate at which form submissions happen, you’re moving in the right direction.

2. Purchase rate

# of visitors that purchase / # of page visitors

For ecommerce businesses and product pages especially, the desired action on the page will be to complete a purchase. Depending on your checkout process, you may use add to cart rate as an alternative metric here, but keep the end goal in mind if you do—driving purchases.

3. Click-through rate (CTR)

# of visitors that click / # of page visitors

Lastly, the “catch-all” metric for pages where the desired action is simply clicking something, typically a button. If the rate at which visitors click your call to action (CTA) button goes up, chances are that’s a good thing.
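To make the arithmetic concrete, here’s a tiny sketch that calculates each of the three rates above from raw counts. The numbers are hypothetical.

```python
def rate(actions: int, visitors: int) -> float:
    """Conversion rate = desired actions divided by total page visitors."""
    return actions / visitors if visitors else 0.0

visitors = 5_000  # hypothetical traffic to one page variant during the test

print(f"Form submission rate: {rate(215, visitors):.2%}")  # 215 form fills
print(f"Purchase rate:        {rate(140, visitors):.2%}")  # 140 completed purchases
print(f"Click-through rate:   {rate(480, visitors):.2%}")  # 480 CTA clicks
```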

User experience and visitor behavior signals

In addition to the primary metrics above, you can also test against plenty of user experience-focused metrics to optimize the things that indicate a conversion is likely.

For example, if you can improve the percentage of users who start filling out a form, you’ll likely see a lift in your overall conversion rate as a result.

  1. Time on page: Average time spent by users on a specific page.
  2. Form start rate: Percentage of users who start filling out a form.
  3. Form abandonment rate: Percentage of users who start filling out a form but don’t complete it.
  4. Pages per session: Number of pages a user visits in a single session.
  5. Session duration: Total length of time a user spends on the site during a single visit.
  6. Scroll depth: How far down the page users scroll, indicating content engagement.
  7. Bounce rate: Percentage of visitors who leave after viewing only one page.
  8. Exit rate: Rate at which visitors leave from a specific page.
  9. Navigation path analysis: Common paths taken through your site, indicating user flow.
  10. Interactive element engagement: User interactions with elements like sliders, calculators, or quizzes.
  11. Video engagement metrics: Includes views, play rate, and average watch time.
  12. Heatmap analysis: Visual data on where users click, move, and scroll on your pages.
  13. Page load time: Speed at which your pages become fully interactive.
  14. Mobile responsiveness score: How well your site adapts to mobile devices.

Marketing campaign, funnel, and business metrics

Beyond the direct on-page metrics, you can also monitor plenty of higher level metrics related to overall campaigns, lead quality, and return on investment.

For each of the metrics below, you could segment users to compare those who converted through a given landing page against everyone else, and optimize accordingly. For example, you may run an A/B test that ultimately results in a 50% lower conversion rate but a 200% higher lead quality score, which should still go down as a win in the record books (there’s a quick worked example of this after the list).

  1. Lead quality score: Average quality of the leads your page is generating.
  2. Funnel conversion rates: Conversion rates at different stages of the marketing funnel.
  3. Cost per lead (CPL): Cost to acquire a lead, indicating campaign efficiency.
  4. Return on ad spend (ROAS): Revenue generated per dollar spent on advertising.
  5. Email open rate: Percentage of recipients opening your emails.
  6. Email click rate: Percentage of recipients clicking links within your emails.
  7. Customer acquisition cost (CAC): Total cost to acquire a new customer, indicating campaign efficiency.
  8. Organic vs paid traffic conversion rate: Conversion rates from organic versus paid sources.
  9. Referral traffic conversion rate: Conversion rate of visitors coming from referral links.
  10. Retention rate: Percentage of customers who continue to purchase over time, indicating long-term value.
  11. Loyalty program engagement: Participation and engagement in loyalty programs, indicating customer loyalty.
  12. Net promoter score (NPS): Customer willingness to recommend your product/service based on their segment.
  13. Average time to conversion: Average time it takes for a lead to become a customer.
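Here’s that “lower conversion rate, higher lead quality” scenario as a quick worked example. The numbers are hypothetical, but they show why segmenting by lead quality can flip the verdict on a test.

```python
# Hypothetical test: the challenger converts fewer visitors,
# but the leads it produces close at three times the rate
visitors = 10_000

pages = {
    "Control":    {"conversion_rate": 0.04, "lead_to_customer_rate": 0.05},
    "Challenger": {"conversion_rate": 0.02, "lead_to_customer_rate": 0.15},
}

for name, page in pages.items():
    leads = visitors * page["conversion_rate"]
    customers = leads * page["lead_to_customer_rate"]
    print(f"{name}: {leads:.0f} leads -> {customers:.0f} customers")

# Control:    400 leads -> 20 customers
# Challenger: 200 leads -> 30 customers (half the leads, 50% more customers)
```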

How to analyze A/B test results

After your test has been active for a little while, it’s time to start analyzing. How long your A/B test should run will depend on a ton of different factors, so how early you dive into the data will shift on a case-by-case basis.

Once you do start to evaluate your test, here’s a simple framework to follow:

Review your goal metric or metrics

First things first, analyze how your primary metric is trending.

  • Improving: That’s a good start.
  • Declining: Not ideal, but it’s not game over yet.

If you set a target during the A/B test planning phase, also assess your progress toward that target. Did you reach the target already? Are you close to hitting it but just slightly behind? Are you miles away and moving in the wrong direction?

All you’re looking for right now is the top-line data. We’ll analyze it more granularly next.

Look into related metrics that may have changed

As we’ve covered plenty already, even though you’re optimizing for one metric, there are dozens of other metrics that may be impacted by your test as well.

Before you call an A/B test a winner or loser based solely on the primary metric, spend a bit of time exploring other metrics that may be related.

Think time on page, lead quality, cart checkout rates, and more.

Ask yourself a few high-level questions:

  • Is anything significantly different with the metric?
  • Are there any red flags to be aware of before I end the test?
  • What else could I test based on what I’m seeing?

Confirm or reject your hypothesis

After you’ve dug into the data, if your test results are statistically significant, you should be in a position to “call” your hypothesis. You have two options at this stage:

Confirm your hypothesis: What you thought would happen, happened. You were correct. Your challenger beat the control and you can declare the challenger as the champion.

Reject your hypothesis: What you thought would happen, didn’t happen OR the results are inconclusive. If you were incorrect and performance decreased, you can declare the control variant as the champion. If the results are inconclusive, you can choose to extend the test or declare a winner and launch a new test.

Come up with 5-10 new questions or ideas

This might just be the most important step of them all.

Before you close the book on an A/B test, come up with another 5-10 new ideas or questions that could be explored based on what you learned from this test.

They don’t need to be perfect.

Honestly, they don’t even need to make sense yet.

What matters most is that you’re continually “feeding the machine” that is your A/B testing engine. If you come up with five new ideas at the end of every A/B test you run, the number of ideas in your system is going to compound—quickly.

Important note: This doesn’t mean you must launch every single one of these ideas.

You may only launch 10% of the ideas you come up with, and that’s perfectly okay. Just keep building your idea bank and you’ll never get stuck staring at a blank “create a new variant” screen again.

Always. Be. Testing.

Common A/B testing mistakes to avoid

Marketers often make mistakes when A/B testing—they’ll stop the test too soon, jumping to conclusions before they’ve got the necessary data to make an informed decision. When you run your own test, make sure to avoid these common pitfalls.

Optimizing for the wrong audience

This one can be the silent killer, unfortunately.

It’s the silent killer because on the surface, your A/B test results can look fantastic. We’re talking click-through rates through the roof and average cost per lead hitting record lows.

What’s not to love, right?

If you’re optimizing for the wrong target audience, you may just be opening the door to visitors that have zero intent to actually pay for what you’re offering. That’s right, your lead list could be full of tire-kickers.

Let’s use an ecommerce example to illustrate this:

Audience A: The window shoppers

  • They love to look around and browse
  • They’ll click “add to cart” like there’s no tomorrow
  • They have no intentions of actually checking out though

Audience B: The serious customers

  • They know what they want and don’t browse much
  • They do deeper research before adding something to their cart
  • They have every intention of buying if the product is right for them

If you’re only optimizing for add-to-cart rate without any measures beyond that, it’s easy to fall into the trap of optimizing for the window shoppers and leaving the serious customers behind (which you obviously don’t want to do).

When you’re evaluating test results in particular, make sure you don’t have blinders on. Understand who your ideal target audience and ideal buyers are, understand what your ultimate business conversion action is, then optimize for what gets them to act.

Testing too many things at once

We touched on this one briefly in the beginning when we compared A/B testing to A/B/n testing and multivariate testing.

If you try testing too many things at the same time, you run the risk of muddying up your data. In other words, if you test 10 different variables at once and performance improves, how do you know which variables actually led to the increase?

Now you may be thinking…

“Who cares? Performance is up so it’s a win regardless.”

You wouldn’t be incorrect to say that, but consider this:

What if, of the 10 changes you made, two were incredibly positive and the other eight were negative?

Performance should never feel random. A/B testing is the gateway to understanding why certain things perform well so you can leverage those insights and keep improving from a solid foundation.

Declaring a winner too early 

Marketers often make the mistake of stopping the test too soon and jumping to conclusions before they’ve got the necessary data to make an informed decision.

It can be tempting to roll out a winning variation as soon as you start to see a lift in conversions, but it’s crucial that you don’t jump to conclusions before you see the bigger picture. In the words of CRO expert Michael Aagaard:

You need to include enough visitors and run the test long enough to ensure that your data is representative of regular behavior across weekdays and business cycles. The most common pitfall is to use 95% confidence as a stopping rule. Confidence alone is no guarantee that you’ve collected a big enough sample of representative data. Sample size and business cycles are absolutely crucial in judging whether your test is cooked.

Michael himself runs tests for four full weeks, with a minimum of 100 conversions (preferably closer to 200) on each variant and a 95% confidence level being prerequisites for declaring a champion.

Despite his own methodology, Michael stresses that there’s no one-size-fits-all rule for declaring a champion, as there are many contextual factors that make each test unique. Focus on covering both a large enough sample size and a long enough duration to ensure that you’re getting a complete view of the page’s performance before calling it.
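If it helps to see those guardrails spelled out, here’s a rough pre-flight check based on the thresholds above. Treat it as one illustrative rule of thumb, not a universal standard, and adjust the numbers to your own context.

```python
def ready_to_call_winner(days_running: int,
                         conversions_per_variant: list[int],
                         confidence: float) -> bool:
    """Rough stopping-rule check: at least four full weeks, at least
    100 conversions on every variant, and at least 95% confidence."""
    return (
        days_running >= 28
        and all(c >= 100 for c in conversions_per_variant)
        and confidence >= 0.95
    )

# Example: three weeks in, 96% confidence, but one variant has only 85 conversions
print(ready_to_call_winner(21, [130, 85], 0.96))  # False: keep the test running
```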

A/B testing tools

We obviously have a slight bias here, but our top pick is…

*Pretends to be shocked*

Before we get into why, know that we’ve also published a complete breakdown of the best A/B testing tools on the market today. There are 16 different tools on the list (Unbounce included).

If you’re serious about wanting to improve your conversion rates and maximize the value of every marketing campaign you run, definitely take a look at the full list.

Here’s why we’re confident in Unbounce as a landing page A/B testing tool for you:

  • A/B test anything: Drag-and-drop testing for headlines, buttons, images, and more.
  • Testing type options: Run A/B tests, A/B/n tests, and multivariate tests with unlimited variants for landing pages, popups, and sticky bars.
  • Drag-and-drop builder: Build beautiful, responsive landing pages even if you’re not a designer or developer.
  • Landing page templates: You don’t need to start your A/B tests with a blank page. Unbounce has a library of templates we’ve built up over more than a decade.
  • AI conversion optimization: Tap into AI to optimize your landing pages more effectively and efficiently. Once you’ve found a winning variant, AI can automatically send visitors to the page where they’re most likely to convert.
  • Native analytics: You can analyze your A/B tests directly inside the Unbounce platform.
  • Reporting dashboard: With confidence intervals and real-time reporting, you can clearly identify the winning version of your landing page.
  • One-click A/B testing: Unbounce’s no-code A/B testing tool is built for marketers with no time to ask for help (no designers or devs needed) so you can own experiments from start to finish.
  • Manual traffic allocation: Not everything has to be 50/50. Split traffic across your page variants however you like, whether it’s 60/40 or 10/30/50.
  • Natively integrated testing: Create a great visitor experience without UX disruptions (like page loading delays or flickers) with server-side loading.

A/B testing FAQs

How long should I run an A/B test?

Calculating your sample size (as we covered in step 4) is also a great way to estimate how long you should plan to run your A/B test. The reality is that it depends on a number of variables, and there’s no one-size-fits-all answer here.

Some A/B tests may lead to a statistically significant answer within a matter of days if there’s enough traffic and the impact is significant.

Can I test more than one thing at a time?

Yes, you can, but you should be aware of the potential drawbacks of doing so.

We’ve covered them in depth throughout this guide, but the main thing to watch out for when testing multiple things at once is that it can be tough to isolate which changes actually led to the improvements or declines.

Can I run an A/B test with three or more variants?

Similar to the question above, the answer is yes, but with some words of caution. The more variants you introduce into an A/B/n test, the longer it will take for you to achieve statistically significant results.

What’s a typical A/B test success rate I should aim for?

Generally speaking, you should aim for 20-30% of your A/B tests to produce a statistically significant winner, on average. This is more of an art than a science though.

If your success rate is greater than 50%, it’s likely a sign that your control pages are falling short on a number of landing page best practices and could be improved significantly without needing to A/B test.

If your success rate is 10% or less, you may be setting unrealistic targets or going too granular with your tests (i.e. changing a single word in a subheading).


Start A/B testing today

The best time to launch your A/B test was [insert some number of days or months or decades here] ago.

The next best time is today.

We’ve said it plenty, and we’ll say it again—you should always be testing.

The downside is virtually zero, and the upside can be massive. You get to experiment with any idea under the sun. If it doesn’t work? No problem, on to the next. If it does work? You get to enjoy the rewards (which in this case means more conversions!)

The ball’s in your court now. You got this!
