25 A/B testing mistakes that are killing your conversion rates

It has been referred to as “the biggest marketing blunder of all time.”

In 1985 the Coca-Cola Company introduced New Coke, a modified formula created to combat the growing popularity of its chief rival, Pepsi. Coke gambled that this new formula would be loved even more than the original flavor.

They couldn’t have been more wrong. After being swamped by a tidal wave of negative responses from the public, Coke brought back the original formula as “Coca-Cola Classic” and eventually abandoned New Coke entirely.

Some mistakes are huge (maybe do a pulse check before fundamentally changing your core product, eh, Coke?), while others are small and easily fixed. However, there’s one thing all mistakes have in common: They contain valuable lessons that can lead to brighter outcomes—as long as you reflect and learn from them.

In this post we’re covering the most common A/B testing mistakes, as well as how to avoid them and get the results—and marketing success—you’re aiming for.

TABLE OF CONTENTS

  1. Pre-A/B testing mistakes
  2. Mid-A/B testing mistakes
  3. Post-A/B testing mistakes
  4. Which A/B testing mistakes are you going to avoid?

Pre-A/B testing mistakes to avoid: how to plan properly

A/B testing might sound like a straightforward game of comparing A to B, but if you’re not careful, you could end up with a big ol’ mess of useless data. Here’s the lowdown on some rookie mistakes that can happen before you even launch your test.

1. Not having a clear hypothesis

Ah, the gut feeling. It’s as tempting as that third cup of coffee, but just as risky. Starting an A/B test solely on a hunch or guess is very likely to lead to untrustworthy results and a heaping pile of disappointment.

How to do it right

To nail useful A/B testing results, you need a rock-solid A/B test hypothesis, which is a clear, testable statement that predicts how changes to a landing page or element will impact user behavior. Here’s how to create a good hypothesis:

  • Dig into your web analytics: Look for patterns in user behavior that you’d like to change.
  • Guess what’s stopping them: Use your Sherlock skills to speculate why users aren’t converting.
  • Craft your hypothesis statement: Make it clear what you’re testing and why it matters. For example: “Shortening the signup form from five fields to three will increase form completions among mobile visitors.”

And always ask yourself the golden questions: 

  • Who’s visiting my landing page?
  • Where did they come from? 
  • Why are they here, and why should they care about what I’m offering? 
  • What’s the secret sauce that could convert them? How do I sprinkle that sauce to boost conversions?


Recommended reading: How to formulate a smart A/B test hypothesis (and why they’re crucial)

2. Failing to segment your audience properly

One of the most common pre-testing blunders is failing to segment different populations in your A/B test. It’s tempting to lean on the overall conversion rate (CVR) as your go-to metric because it’s straightforward. But this shortcut can lead to skewed results and misguided decisions, leaving you with a pile of unqualified leads and wasted resources.

When you focus on the overall CVR, you ignore the nuances of your diverse audience. Different segments—new visitors, returning users, mobile users, and so forth—each have unique behaviors and preferences. 

Without segmenting your traffic, you might end up optimizing for visitors who aren’t your primary target, boosting conversion rates among less valuable segments while missing out on potential gains from your target audience. This misalignment means your optimizations won’t drive the impactful results you’re aiming for.

How to do it right

  • Build or identify the landing page: Whether you’re creating a new page or using an existing one, ensure it’s ready for testing.
  • Determine segmentation criteria: Based on your hypothesis, decide how you will segment your audience. This could be based on demographics, behavior, geography, device type, etc.
  • Set up at the ad or marketing-list level: Make sure your test runs exclusively with your chosen audience subset. This setup is crucial for gathering relevant data.
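
If you can export raw visit data from your analytics tool, a quick group-by makes the case for segmentation on its own. Here’s a minimal sketch in Python—the column names and numbers are made up purely for illustration:

```python
import pandas as pd

# Hypothetical export from your analytics tool: one row per visitor
visits = pd.DataFrame({
    "variant":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "device":    ["mobile", "mobile", "desktop", "desktop"] * 2,
    "converted": [0, 1, 1, 1, 1, 0, 0, 1],
})

# The overall conversion rate per variant hides the story...
print(visits.groupby("variant")["converted"].mean())

# ...while segmenting by device shows which audience each variant actually wins
print(visits.groupby(["variant", "device"])["converted"].mean())
```

In a real test you’d feed in thousands of rows, but the shape of the analysis is the same: segment first, then compare.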

After you’ve taken those steps, get even better results with our AI-powered optimization tool Smart Traffic. It can automate and deepen personalization by considering variables like device type, location, and more. It dynamically routes visitors to the page variant most likely to convert, taking the guesswork out of audience segmentation.

By focusing on specific audience segments, you’ll gather stronger insights that lead to more effective optimizations. Plus, with Smart Traffic, you can sit back and let the automation work its magic, ensuring each visitor gets a personalized experience that drives conversions.

3. Running A/B tests on pages that aren’t impactful

Sure, your “About Us” page is awesome, filled with delightful designs and fun, informative copy. But is it worth running an A/B test on? If it isn’t directly driving any conversions, then the answer is “no.” You’re plenty busy already, so it’s best to spend your time and effort on pages that will actually make some magic happen.

Animated GIF of a person saying MAGIC

How to do it right

  • Focus on impactful pages: Test high-traffic pages directly tied to your sales funnel, like product, checkout, or registration pages. These pages are crucial touchpoints in your customer journey.
  • Use your customer journey map: To ensure you’re testing the right pages, review your customer journey map. Identify the steps leading to conversion, such as clicking an advert, reviewing a product page, or reading shipping information, and create hypotheses with this journey in mind.
  • Evaluate page importance: Ask yourself key questions about the page you’re testing: What decisions has the user already made? Are they even on the path to conversion yet?

By aligning your A/B tests with the customer journey, you’ll focus your efforts on the most impactful areas, driving meaningful improvements in your conversion rates. 

4. Running a test before you’ve got enough users

If your page is a ghost town, your A/B test results will be spookily unreliable. Without significant traffic, you’re not going to hit that sweet 95% statistical significance mark, meaning your results are about as trustworthy as a fortune cookie prediction. (The 95% level is an industry standard: it means that if there were really no difference between your variants, a result this extreme would show up less than 5% of the time by chance alone.)

How to do it right

Check your traffic and conversions using a sample size calculator (we’ve got a pretty good one). If you’re running low on visitors, remember: A/B testing isn’t the only trick in your CRO (conversion rate optimization) toolbox. Try using surveys or heatmaps instead, or run holdout experiments, where a small group isn’t exposed to your changes, to help you spot long-term effects.
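
If you’d rather sanity-check the math yourself, the standard two-proportion sample size formula is easy to compute. Here’s a rough sketch in Python using only the standard library—the baseline rate and lift are made-up example numbers:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_cvr, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion test."""
    p1 = baseline_cvr                        # current conversion rate
    p2 = baseline_cvr * (1 + relative_lift)  # rate you hope the variant achieves
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 4% baseline CVR, smallest lift worth detecting is +10% relative
print(sample_size_per_variant(0.04, 0.10))  # roughly 39,500 visitors per variant
```

If that number dwarfs your monthly traffic, that’s your cue to reach for the other CRO tools mentioned above instead of forcing an underpowered test.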

5. Forgetting that customers are connected

Standard A/B testing operates on the assumption that users don’t influence each other, but this isn’t always true in the online world. Users interact, share experiences, and even sway each other’s decisions. 

These interactions can mess with your results, leaving you scratching your head over misleading data. Imagine running a test where Group A sees a new feature and Group B doesn’t. If Group A users rave about it on social media or through word of mouth, Group B users might get influenced, skewing your test results.

Ignoring these interactions can lead to inaccurate conclusions and flawed optimization strategies. If you think your test results are solely based on isolated user behavior, you might miss out on understanding how social influence and network effects impact your data.

How to do it right

To get a clearer picture of user behavior, use network A/B testing to account for group interactions or avoid them altogether. Here’s how you can do it:

  • Isolate test groups: Ensure that users in Group A don’t interact with users in Group B. This might mean creating separate environments or communication channels for each group.
  • Analyze network effects: Use tools that allow you to measure the extent of group interactions. Understanding how much influence users have on each other can help you adjust your strategies.
  • Adjust for social influence: If isolating users completely isn’t feasible, factor in the social influence when analyzing your results. Look for patterns that suggest cross-group interactions and adjust your conclusions accordingly.
  • Monitor social channels: Keep an eye on social media and other communication platforms to see if your test is being discussed across groups. This can give you insights into how users might be influencing each other.

By accounting for these interactions, you’ll gain a more accurate understanding of user behavior, leading to better, more reliable optimization decisions.

6. Not involving your team in A/B tests

One of the most overlooked aspects of A/B testing is failing to involve your colleagues from different departments. When only a few individuals handle the testing process, you miss out on valuable insights and innovative ideas that could significantly impact your results. Collaboration across departments brings in fresh perspectives and diverse experiences, which can lead to more effective and creative testing strategies.

Another reason to keep other teams in the loop is that your test might impact different areas of marketing or down-funnel activities. For example, you might soft-launch a feature that other teams weren’t aware was available to customers, leading to possible confusion.

How to do it right

Involve team members from different departments in the A/B testing process. Here’s how to do it effectively:

  • Cross-department collaboration: Bring in colleagues from SEM, SEO, content, design, and development. Their unique insights can shape more well-rounded and impactful tests.
  • Shared understanding: Help your team understand the A/B testing process by working together on a single test from start to finish. This builds a shared knowledge base and fosters better cooperation.
  • Encourage enthusiasm: When team members see the direct impact of their contributions—such as a significant increase in conversions—they’re more likely to be enthusiastic and supportive of future tests.

Animated GIF of four people doing a high five and saying teamwork

Recommended resource: Paid media experiment brief—use this template to plan, build, and optimize your experiments so you can run more experiments, more efficiently.

Mid-A/B testing mistakes to avoid: how to build better

Now let’s dive into the common mistakes that happen during the testing process and how to avoid them like a pro.

7. Prioritizing beautiful design over conversion

It’s tempting to think that a stunning design will naturally lead to higher conversions. But a beautifully designed page won’t always take the cake. Design is important, but only if it supports the real star of your page: the copy. A visually appealing page might not necessarily resonate with your audience or drive them to take action.

How to do it right

Start with strong, persuasive copy and then create a design that complements it. Always prioritize functionality and clarity over aesthetics. Test different design elements to see which ones truly enhance user experience and drive conversions.

  • Write persuasive copy first: Ensure your message is clear and compelling.
  • Design to support the copy: Create visuals that enhance the user’s understanding and engagement.
  • Test design elements: Validate assumptions about design impact through A/B testing.

8. Assuming testimonials are a magic bullet

Testimonials can be powerful, but they aren’t a guaranteed win. It’s a common mistake to assume that adding testimonials will always boost conversions without testing them. Even elements as trusted as testimonials need to be tested to ensure they’re effective for your specific audience and context.

How to do it right

Approach testimonials with the same scrutiny as any other content. Test different formats, placements, and styles to find out what resonates best with your audience.

  • Test testimonials rigorously: Don’t skip testing just because they’re trusted elements.
  • Experiment with variations: Try different types of testimonials to see which works best.
  • Measure their impact: Use analytics to determine the actual effect on conversions.

9. Losing track of your company’s voice

In the pursuit of higher conversions, it’s easy to lose sight of your brand’s unique voice and personality. Over-optimizing for conversion can sometimes dilute what’s special about your brand, leading to a disconnect with your loyal customers.

How to do it right

Maintain a balance between optimizing for conversions and preserving your brand’s voice. Use A/B testing to find the sweet spot where your brand’s personality shines through while also driving conversions.

  • Preserve your brand’s voice: Don’t sacrifice your unique identity for higher conversion rates.
  • Test language and tone: Find the right balance between engaging your audience and optimizing for conversions.
  • Focus on quality conversions: Optimize for leads that align with your brand values and have long-term potential.

10. Running the test for too short a time

We get it—waiting is hard. But cutting your test short is like leaving a cake half-baked. Without enough time, your results won’t reach statistical significance, and you’re just gambling with your data.

How to do it right

Stay disciplined. Don’t stop your test before hitting the 95% significance mark. Let your A/B testing tool declare a winner, or better yet, wait until you’ve reached your pre-calculated minimum sample size. Patience is a virtue, especially in A/B testing.
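
Many testing tools run some version of a two-proportion significance test under the hood. If you want to gut-check the numbers yourself, here’s a minimal sketch in Python—the visitor and conversion counts are invented for illustration:

```python
import math
from statistics import NormalDist

def two_sided_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: how likely is a gap this big if A and B are truly equal?"""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: variant A converts 400 of 10,000 visitors, variant B converts 460 of 10,000
p = two_sided_p_value(400, 10_000, 460, 10_000)
print(f"p-value: {p:.3f}")  # ~0.037, below 0.05, so significant at the 95% level
```

Stopping early because the numbers “look good” after a few days is exactly how you end up shipping a change that a full sample would have rejected.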

Animated GIF of a man saying sometimes we must have patience

11. Using a testing tool that slows down site speed

Some A/B testing tools can slow your site by up to a second. It may not sound like much, but it’s actually a big deal: according to Google, 53% of users abandon mobile sites that take more than three seconds to load. If your site slows down, your conversion rate will likely drop, skewing your test results.

How to do it right

Run an A/A test first—test your tool without any changes to see if it impacts your site’s performance. This will help you identify any performance issues before your real test starts. 

Tools with server-side loading (just like our own A/B testing tool) can help avoid delays and flickers, so you’ll get the cleanest, most accurate results.

12. Running too many tests at once

Simplicity is key. Running multiple tests simultaneously can muddle your results. While it’s okay to test different versions of a single element, running too many tests at once demands a larger sample size and complicates your analysis.

How to do it right

Limit yourself to the number of tests you and your team can handle without getting overwhelmed, and focus on significant elements like your CTA button or headline. By keeping things streamlined, you’ll gather clearer, more actionable insights.

13. Comparing different time periods

Traffic fluctuates, and comparing results from different periods can mislead you. Comparing a high-traffic Wednesday to a low-traffic Tuesday is like comparing apples to oranges. Seasonal events or external factors can further distort your results.

How to do it right

Run your tests over similar and comparable time periods to get consistent data. For instance, if you’re an ecommerce retailer, don’t compare holiday season traffic with post-holiday slumps—instead, try comparing similar holiday seasons across different years. Consistency is crucial for reliable insights.

14. Changing parameters mid-test

Tweaking your test midway is the quickest route to invalid results. Whether it’s adjusting traffic allocation or altering variables, mid-test changes can skew your data and lead to false conclusions.

How to do it right

Set your parameters and stick to them. If you absolutely need to make changes, start a new test. Consistency ensures that your results are valid and actionable.

Post-A/B testing mistakes to avoid: how to optimize and improve

You’ve run your A/B test, collected your data, and declared a winner. But hold your confetti, because the end of the test doesn’t mean the end of your work. Several common mistakes can still mess with your results after the test is over if you don’t know how to avoid them.

15. Leaving too little documentation

Between waiting for statistical significance and making incremental changes, A/B tests demand a lot of upkeep. That’s why thorough documentation is crucial to squeeze every drop of learning from your experiments. Without proper records, you miss out on valuable insights, waste resources, and lack direction for future tests.

How to do it right

Create a template for documenting internal A/B tests and ensure everyone sticks to it. Your documentation should include:

  • The analytics data that inspired your hypothesis
  • Your assumptions about why this data looks the way it does
  • Audience targeting and segments
  • Your hypothesis, formed as a clear statement and goal
  • The KPIs and metrics you decided to measure
  • The stakeholders who need to be involved
  • Timelines (e.g. how long the tests will run)
  • Your test results, including a discussion and a list of further actions
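
If your team already works in notebooks or scripts, one low-effort way to stick to the template is to log each test as a structured record alongside your analysis. Here’s a minimal sketch in Python—every field name and value below is hypothetical, so adapt it to whatever template you settle on:

```python
# A lightweight, structured record for one experiment (all values are examples)
ab_test_log = {
    "name": "Homepage hero CTA copy test",
    "inspiration": "Analytics showed a steep drop-off between the hero and the form",
    "hypothesis": "Benefit-led CTA copy will lift form starts among mobile visitors",
    "segments": ["mobile", "paid search"],
    "kpis": ["form starts", "signup completions"],
    "stakeholders": ["SEM", "design", "content"],
    "start_date": "2024-03-01",
    "planned_duration_days": 21,
    "result": None,          # fill in once the test reaches significance
    "next_actions": [],      # iterations, pivots, or follow-up hypotheses
}
```

However you store it, the point is that every item in the list above gets filled in, every time.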

16. Not iterating on the test

It’s easy to shrug off a failed hypothesis and move on, especially if you’ve been waiting weeks for the results. But giving up too soon means you’re not fully digesting your learnings.

How to do it right

If your hypothesis was grounded in data but the test didn’t achieve the desired result, tweak your approach and try again. Here are your options:

  • Iterate on the test: Conduct further tests on the page, fine-tuning the original hypothesis.
  • Test new research opportunities: Use your results to identify new hypotheses.
  • Investigate further: If the results are unclear, dig deeper before deciding on your next steps.
  • Pivot: If your data clearly indicates a wrong hypothesis, look for other issues on the page.

17. Making too many changes based on your results

Convincing A/B test results can be persuasive, but overestimating their implications can lead to trouble. For example, if adding a sign-up pop-up increases your mailing list on one page, it doesn’t mean you should plaster pop-ups everywhere. Overdoing it might annoy users and increase your bounce rate.

How to do it right

Go slow and steady with your changes. Remember, an A/B test answers a specific question. Implement changes gradually and monitor their impact carefully before rolling them out site-wide.

18. Measuring results inaccurately

Accurate measurement is as crucial as accurate testing. If you don’t measure results properly, your data becomes unreliable, making it impossible to make informed decisions.

Animated GIF of a man saying I'm not indecisive I just can't decide

How to do it right

Ensure your A/B testing solution integrates with Google Analytics for better control and insights. This way, you can track your test results accurately and gain actionable insights.

19. Blindly following A/B testing case studies

It’s tempting to copy what worked for others, but what works for one company might not work for yours. Every business is unique, and blindly following case studies can lead you astray.

How to do it right

Use case studies as a reference point to generate ideas, but develop your own A/B testing strategy tailored to your audience. This approach ensures that your tests are relevant and effective for your specific needs.

20. Not considering small wins

A 2% or 5% increase in conversion rate might seem insignificant, but small gains add up over time. Ignoring them is one of the biggest A/B testing mistakes you can make.

How to do it right

Embrace small wins. Look at them from a 12-month perspective: even modest but steady, continuous gains compound into a big return over a full year.
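
To make the 12-month math concrete, here’s a quick back-of-the-envelope calculation in Python—the 2% monthly lift is just an illustrative number:

```python
# Hypothetical: you ship a 2% relative lift every month for a year
monthly_lift = 0.02
cumulative_lift = (1 + monthly_lift) ** 12 - 1
print(f"Cumulative lift after 12 months: {cumulative_lift:.1%}")  # ~26.8%
```

A string of “small” wins quietly turns into roughly a quarter more conversions by year’s end.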

21. Not running your A/B tests strategically

Without a clear plan, A/B tests can become a random guessing game. It’s challenging to draw significant conclusions without a strategic approach, leading to wasted resources and fleeting wins.

How to do it right

  • Document learnings: Maintain a record of your test results and insights.
  • Test sequentially: Run one test at a time, analyze results, and build on your learnings.
  • Establish a feedback loop: Regularly share insights with your team to inform product enhancements.

22. Not being aware of validity threats

Even with a decent sample size, confidence level, and test duration, your results can still be invalidated by threats like the instrumentation effect (a flawed tracking setup skews the data), the selection effect (you incorrectly assume that a small portion of the traffic represents all of the traffic), and the broken code effect (the page doesn’t display properly on certain devices or browsers).

How to do it right

  • Monitor every metric: Ensure all goals and metrics are correctly recorded.
  • Watch external factors: Be aware of events that could skew your data.
  • Ensure quality assurance: Test your variations across all browsers and devices.

23. Assuming that “wins” apply across all customer segments

A winning variation for one segment might not work for another. It’s crucial to segment your audience and understand different user behaviors.

How to do it right

As we mentioned before, it’s crucial to segment your users by demographics, behavior, and source when analyzing data. This approach helps you understand how different groups interact with your changes and ensures you’re optimizing for the right audience.

24. Not watching out for downstream impacts

Changes that improve one metric might negatively impact another. It’s essential to consider the overall effect on your site’s performance.

How to do it right

Monitor downstream impacts carefully. Ensure that improvements in one area don’t lead to declines in another. This holistic approach helps maintain a balanced and effective optimization strategy.

25. Labeling an inconclusive test as a “failed” test

Innovator and inventor Thomas Edison once said, “I have not failed. I’ve just found 10,000 ways that won’t work.” The same principle applies to A/B testing results. It’s not about “pass” or “fail”—when you run testing experiments, you’ll get results that are either impactful or inconclusive, but don’t just throw the inconclusive ones away.

How to do it right

Look at your inconclusive results through a different lens: They show you what not to do. These results will reveal the factors that don’t have a strong impact on your conversion, so you can instead focus on what matters.

 

Which A/B testing mistakes are you going to avoid?

A/B testing is a powerful tool in your conversion optimization arsenal, but it’s not foolproof. From planning your hypotheses to analyzing your results, every step of the testing process requires careful attention to avoid common pitfalls. By steering clear of these mistakes—whether they occur before, during, or after your tests—and following A/B testing best practices, you can ensure your experiments yield meaningful, actionable insights.

Ready to start your own A/B testing journey (with fewer mistakes along the way)? Check out our A/B testing tool, which is built into the Unbounce builder. It’s super easy to build your own pages, test them, and analyze the results—optimization and higher conversion rates are just a few clicks away.
