A/B Testing Methods That Transcend Logic
A/B testing methods are taking on the ‘Butterfly Effect’ these days: a small, localized change to your website or product can trigger a large shift in your conversion rate. Would you turn down the opportunity to boost your conversion rate by 200% just by changing a button color? Would you stay skeptical about making one small change that could double your income?
These triggers seem easy to identify, but they are hard to analyze and harder to turn into the desired results. Almost every conversion optimizer wishes for wins this big. To win big, you have to stand out from the crowd rather than follow the same A/B testing methods your competitors practice.
Have you ever run a test just because you heard how great someone else’s conversion rate was and felt you had to do it too? If you answered ‘Yes’, then you have been blindly following the pack! Here are four A/B testing methods that might sound questionable at first but could make all the difference your site needs to maximize user interactions.
DO IT ONE AT A TIME
Running A/B tests on different pages at the same time makes it tough to get insightful results and raises the chance of a dead loss. Rolling out quick site updates matters, but for accurate, actionable results, don’t gamble while testing your product. Testing multiple pages with many variations simultaneously will not get you conclusive results. It is very easy to dive deep and run A/B tests on every page at once; it sounds level-headed, but it breaches the ABCs of website optimization.
Let’s consider this:
A visitor comes to your home page and becomes part of test A. They move on to the category page and enter test B. They go to a product page: test C. They add a product to the cart: test D. They complete checkout: test E.
At last, they end up buying something, and that’s when you record a “conversion”!
Test pages: A, B, C, D & E
Which variation of which test gets the credit? Which of the tested variations *actually* triggered the user into buying something?
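To see why attribution breaks down, here is a minimal Python sketch. The test names A–E and the visitor index are illustrative assumptions: with five concurrent tests, each funnel-completing visitor experiences one of 2^5 = 32 combinations, so a single recorded conversion is consistent with every one of the five tests “winning” at once.

```python
from itertools import product

# Illustrative sketch: five concurrent tests (A-E), each with a
# control ("0") and a variant ("1"). A visitor who walks the whole
# funnel experiences one combination out of 2**5 = 32.
tests = ["A", "B", "C", "D", "E"]
experiences = list(product("01", repeat=len(tests)))
print(len(experiences), "distinct experiences")  # 32

# One recorded conversion is consistent with all five of this
# visitor's variants "winning" at once, so the tests are confounded.
visitor = dict(zip(tests, experiences[13]))
print(visitor)  # {'A': '0', 'B': '1', 'C': '1', 'D': '0', 'E': '1'}
```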
Moreover, running multiple A/B tests at the same time increases the risk of human error. No matter how much data you crunch, simultaneous tests on the same website or product are bound to produce analysis errors. Whether you are deciding which design or text elements to test, or fixing the sample size of your experiment, the smart way is to test one thing at a time.
Running multiple A/B tests on different pages may get you answers faster, but spotting the winning variation becomes very challenging and you risk arriving at the wrong conclusions.
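If you do test one element at a time, the sample-size question raised above has a standard answer. Here is a minimal sketch using the textbook two-proportion sample-size formula; the 5% baseline rate and the one-point minimum detectable lift are assumed numbers for illustration, not figures from this article.

```python
import math

def sample_size_per_variant(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.05 for 5%)
    lift:     smallest absolute improvement worth detecting (e.g. 0.01)
    z_alpha=1.96 gives 95% confidence (two-sided); z_beta=0.84 gives 80% power.
    """
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(top / lift ** 2)

# Detecting a lift from 5% to 6% needs roughly 8,100 visitors per variant.
print(sample_size_per_variant(0.05, 0.01))
```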
DON’T GIVE UP IF YOUR FIRST TEST FAILS
When does an A/B test fail?
When you haven’t understood your audience well enough
When you haven’t set a proper goal
When you haven’t tested the appropriate elements
&
When you have given up too soon
Most tests fail at first. Repeated failed tests will certainly leave you frustrated and exhausted, with little to no gain for all your effort. But you should understand that the key to winning big is to test exhaustively without giving up. Stay on track even if it means a delay in rolling out a site update.
Trial and error improves your knowledge of user engagement, and in turn your product. Get to know your audience better; learn what works and what doesn’t for them. The insights gained from your first few A/B tests can be used in day-to-day campaigns and other site changes as well. The best insights you can derive come from your user interactions: treat them as a learning curve and keep testing iteratively.
DON’T KNOCK OUT POORLY PERFORMING VARIATIONS WHEN THE TEST IS STILL RUNNING
Calling off a test early because you have found a poorly performing variation doesn’t guarantee conclusive results, and it certainly doesn’t ensure that your failure rate will eventually drop. Some seasoned optimizers may be comfortable tweaking A/B tests while they are still running, but that is not the wise way to run an A/B experiment.
If a variation is underperforming your expectations, keep your fingers crossed and persist with it until the test audience reaches a sizable figure. Bringing the variation to a premature halt and using that time to test a new variation in its place is not going to help in the longer run.
Of course, you will reach conclusions on a faster timescale. But those results can be misleading if the sample size is small. As a professional optimizer, you should not call off a test before a variation reaches 95% or higher statistical significance. This saves you from falling into the trap of false positives: 95% significance means there is only a 5% chance that the observed difference is a fluke. That threshold, combined with a good sample size, makes far more sense than quitting on a variation after it performs poorly with its first thousand visitors.
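As a rough illustration of that 95% threshold, here is a minimal two-proportion z-test; the conversion counts are hypothetical. Note how a gap that looks decisive after the first thousand visitors per variation can still be far from significant.

```python
import math
from statistics import NormalDist

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test.

    Stop the test in favour of B only when the p-value drops below
    0.05, i.e. the 95% significance threshold discussed above.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: B converts 6.5% vs A's 5.0% after 1,000 visitors
# each, a 30% relative lift, yet the result is not significant (p ~ 0.15).
print(ab_p_value(50, 1000, 65, 1000))
```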
Keep an eye on your A/B testing methods regularly and never stop optimizing. Learn the different types of A/B testing goals and how to use them.
DON’T A/B TEST WHEN THERE IS A SPIKE IN TRAFFIC
Let’s take the strategy of famous e-commerce sites as an example:
E-commerce giants like Flipkart, Amazon and Walmart see high traffic during their big sale days (Big Billion Days, Black Friday, etc.). It would be a real waste of time for them to run an A/B test during such a spike, because the majority of those visitors will not behave like your average visitor. That sways your results in a big way.
Imagine for a moment that you spot a 30% increase in conversions for variation A thanks to a large amount of traffic from the sale period. This traffic is the arrive-and-depart kind. Accept that conversions will go down for all variations; worse, the difference in performance between variations will thin out, as the sketch below shows. It’s always advisable to pause an A/B test if you are expecting a major traffic event.
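A quick simulation makes the dilution concrete. All rates and traffic volumes below are assumptions for illustration: variant B genuinely lifts regular visitors from 5% to 6%, while sale-spike visitors convert at 2% no matter which variation they see.

```python
import random

random.seed(7)

def conversions(n, rate):
    """Count simulated conversions for n visitors at the given rate."""
    return sum(random.random() < rate for _ in range(n))

# Assumed rates for illustration: variant B genuinely lifts regular
# visitors from 5% to 6%, but sale-spike visitors convert at 2%
# no matter which variation they see.
regular = 10_000
for label, spike in [("normal traffic", 0), ("during sale spike", 40_000)]:
    a = conversions(regular, 0.05) + conversions(spike, 0.02)
    b = conversions(regular, 0.06) + conversions(spike, 0.02)
    total = regular + spike
    print(f"{label}: A = {a / total:.2%}, B = {b / total:.2%}")
# Both rates fall during the spike, and the gap between A and B thins
# out even though B's real advantage over regular visitors is unchanged.
```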
A/B testing has been the most-used website optimization strategy since before standalone CRO tools existed. Though the practice is almost as old as the internet itself, there is still a lot of debate about the right practices and strategies. The four above are perhaps the most discussed A/B testing methods, and debatable as they are, they form the foundation of a good CRO plan of action.
Check out my YouTube Channel All About B2B Marketing