10 steps to set up an A/B test
We get asked about this often, because everyone wants to run A/B tests, but the mechanics of how to do them properly are seldom understood.
Lifted and condensed from our internal Wiki on the subject, here’s how we do it:
1. Pick one variable to test.
Isolate one “independent variable” and measure its performance. If you change more than one thing at once, you can’t be sure which variable was responsible for any change in performance. Email subject lines are an obvious choice. So are calls to action and personalization.
2. Create a ‘control’ and a ‘test’ (a.k.a. ‘challenger’).
Set up the unaltered version of whatever you’re testing as your control. Then build the challenger: the altered email or landing page that you’ll test against your control.
3. Split your sample groups equally and randomly.
For tests where you have more control over the audience, like emails, you need to split recipients into groups that are equal in size and assigned at random. Otherwise, differences between the groups themselves, rather than the variable you’re testing, can drive the results.
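As an illustration, here’s a minimal Python sketch of an equal, random split; the example addresses and the fixed seed are hypothetical stand-ins for your own list:

```python
import random

def split_ab(recipients, seed=42):
    """Shuffle a recipient list, then split it into two equal-sized groups."""
    rng = random.Random(seed)    # fixed seed makes the split reproducible
    shuffled = list(recipients)  # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

group_a, group_b = split_ab(
    ["ann@example.com", "bob@example.com", "cat@example.com", "dan@example.com"]
)
```

Randomizing the assignment, rather than splitting alphabetically or by signup date, is what keeps the two groups comparable.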
4. Determine your sample size.
If you’re A/B testing an email, send the test to a subset of your list that’s large enough to achieve statistically significant results. To work out how large that subset needs to be, use a sample size calculator.
If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size.
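If you’d rather script it than use a calculator, the standard normal-approximation formula for comparing two conversion rates is short. A minimal sketch; the 10% baseline rate and the 12% target rate are assumptions to replace with your own numbers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Recipients needed per group to detect p1 vs. p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for confidence
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a lift from a 10% to a 12% conversion rate
print(sample_size_per_group(0.10, 0.12))  # ~3,800 recipients per group
```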
5. Decide how significant your results need to be.
Statistical significance is a hugely important part of the A/B testing process, and one that’s often misunderstood. In most cases, you’ll want a confidence level of at least 95%, and preferably 98%, before declaring a winner.
Keep in mind that the more radical the change, the larger the effect you can expect, and large effects are easier to detect. It’s small tweaks that demand the most statistical rigor, and the biggest samples, to separate signal from noise.
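To see what that choice costs, here’s a sketch using statsmodels’ power calculations (the 10%-to-12% lift is again hypothetical). The stricter your confidence level, the more recipients each group needs:

```python
from math import ceil
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.10, 0.12)  # hypothetical 10% -> 12% lift
for confidence in (0.90, 0.95, 0.98):
    n = NormalIndPower().solve_power(effect_size=effect,
                                     alpha=1 - confidence,
                                     power=0.8,
                                     alternative="two-sided")
    print(f"{confidence:.0%} confidence: ~{ceil(n)} recipients per group")
```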
6. Make sure you’re only running one test at a time on any campaign.
Testing more than one thing in a single campaign, even if it’s not on the same exact asset, can muddy your results. If you test an email’s subject line while also testing the landing page it links to, you won’t know which change moved the numbers.
7. Use an A/B testing tool.
To run an A/B test on your website or in an email, you’ll need an A/B testing tool. Google Analytics has one. So do HubSpot and Lemlist.
8. Test both variations simultaneously.
When you run an A/B test, run both variations at the same time. Otherwise you may be left second-guessing whether the difference came from your change or from the timing.
9. Measure the significance of your results.
Now it’s time to determine whether your results are statistically significant. To find out, run a test of statistical significance: plug the results from your experiment into an A/B testing calculator, or compute it yourself, as sketched below.
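For conversion rates, the usual approach is a two-proportion z-test. A minimal sketch, with hypothetical conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# hypothetical: 120/1000 conversions for the control vs. 157/1000 for the test
print(ab_p_value(120, 1000, 157, 1000))  # ~0.017, significant at 95% confidence
```

If the p-value is below 1 minus your chosen confidence level (0.05 for 95% confidence), the difference is statistically significant.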
10. Take action.
If one variation is statistically better than the other, you have a winner! If neither is, the test is inconclusive: you haven’t shown that the variable you tested affects results. You can still use that data to shape the next iteration of your test.