What is A/B Testing and Why Should I Care?

If you’ve spent any time around Conversion Rate Optimization (CRO), you’ve probably seen A/B testing mentioned, heard that it’s important, and been told that you should probably be doing it.

So what is A/B testing?

Do you remember back in middle school science class when you first learned what a basic experiment was? To test something sciencey, you’d have one setup with an expected result (the “control”), and another setup where a single variable was changed, so you could measure whether or not that variable caused a different result. This very simple kind of experiment is essentially what an A/B test is.

A/B tests are typically run with two different versions of a website, landing page, app, email, or advertisement. When running an A/B test, you simultaneously serve up two different versions of something, let’s say a landing page. These landing pages will be identical except for one single difference. Maybe they have a different headline, or a different sized call-to-action button. Whatever it is, you let the two versions run, with half your site’s visitors getting version A and the other half getting version B, then compare the results to see which version had a better conversion rate. That’s it.
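To make the mechanics concrete, here is a minimal sketch in Python of the bookkeeping behind that 50/50 split. The hashing trick, the tally dictionaries, and the function names are illustrative assumptions, not a real testing tool:

```python
import hashlib

def assign_variant(visitor_id):
    """Deterministic 50/50 split: hashing the visitor ID means a
    returning visitor always sees the same version during the test."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Tally visits and conversions per variant as traffic comes in.
visits = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}

def record_visit(visitor_id, converted):
    variant = assign_variant(visitor_id)
    visits[variant] += 1
    if converted:
        conversions[variant] += 1

# Once the test has run long enough, compare conversion rates.
record_visit("visitor-001", converted=True)
record_visit("visitor-002", converted=False)
for variant in ("A", "B"):
    rate = conversions[variant] / visits[variant] if visits[variant] else 0.0
    print(f"Variant {variant}: {conversions[variant]}/{visits[variant]} = {rate:.2%}")
```

Hashing the visitor ID, rather than flipping a coin on every page load, keeps each visitor in the same bucket across visits, which is what lets the two conversion rates be compared fairly.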

Why run A/B tests?

A lot of times, we think we have a better idea of what our customers want or what our visitors will like than what they actually end up preferring. A/B testing helps remove some of the guesswork from planning new ad campaigns or launching a website redesign. There’s no guarantee that these kinds of tests will reveal any significant difference between the two variants, and you may have to run several A/B tests before you see actionable results. But if changing the wording of your ad copy can raise conversions by two or three times, that can have a huge impact on your bottom line, and that’s definitely something you want to be aware of.

What should I test?

You can test basically anything you want on your website or in marketing materials like emails or ads. You can even test different versions of apps. Anything that can be changed can be tested. That doesn’t, however, mean all of those things SHOULD be tested.

The elements of a website most likely to show results worth testing are:

  • Headlines
  • Product Descriptions
  • Images and Graphics
  • Calls-To-Action

Really, anything you feel has a direct impact on your visitors’ conversion experience is probably worth testing. This is going to be a bit different for everyone.

What am I actually testing for?

Traditionally, you are looking for increases in conversion rates. But that means you first have to know what counts as a conversion. Is it sales? Newsletter signups? Podcast downloads? Make sure you know which conversion you are looking to impact before you try to do any testing around it.

In fact, it’s important to have an overall plan when embarking on any A/B testing. As with our middle school science experiment, you should have a hypothesis about what you think is going to happen during the test, then set out to prove or disprove that hypothesis.

Also, a quick note about setting baselines. Science experiments have a “control” group, or a baseline with expected results. If you are changing a pre-existing element on your site, you should already have data on its current conversion rate. This is your baseline. So you can run your A/B test against the existing element if you want, with A being the existing version and B being the new one. Or, since you already have a baseline, you can make both A and B new variations, each tested against the existing baseline. For something like an email or a marketing campaign, however, you might not have an existing element to test against. In these situations, it may be useful to first run an A/A test, serving two identical versions, in order to establish that baseline and figure out how much natural variation exists in your results.
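As a rough illustration, here is what reading an A/A test might look like; the visit and conversion counts below are made-up numbers, not real data:

```python
# A/A test: both arms show the identical page, so any gap between
# the two measured rates is natural variation, not a real effect.
arm_1 = {"visits": 5000, "conversions": 150}  # hypothetical counts
arm_2 = {"visits": 5000, "conversions": 163}

rate_1 = arm_1["conversions"] / arm_1["visits"]  # 3.00%
rate_2 = arm_2["conversions"] / arm_2["visits"]  # 3.26%

# The gap between identical pages gives a feel for the noise floor:
# a future A/B difference smaller than this is probably not meaningful.
print(f"Arm 1: {rate_1:.2%}  Arm 2: {rate_2:.2%}  gap: {abs(rate_1 - rate_2):.2%}")
```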

A/B testing isn’t perfect, and it certainly isn’t foolproof. By its nature, it can be all too easy to mistake the correlation of a particular result for causation. In other words, just because more people converted on version B than version A doesn’t necessarily mean it was because of the change you made. This is why it’s important to leave your tests running long enough to collect a meaningful number of visitors before drawing any conclusions. That said, with the stakes often being relatively low for these kinds of changes, it’s worth doing a lot of testing to see if you can make some changes for the better.
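One common guard against reading noise as a real effect is a statistical significance check before declaring a winner. Below is a sketch of a standard two-proportion z-test using only Python’s standard library; the conversion counts are hypothetical, and the p < 0.05 cutoff is just the conventional choice:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the gap between two conversion
    rates bigger than chance alone would plausibly produce?

    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (math.erf avoids scipy).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: B looks better, but is the gap significant?
z, p = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=151, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 is a common cutoff
```

If the p-value comes out above your cutoff, the honest conclusion is that the test hasn’t shown a difference yet, and you need more traffic or a bolder change before calling a winner.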

Need help with your overall marketing strategy? Contact the team at 10twelve today.