Perfection is always in your approach. It’s in the manner that you move toward an untouchable ideal. This, I believe, is what A/B testing is all about: the pursuit of perfection. In the case of A/B testing, perfection would be a conversion rate of 100%, a metric that is impossible to achieve.
Any optimization expert’s dream would be to hit that rate for whatever they promote. However, whether it’s long-form articles, CMS news, or a mesmerizing website design, it’s hard to even approach that ideal.
In order to achieve success with A/B testing, one needs to know what to look for. It’s easy to start — what matters, however, is how you decide to continue. Today, we are going to reveal some of the finer points in achieving statistical validity.
It’s A/B testing to the max. Even if that maximum is so miserably unachievable, we’re set on winning anyway.
Running an A/B test effectively requires you to:
1. Define a specific test subject
2. Define testing groups
3. Perform a P&L analysis
4. Design backend environment to support testing (or use an A/B testing tool)
5. Execute the test
6. Analyze data
7. Interpret findings
8. Implement changes
Each of these steps is important and deserves a closer look.
The first and most important aspect of A/B testing is the control group. You need a baseline result against which to measure any improvement from your test subjects. Pretesting, therefore, becomes vital: gather information about your users, and control the criteria for your subjects so that your results have greater statistical validity.
To begin with, define your ideal test subject based on the data you’ve gathered. Put together a profile of the people whose personal leanings you want to gauge. Doing so will help you draw meaningful conclusions from the results of your testing.
Next, you need to gather as much data on your test subjects as possible. Such data can be collected through your site analytics or purchased from a third-party data vendor.
The kind of data you’re looking for depends on who you’re dealing with.
The list goes on, and this kind of info is just the tip of the iceberg. Ideally, you want a profile that consists of between 10 and 20 separate criteria. The more specific you are, the better your results will be. You’ll build your customer profile based on certain criteria, and anyone who doesn’t meet those criteria won’t be included in your test results.
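That filtering step can be expressed very simply in code. This is a minimal sketch, and the criteria names (`country`, `device`, `visits`) are hypothetical examples, not something prescribed by any particular tool:

```python
# Hypothetical customer-profile criteria -- yours will come from your own data.
PROFILE = {
    "countries": {"US", "CA"},
    "devices": {"desktop", "mobile"},
    "min_visits": 2,
}

def matches_profile(visitor: dict) -> bool:
    """Return True only if the visitor meets every profile criterion."""
    return (
        visitor.get("country") in PROFILE["countries"]
        and visitor.get("device") in PROFILE["devices"]
        and visitor.get("visits", 0) >= PROFILE["min_visits"]
    )

visitors = [
    {"country": "US", "device": "desktop", "visits": 5},   # qualifies
    {"country": "FR", "device": "desktop", "visits": 9},   # excluded by country
    {"country": "CA", "device": "mobile", "visits": 1},    # excluded by visits
]
qualified = [v for v in visitors if matches_profile(v)]
```

Anyone who fails even one criterion is excluded before their behavior is counted, which is exactly what keeps the profile specific.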
Now, assuming you’ve run the numbers and determined that the potential windfall from testing for conversion improvement is greater than the cost of testing (the P&L analysis), you’ll need to think about what aspect of your site should be altered in your test.
You only want to change one feature of your web page at a time. If you test more than one variable, you won’t know what’s responsible for the difference in visitor behavior; in other words, the test result you’ve generated won’t be as valid. A/B tests undoubtedly provide the best answers to your questions, but they do take longer because you’re only manipulating one variable at a time.
So what should you change? The most common subjects of scrutiny are as follows:
Again, the items you’re testing should be specific to your service offering and to your customer profile. Use the data at your disposal to determine what’s most important to your customers, and test your hypothesis. And don’t forget about the details.
Now you just need to speak to your developers and have them implement the test. Make sure they’re sending 50% of the subjects to the control version of the site and 50% to the variant. More importantly, make sure they’re tracking the control and testing groups so you can gather data for a statistically valid finding.
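One common way developers implement that 50/50 split is to hash a stable visitor identifier rather than flipping a coin on every page view, so each visitor stays in the same group across sessions. A minimal sketch (the experiment name here is a made-up example):

```python
import hashlib

def assign_group(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically bucket ~50% of visitors into 'control' and ~50% into 'variant'.

    Hashing the user id with the experiment name keeps assignments stable:
    the same visitor always sees the same version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variant"

# The same visitor always lands in the same bucket:
group = assign_group("visitor-42")
```

Salting the hash with the experiment name also means a visitor’s bucket in one test doesn’t determine their bucket in the next.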
Run the test for a significant period of time, long enough to reach your target sample size. The greater the number of subjects, the more accurate your data. So be patient.
Interpreting results isn’t as simple as picking the winner. The better performer has to reach statistical significance, typically at a confidence level of at least 95%. Beyond that, you need to let the test run its course. Nothing ruins results like impatience.
Interpreting the data is, in all honesty, the arena of experts. It’s easy to pick a winner between the control and the variant: just see which converted more users at a faster rate. What’s not so easy? Determining what you’ve learned about your users. Finding out requires a good deal of math.
The most important thing to remember, however, is that A/B testing is a slow process of incremental improvement. Every test is useful; even hypotheses that fail can teach you something about your customers’ psychology.
What best practices have you implemented in your A/B tests? Share your experiences in the comments.