A/B Testing, Subtract the Statistics

TL;DR: You want to get started with A/B testing.
1. Pick a default scenario and a variant scenario.
2. Present 1 scenario to each customer at random. Record each customer and which was presented.
3. Go to this site.
4. Enter the number of customers and successes for each scenario and click “Submit”.
5. The confidence that the variant outperforms the default and the expected improvement are displayed.
6. If the expected improvement is low, the default might outperform the variant; switch the numbers and repeat step 4.
7. Read on to learn more.

What is A/B testing?
If you haven’t heard of A/B testing before, allow me to give you a brief introduction. A/B testing is a simple, scientific approach to evaluating two different scenarios: it lets you predict the outcome of each scenario while also quantifying the confidence that the prediction is correct. This can provide tremendous value when applied to business decisions, and it’s a lot simpler to get started with than you might imagine. Most articles on the internet describing A/B testing are either statistics-heavy or attempt to sell you a full A/B testing system to integrate with your software product. The truth is that you don’t need to shell out any money, know statistics, or even be in the software industry to get started with A/B testing. You simply need two different ideas to compare.

A simple example
For this scenario, let’s say you have a lawn mowing business. When you sell your lawn care service you often just charge a monthly fee, but you wonder if you can attract more customers by offering a 10% discount and charging for the full summer up front. With an A/B test you can determine which sales strategy will be more successful, and by how much. That knowledge lets you pick the better strategy, and it even tells you whether the discount will be offset by the additional customers, allowing you to maximize your revenue. So, let’s lay this test out: your normal monthly service charge is the default (or control) scenario; the new, discounted upfront rate is the variant; and the hypothesis is that a discounted, upfront rate will attract more customers than the monthly rate.

In this test, for simplicity’s sake, we’re going to split the customers 50/50. Half of your potential customers will be offered the default rate, and half will be offered the upfront rate. All the equipment you’ll need to perform this test is a notebook, a pen or pencil, and a coin. The way this test works is simple: every time you speak to a new customer, you write down their name and you flip a coin. If the coin lands on heads, you offer them the monthly rate and write “default” beside their name. If the coin lands on tails, you offer them the upfront rate and write “variant” beside their name. Then, in either case, you write down whether they signed up for your service or not. This is your data collection phase.
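If you’d rather let a computer play the role of the coin, the data-collection step above can be sketched in a few lines of Python. The customer names here are made up for illustration:

```python
import random

random.seed(7)  # fixed seed so the assignment below is reproducible

# Hypothetical customers; in practice this is the notebook you carry around.
customers = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]

notebook = {}
for name in customers:
    # random.random() < 0.5 plays the role of the coin flip (heads = default)
    notebook[name] = "default" if random.random() < 0.5 else "variant"

# Later, tally how many pitches each group received:
pitched = {"default": 0, "variant": 0}
for group in notebook.values():
    pitched[group] += 1
```

Alongside each name you would also record whether the customer signed up, which gives you the success counts the analysis step needs.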

After a week of talking to customers you count up the number of customers you offered each option to, as well as the number that signed up with each option. Now it’s time to do some analysis. This step would normally involve lots of statistics, but, luckily, I made a tool for that! Head over to this site. Now let’s look at the data you collected.

90% confidence that the variant outperforms the default is often confidence enough to change a business practice, but the ultimate decision depends on the context.

Let’s say you were able to talk to 200 potential customers. You pitched the monthly price to 105 people (21 people signed up) and the upfront price to 95 (27 signed up). You plug those numbers into the tool and voila! As you can see, there’s almost a 90% chance the upfront price leads to more signups, which is a lot of confidence for the number of people you talked to. If you wanted to be even more sure, you could talk to more people, but for many scenarios 90% confidence is enough to make a decision. So, it looks like the upfront price gets more people to sign up, but that’s not the whole story; the second piece of information you want to look at is the expected improvement. You can expect to get 42% more signups from the upfront price. There is some variability here, and you could improve the confidence in that number by, again, talking to more people, but on average you’ll likely get close to 40% more signups. Now, taking into account how the margins of this fictitious business work, you have all of the information needed to make an informed decision. If 40% more business at a 10% discount leads to more profits (probably also accounting for hiring another 40% of a person to handle the extra work), then you’ll see success by dropping your monthly offering and just going with the upfront price.
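For the curious, here is a rough sketch of how numbers like these can be computed. It assumes a Bayesian model with flat Beta(1, 1) priors on each signup rate and plain Monte Carlo sampling; the site’s exact method may differ, so treat the numbers as approximate:

```python
import random

random.seed(42)

# The example data from the article
default_n, default_signups = 105, 21  # monthly price
variant_n, variant_signups = 95, 27   # upfront price

draws = 100_000
wins = 0
lift_total = 0.0
for _ in range(draws):
    # One posterior draw of each group's true signup rate:
    # Beta(1 + successes, 1 + failures)
    p_default = random.betavariate(1 + default_signups, 1 + default_n - default_signups)
    p_variant = random.betavariate(1 + variant_signups, 1 + variant_n - variant_signups)
    if p_variant > p_default:
        wins += 1
    lift_total += (p_variant - p_default) / p_default

confidence = wins / draws                  # roughly 0.9: chance the variant is better
expected_improvement = lift_total / draws  # roughly 0.4-0.46: relative lift in signups
```

With these counts the confidence lands around 90% and the expected relative improvement in the low-to-mid 40% range, in line with the numbers above.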

A/B testing can be applied to any business question
This was a simplified problem; perhaps you would rather keep both price offerings and simply lead with the price you know performs better. But the same methodology can be applied to any business decision. You can A/B test your company image (testing two slogans or logos), marketing campaigns (testing email openers), websites (testing multiple layouts or designs), tech products (testing different features), and anything else you can think of.

More details on the tool

When the default performs better than the variant, you’ll see an expected improvement close to or lower than 0.

The website provided only performs a one-sided test. That is, it only looks to see whether the variant performs better than the default. In many scenarios the default performs better than the variant, in which case you can still use the same tool: simply switch the numbers for the default and the variant. The tool also uses thresholds to classify the confidence level from “Very Low” to “Very High”. These are set at reasonable numbers, but context must be applied in order to interpret these classifications. For many low-risk business decisions, a medium or high confidence level might be acceptable. For high-risk business decisions, it may be necessary to achieve very high levels of confidence before making any changes. This context is very specific to each circumstance, but this tool can provide you with the information needed to make an informed decision.
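The number-swapping trick works because the two directions are complementary: the confidence that the default beats the variant is (essentially) one minus the confidence that the variant beats the default. A quick sketch, using the same hypothetical Beta(1, 1) Monte Carlo model as before as a stand-in for whatever the site computes:

```python
import random

def confidence(n_a, s_a, n_b, s_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(rate_b > rate_a) under Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(1 + s_a, 1 + n_a - s_a)
        p_b = rng.betavariate(1 + s_b, 1 + n_b - s_b)
        if p_b > p_a:
            wins += 1
    return wins / draws

# Confidence that the variant (upfront price) beats the default:
c_variant_wins = confidence(105, 21, 95, 27)
# Swap the numbers to test the opposite direction:
c_default_wins = confidence(95, 27, 105, 21)
# The two confidences sum to (essentially) 1, up to sampling noise.
```

So if the tool reports a very low confidence for your variant, swapping the inputs directly gives you the confidence that the default is the better option.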

Conclusion
A/B testing is an informative tool for making business decisions, and the barrier to entry is much lower than many websites would have you believe. This tool can allow you to analyze an A/B test without having to know any statistics. However, this is only an automated tool. There are a lot of intricacies that are glossed over, a lot of parameters that can be tuned to specific situations, and a lot of experimental design that was overlooked. If you want to improve your A/B tests without having to learn statistics, reach out to flatland.AI; we can help you with experimental design, data collection, analysis, and even get you started with machine learning. If you use this tool, reach out to logan@flatland.AI or leave a comment; I’m always looking for feedback.

