A complete guide to A/B testing in product management

Georgina Guthrie

November 08, 2022

Product A/B testing is like trying different recipes in your kitchen. You might have an old favorite that works like a charm, but it’s fun to try something new once in a while. Maybe you’ll end up liking the new recipe better, or you’ll decide your old standby is still your favorite. Either way, trying different things is always beneficial. You either discover something new or confirm you were doing it right all along — and that’s a confidence booster.

When it comes to websites and apps, the same rule holds true — except the stakes are higher. If your website isn’t performing as well as it could, you won’t convert visitors into customers, and you’ll lose out on potential revenue.

A/B testing is a powerful tool that can help you optimize products and make better decisions about your product roadmap. By trying different variations of your website or app, you’ll see which one leads to the most conversions. Simply put, A/B testing ensures your website or app is doing its job.

In this guide, we’ll cover the basics of A/B testing in product management, lead you through your first experiment, and finish with some essential tips for running successful tests.

What is product A/B testing?

Product A/B testing is a method of experimentation where you compare two or more variants of a product to see which performs better. Typically, one variant is the control (the current version of the product), and the others are treatments (the new versions you want to test).

To run an A/B test, you first need to identify a metric you want to optimize, such as conversion rate, click-through rate, or time on site. Once you’ve chosen your metric (or metrics), create two or more variants of your product and deploy each one to a subset of your users.

During the test, you’ll need to collect data and analyze the results to see which variant performed better. If the treatment outperforms the control, you can then roll it out to all users.
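In practice, each user needs to be assigned to a variant consistently, so they don’t bounce between versions on repeat visits. Here’s a minimal sketch of one common approach, deterministic bucketing by hashed user ID; the experiment name, variant labels, and user IDs below are placeholders.

```python
import hashlib

def assign_variant(user_id: str, variants=("control", "treatment"), experiment="exp-001"):
    """Deterministically assign a user to a variant.

    Hashing the user ID together with an experiment-specific name means
    the same user always sees the same variant, with no stored state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical users being routed into the test
for uid in ["user-101", "user-102", "user-103"]:
    print(uid, "->", assign_variant(uid))
```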

Although this isn’t an exhaustive list, A/B testing is useful for:

  • Testing new features before a build
  • Testing different versions of a landing page
  • Optimizing email subject lines
  • Fine-tuning onboarding flows
  • Improving in-app messaging
  • Conducting user feedback surveys

Popular tools for A/B testing

  • Optimizely – Allows you to experiment with your product without writing any code.
  • Google Optimize – Offers fast experimentation and integrates with Google Analytics.
  • Adobe Target – Lets you test different versions of web pages and personalize the user experience.
  • Userpilot – Enables you to test different versions of in-app messages and user onboarding flows.
  • Hotjar – While not strictly for A/B testing, this tool enables you to see how users interact with your product through heatmaps and session recordings.
  • Cacoo – Lets you create wireframe prototypes, share with the team, add comments, and edit as you go. 

Four types of A/B testing you need to know 

Broadly speaking, there are four main types of product A/B testing. As a product manager, you should know how to run each type and what each one achieves.

1. Feature tests

Feature testing involves evaluating new features before they’re built. Also known as pre-launch testing, this is typically used to validate hypotheses and gather data about how users will interact with a new feature. 

2. Live tests

Live testing involves evaluating different versions of a product that are already live with the goal of optimizing them. 

3. Trapdoor tests

Also known as redirection tests, trapdoor tests evaluate different versions of a product before it’s live. When a user arrives on the page, they’re redirected to a different version depending on which variant they’ve been assigned.

4. Multi-armed bandit tests (MAB) 

A/B testing is a form of split testing, where half of your traffic goes to one variant and the other half to another. This comes with a downside: a big chunk of your traffic is routed to the losing variant, which reduces sales and conversions while the test runs.

Multi-armed bandit tests use machine learning to adapt based on data gathered during the test, directing visitors toward the better-performing variant. Variants that aren’t performing so well get less traffic over time.


“Some like to call it earning while learning. You need to both learn in order to figure out what works and what doesn’t; but to earn, you take advantage of what you have learned. This is what I really like about the Bandit way of looking at the problem. It highlights that collecting data has a real cost, in terms of opportunities lost,” says Matt Gershoff, speaking to cxl.com.

Essentially, it boosts conversions while the test is in progress. The downside to MABs is that statistical certainty takes second place to conversions. 
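To make the idea concrete, here’s a simplified epsilon-greedy sketch of a multi-armed bandit; real tools use more sophisticated algorithms, and the conversion rates below are made-up placeholders.

```python
import random

# Hypothetical true conversion rates (unknown to the algorithm in practice).
TRUE_RATES = {"A": 0.05, "B": 0.08}

def epsilon_greedy(n_visitors=10_000, epsilon=0.1):
    """Gradually route more traffic to whichever variant converts best so far."""
    shown = {v: 0 for v in TRUE_RATES}
    converted = {v: 0 for v in TRUE_RATES}
    for _ in range(n_visitors):
        if random.random() < epsilon:
            variant = random.choice(list(TRUE_RATES))  # explore
        else:
            # Exploit: pick the best observed rate (unseen variants count as 100%,
            # so each gets tried at least once).
            variant = max(shown, key=lambda v: converted[v] / shown[v] if shown[v] else 1.0)
        shown[variant] += 1
        if random.random() < TRUE_RATES[variant]:
            converted[variant] += 1
    return shown, converted

print(epsilon_greedy())  # most traffic ends up on the stronger variant, B
```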

What’s multivariate testing?

Multivariate testing is similar to A/B testing, but instead of comparing two variants of a product, you’re testing changes to multiple elements on one page at the same time. Unlike MAB testing, it isn’t dynamic. So, instead of having one treatment and one control, you might have two or three treatments alongside the control, with conversion rates showing you which option is most successful.


The benefit of multivariate testing is that it can show you which elements on one page play the biggest role in helping you achieve that page’s objective.
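The flip side is that variants multiply quickly, because every combination of elements becomes its own version of the page. Here’s a quick sketch of that combinatorial growth; the element names below are invented for illustration.

```python
from itertools import product

# Hypothetical page elements, each with a few candidate versions.
headlines = ["Save time today", "Ship faster"]
hero_images = ["team-photo", "product-screenshot"]
cta_labels = ["Start free trial", "Get started", "Book a demo"]

variants = list(product(headlines, hero_images, cta_labels))
print(f"{len(variants)} page variants to test")  # 2 x 2 x 3 = 12

for headline, image, cta in variants[:3]:
    print(headline, "|", image, "|", cta)
```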

Key differences between A/B testing and multivariate testing:

  • A/B testing compares two variants of a product, while multivariate testing compares multiple variants of a product.
  • A/B testing is typically used to test one change at a time, while the multivariate approach tests multiple changes at once.
  • A/B tests are generally easier to set up and analyze than multivariate tests.

What is user testing?

User testing is an evaluation technique that involves observing users interacting with your product and collecting feedback about their experiences. User testing helps you validate design changes, assess a product’s usability, and gather qualitative data about user behavior.

There are two main types of user testing:

  • Guerrilla testing: this is a quick and informal type of user testing you can do anywhere, anytime. For example, you might approach someone in a coffee shop and ask them to try out your app for a few minutes.
  • Lab-based testing: this is a more formal type of user testing that takes place in a controlled environment, such as a usability lab. Lab-based testing is typically more expensive and time-consuming than guerrilla testing, but it can be more reliable.

User testing is an important form of product research that can help you gather data you can’t necessarily get from A/B tests or analytics. However, you shouldn’t treat it as a replacement for product A/B testing. Instead, think of user testing as complementary.

Key differences between user testing and A/B testing

  • User testing is a method of evaluation, while A/B testing is a method of experimentation.
  • User testing involves observing real users using your product, while A/B testing involves comparing two variants of a product.
  • User testing can help you gather qualitative data, while A/B testing is for gathering quantitative data.

Why is A/B testing important for product managers?

A/B testing helps you test your assumptions about how users will interact with your product. It also provides data that helps you make decisions about which product changes you need to implement (and which you shouldn’t). Product managers need a strong understanding of how to design and implement A/B tests, as well as how to interpret the results.

What are some common mistakes product managers make with A/B testing?

  • Testing every tiny change: some changes are too small to warrant an A/B test, while other changes are too complex.
  • Running every test to completion: you might want to stop a test early if you see a clear winner.
  • Drawing conclusions from small samples: run A/B tests on a sample size large enough for the results to be reliable.
  • Making it a one-time test: your website should be treated as an ever-evolving thing, and as such, product managers need to take a continuous optimization approach. That way, you can ensure your website or app is firing on all cylinders and delivering the highest possible conversion rate. 
  • Not realizing the importance of testing: fine-tuning your product means higher conversions. Over time, this allows you to derive the greatest value from every dollar you spend on customer acquisition. 
  • Rushing the process: A/B testing is something you should have simmering away in the background all the time. The more you test and optimize, the more your site will convert and keep pace with the competition. 
  • Not knowing when to trust your gut: sometimes, you might have a strong feeling about which variant will perform better, even if the data doesn’t necessarily reflect that. Unlike MAB tests, standard A/B tests don’t shift traffic toward the better-converting variant mid-test, so pay attention to your intuition too.

How to get started with product A/B testing

Here’s a general step-by-step guide for product managers. And once we’ve gone through a general overview, we’ll share some examples and tips. 

Step 1: define your goals

You can’t run a successful test without establishing goalposts or metrics for success. What do you hope to learn from the A/B test? Develop a hypothesis, such as, ‘I think variant B will result in more conversions than variant A.’ Then, make sure your team is aligned on the goal of the test, what success looks like, and how to measure performance.

Product managers might have a variety of goals when it comes to A/B testing, from lifting engagement and conversion rates to improving user satisfaction.

Step 2: choose your metric

Which metric will you use to measure the success of your A/B test? There are several to choose from, depending on your product goals.

Some common metrics include (but are not limited to):

  • Engagement: this is time spent on your site, the number of page views, or the number of clicks.
  • Conversion rate: this is the number of people who take the desired action, such as making a purchase or signing up for a newsletter.
  • User satisfaction: this represents the user’s overall impression of the product. Use surveys, rating systems, and interviews to gauge satisfaction.

Step 3: identify your hypothesis

What do you think will happen as a result of the A/B test? You’ll need to make a prediction about which variant is more likely to be successful.

Base the hypothesis on your goals for the test. For example, if you’re trying to increase user engagement, you might hypothesize that Variant A will result in more engaged users than Variant B.

Step 4: design the test

Now it’s time to decide on the specific changes you want to test and how you want to measure the results.

Some common things to consider when designing an A/B test include:

  • The independent variable: this is the variable you’re changing to test the hypothesis, i.e. the change shown to the treatment group (Variant B, for example).
  • The dependent variable: this is the variable you’re measuring to see if there’s a difference between the groups.
  • The control group: this is the group that serves as a baseline for comparison. In most cases, it will be the group that doesn’t receive any change (Variant A, for example).
  • The sample size: this is the number of users who will participate in the test. Choose a sample size large enough to produce reliable results (see the sketch after this list).
  • Length of testing phase: you’ll need to decide how long the test needs to run to get accurate results.
  • Testing metrics: this includes how you’ll track the data and analyze it.
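For conversion-style metrics, a rough sample size can be estimated from the baseline rate and the smallest lift you care about detecting. Here’s a minimal sketch using the standard two-proportion formula; the 5% baseline and 6% target are placeholder numbers you’d swap for your own.

```python
from scipy.stats import norm

def sample_size_per_group(baseline, target, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a lift from
    `baseline` to `target` conversion rate (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    variance = baseline * (1 - baseline) + target * (1 - target)
    return int((z_alpha + z_beta) ** 2 * variance / (baseline - target) ** 2) + 1

# Hypothetical: detecting a lift from a 5% to a 6% conversion rate
print(sample_size_per_group(0.05, 0.06))  # roughly 8,000 visitors per variant
```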

Step 5: run the test

It’s time to roll! Once you’ve designed your test, implement the changes you want to evaluate, run the experiment, and track the results.

Step 6: analyze the results

Using the metrics you’ve chosen, assess the outcome of your tests and how it relates to your hypothesis. You’ll need to compare the two (or more) variants to see if there’s a statistically significant difference.

There are a variety of statistical methods you can use to analyze the results of an A/B test, but here are two popular options.

  • Chi-squared test: use this to compare two categorical variables.
  • ANOVA test: ANOVA stands for ‘Analysis of Variance.’ Use it to test for a statistically significant difference among the mean values of two or more groups.
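As an illustration, here’s how a chi-squared test on conversion counts might look in Python with scipy; the counts below are placeholder data.

```python
from scipy.stats import chi2_contingency

# Hypothetical results: [converted, did not convert] for each variant.
observed = [
    [120, 2380],  # Variant A (control): 120 of 2,500 converted
    [160, 2340],  # Variant B (treatment): 160 of 2,500 converted
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.4f}")
print("Significant difference" if p_value < 0.05 else "No significant difference detected")
```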

Step 7: make a decision

After analyzing the A/B test results, it’s time to decide whether or not to implement the change based on your findings. If the results are statistically significant, you can feel confident that the change is beneficial and move forward with implementing it.

If the results are inconclusive or insignificant, you’ll have to decide whether to continue running the test. Further evaluation may help determine if you should try a different variant or stick with your current design. 

A/B testing: some real-world examples 

Here are some illustrations of A/B testing in action.

Example 1: a product manager testing a new feature before building

As a product manager, you want to increase user engagement on your site. You hypothesize that adding a new feature will result in more engaged users.

You design an A/B test where the independent variable is the presence of the new feature and the dependent variable is time on site. The control group (Variant A) is the group of users who don’t see the new feature, and the treatment group (Variant B) is the group of users who do see it.

You run the test for a month. After analyzing the data, you find a statistically significant difference between the two groups in time on site. The treatment group (Variant B) spends more time on the site than the control group (Variant A). Based on the results of the test, you decide to go ahead and implement the new feature.
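Assuming you can export the raw time-on-site numbers for each group, the analysis step for this example might look something like the following sketch; the samples here are simulated stand-ins for real analytics data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Simulated time-on-site samples (minutes) standing in for analytics exports.
control = rng.normal(loc=4.2, scale=1.5, size=2000)    # Variant A: no new feature
treatment = rng.normal(loc=4.6, scale=1.5, size=2000)  # Variant B: new feature

t_stat, p_value = ttest_ind(treatment, control)
print(f"Mean time on site: A = {control.mean():.2f} min, B = {treatment.mean():.2f} min")
print(f"p-value: {p_value:.4f}")  # well below 0.05 here, so the difference is significant
```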

Example 2: a product manager A/B testing a change to the product pricing

You want to increase product sales, and you hypothesize that a price change will lead to more conversions. You design an A/B test where the independent variable is the product price, and the dependent variable is the number of sales. The control group (Variant A) sees the old price, and the treatment group (Variant B) sees the new price.

You run the test for two weeks and find no statistically significant difference in sales between the two groups. Based on the test results, you decide not to implement the price change. You also make a note to revisit the test in six months, when economic conditions may be different. 

Example 3: a product manager A/B testing a change to landing page copy

You want to increase conversion rates on your site and hypothesize that updated copy will result in more conversions.

You design an A/B test where the independent variable is the landing page copy, and the dependent variable is the page conversion rate. The control group (Variant A) sees the old copy, and the treatment group (Variant B) sees the new copy.

You run the test for one month and find that the treatment group (Variant B) has a significantly higher conversion rate than the control group (Variant A). The results are a clear sign the new copy is better at converting customers.

A/B testing: tips and best practices

Here are some tips to help you get the most out of your test:

  • Have a clear vision: make sure you have a well-defined hypothesis before starting the test. Otherwise, you might not have a clear baseline to evaluate.
  • Use a big sample size: design a test that allows you to collect enough data to get statistically significant results. If you don’t have enough traffic, your results may not be accurate.
  • Track the right metric: the metric should align with your hypothesis.
  • Give the test enough time to run: A/B tests generally need to run for at least two weeks in order to get accurate results. 
  • Be patient: don’t make the mistake of changing the variable you’re testing before the test is complete, as this will invalidate your results. Also remember that A/B testing is a process of fine-tuning over time, with no end date. Markets develop, customer needs change, and new technology becomes available. So, think of it as an ongoing journey rather than a one-time exercise.
  • Use tools built for the job: diagramming tools can help you with your A/B testing. With Cacoo, you can create wireframes, save different versions, visually represent your data, and share it all in real-time. 
Wireframes are a product management essential. With Cacoo, you can create different designs in a snap with drag-and-drop templates, then tweak them as A/B testing data comes in.

Final thoughts 

Letting conversions slide is like filling a leaky bucket with water. Why spend all that time, energy, and money on marketing only to have those customers leave as soon as they hit your website or app? 

Continuous optimization via A/B testing gives you a higher chance of converting new customers and building loyalty with existing ones, because you have a product they like. It also helps you keep pace with the competition.

If you have a large audience and low conversions, and you want to boost sales, A/B testing is a good place to start. Through testing, you can create the best version of your product and channel more resources into attracting new customers to a product they’ll love.
