
Statistical background of CPP A/B Testing

CPP A/B Testing uses statistical methods to estimate how much data you need before you can compare custom product pages with confidence.

This article explains the statistical model behind CPP A/B testing and what desired precision means. If you want to see how the system turns these inputs into an estimated duration, see how test duration is calculated in CPP A/B Testing.

Nature of the data in CPP A/B Testing

In a custom product page test, each time a user lands on a product page, one of two outcomes occurs:

  • They install (success), or
  • They do not install (failure)

Each user interaction can be treated as a Bernoulli trial (an outcome that is either success or failure).

When this trial is repeated n times (for example, 1,000 users), the number of conversions follows a binomial distribution. If the sample size is large enough, this distribution approximates a normal distribution.
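As a sketch, the Bernoulli/binomial model above can be simulated in Python. The conversion rate and sample size below are illustrative values, not figures from a real test:

```python
import random

random.seed(42)

p = 0.05   # assumed true conversion rate
n = 1000   # number of users shown the page

# Each user visit is one Bernoulli trial: 1 = install, 0 = no install.
outcomes = [1 if random.random() < p else 0 for _ in range(n)]

# The total number of conversions across n trials follows a
# Binomial(n, p) distribution.
conversions = sum(outcomes)
print(f"Conversions out of {n} users: {conversions}")
print(f"Observed conversion rate: {conversions / n:.3f}")
```

Running this repeatedly gives slightly different conversion counts each time, which is exactly the random variation the model accounts for.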


The internal logic uses this model to plan how much traffic is needed to reach reliable results.

Example based on custom product page conversion rate results

Let’s say the true conversion rate of a custom product page variant is 5%.

  • The variant is shown to 1,000 different users, and assume the process is repeated many times (for example, 1,000 different days, with 1,000 users per day).
  • At the end of each day, you record the conversion rate from that day’s 1000 users (for example, 4.8%, 5.3%, 4.9%, etc.).
  • Over time, you accumulate 1000 data points made up of these sample means.
  • Initially, these values may appear scattered, but according to the Central Limit Theorem (CLT), the distribution of these rates will approach a normal distribution. In other words, if you plot a histogram of these 1000 daily conversion rates, you’ll get a bell curve.
  • The center of this bell curve represents the true average conversion rate (5%), while the spread around it represents the random variation.
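The thought experiment above can be sketched in Python. The values (a 5% true rate, 1,000 users per day, 1,000 days) follow the example and are illustrative:

```python
import random

random.seed(0)

p = 0.05            # assumed true conversion rate
users_per_day = 1000
days = 1000

# Record one sample mean (that day's conversion rate) per day.
daily_rates = []
for _ in range(days):
    conversions = sum(1 for _ in range(users_per_day) if random.random() < p)
    daily_rates.append(conversions / users_per_day)

# By the Central Limit Theorem, these daily rates cluster around the
# true rate p; a histogram of daily_rates would look like a bell curve.
mean_rate = sum(daily_rates) / days
print(f"Mean of daily conversion rates: {mean_rate:.4f}")  # close to 0.05
```

Plotting a histogram of `daily_rates` reproduces the bell curve described above: centered near 5%, with the spread reflecting random variation.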


Why results vary (random variation)

Even if you show the same ad and product page, user behavior can change from day to day.

For example, if an ad group’s conversion rate is around 5% on average:

  • One day it may look like 5.7%
  • Another day it may look like 4.3%

This is not a systematic error. It is a random variation that happens naturally when you observe user behavior over time.

Standard Error (SE)

To measure natural variability in conversion rate estimates, Standard Error (SE) is used.

Standard Error describes how much the observed conversion rate can deviate from the true conversion rate on average.

The formula is:

Standard Error = √(p(1−p) / n)

Where:

  • p is the estimated conversion rate
  • n is the sample size (number of users tested)

As n increases, random fluctuations balance out and the standard error decreases. Lower standard error means a more precise estimate of the conversion rate.
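The formula translates directly into a small Python helper. The conversion rate and sample sizes below are illustrative:

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of an estimated conversion rate p with n users."""
    return math.sqrt(p * (1 - p) / n)

# A 5% conversion rate at increasing sample sizes: SE shrinks as n grows.
for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7}: SE = {standard_error(0.05, n):.5f}")
```

Note that SE shrinks with the square root of n, so getting a 10× more precise estimate requires roughly 100× more users.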

Margin of error (ε)

When you plan a test, one of the first questions is: How precise do I want the results to be?

In CPP A/B Testing, desired precision defines the margin of error you are willing to accept in your results. In other words, it defines how much the test result is allowed to vary from the true value.

Margin of error is defined as:

ε = z × √(p(1−p) / n)

Where:

  • ε is the margin of error
  • z is the value selected based on the confidence level
  • p is the estimated conversion rate
  • n is the sample size

Example z-values used in duration planning:

  • For a 95% confidence level, z ≈ 1.96
  • For a 90% confidence level, z ≈ 1.645
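Putting the formula and z-values together, a margin of error can be computed like this (the 5% conversion rate and 1,000-user sample are illustrative):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error eps = z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# 5% conversion rate, 1,000 users, 95% confidence (z ≈ 1.96).
eps = margin_of_error(0.05, 1000)
print(f"Margin of error: ±{eps * 100:.2f} percentage points")
```

So with 1,000 users and a 5% conversion rate, the observed rate would typically land within about ±1.35 percentage points of the true rate at 95% confidence.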

How desired precision settings work in CPP A/B Testing

When you create a test, you select a desired precision value:

  • 1% precision means a narrower margin of error (more precise results).
  • 10% precision means a wider margin of error (less precise results).
  • To maintain reliability, the system caps precision at 10%.

CPP A/B Testing supports desired precision values from 1% to 10%.

Sample size (n) and why it matters

To reach a given margin of error, the system uses a sample size calculation:

n = z² × p(1−p) / ε²

This shows the trade-off:

  • Smaller margin of error (more precision) requires more users.
  • Larger margin of error (less precision) requires fewer users.
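The trade-off can be sketched with the sample size formula above. This assumes ε is expressed in absolute percentage points (e.g. 0.01 for 1%); the values are illustrative:

```python
import math

def required_sample_size(p: float, eps: float, z: float = 1.96) -> int:
    """Smallest n satisfying n = z^2 * p * (1 - p) / eps^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / eps ** 2)

# 5% conversion rate, 95% confidence, at the two precision extremes.
for eps in (0.01, 0.10):
    print(f"eps = {eps:.2f}: n = {required_sample_size(0.05, eps)}")
```

Because ε is squared in the denominator, tightening the margin of error from 10% to 1% multiplies the required sample size by 100, which is why tighter precision settings produce much longer estimated test durations.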

Need more help?

If you have further questions on the process, contact your dedicated Customer Success Manager or contact the support team via live chat!