
About CPP A/B Testing

CPP A/B Testing helps you compare Apple Ads custom product pages and see which variant performs best.

This article explains what CPP A/B Testing does and what the system automates from setup to monitoring. If you want to set up a test now, see your test setup options. If you’d rather check what you can test and the key constraints first, see the requirements and limits.

How CPP A/B Testing works

A typical test includes these steps:

  1. Choose the test setup you need (for example, testing multiple custom product pages within one ad group or testing multiple ad groups with a single product page).
  2. Choose the test method (Parallel or Switch).
  3. Select your desired precision (margin of error, 1%–10%); the system uses a fixed 90% confidence standard.
  4. The system calculates the test duration (and, for Switch tests, the switching schedule) based on the original ad group’s recent traffic and fluctuations.
  5. Monitor performance and confidence signals in the dashboard.
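The relationship between precision and duration in steps 3–4 can be sketched with the standard sample-size formula for a proportion. This is only an illustration of the idea, not the platform’s published calculation; the z-score, baseline rate, and tap counts below are assumptions.

```python
import math

Z_90 = 1.645  # z-score for the fixed 90% confidence standard

def samples_per_variant(margin_of_error, baseline_rate=0.5):
    """Samples needed per variant; baseline_rate=0.5 is the
    conservative worst case when the true rate is unknown."""
    return math.ceil(Z_90 ** 2 * baseline_rate * (1 - baseline_rate)
                     / margin_of_error ** 2)

def estimated_days(margin_of_error, daily_taps, num_variants):
    """Rough duration given the ad group's recent daily tap volume."""
    total = samples_per_variant(margin_of_error) * num_variants
    return math.ceil(total / daily_taps)

# e.g. a 5% margin of error at 90% confidence needs ~271 taps
# per variant; at 100 taps/day, a 2-variant test runs ~6 days.
```

The sketch shows why a tighter margin of error (say 1% instead of 10%) lengthens the test: required samples grow with the inverse square of the margin.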

What CPP A/B Testing does

CPP A/B Testing automates the main parts of the workflow:

  • Setup automation
    • The platform duplicates the selected ad group or ad automatically based on the chosen test method.
    • You can run tests with 2–4 custom product pages (including the default product page if desired).
    • Test duration and switch intervals (hourly, daily, weekly) are automatically calculated according to your selected desired precision (1%–10%).

  • Traffic handling
    • Parallel method: keeps all variants live at the same time, within the same timeframe.
    • Switch method: shows only one variant at a time, using switching periods designed to neutralize seasonality effects.
  • Duration estimation
    • Uses the selected ad group’s last 28-day traffic as a benchmark.
    • Uses your selected desired precision (1%–10%) and the fixed 90% confidence standard to calculate test duration.
  • Safe test environment and reversibility
    • During the test, the original ad group is paused, so no outside influence affects the experiment.
    • Automations, Smart Bidding, and Budget Allocation actions are automatically disabled for test entities to preserve data accuracy.
    • If you do not want your strategies deactivated during the test, you can select a different test method that supports that case.
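The Switch method’s idea of rotating variants to neutralize seasonality can be illustrated with a minimal round-robin sketch (a hypothetical helper, not the platform’s actual scheduler): when the number of variants shares no factor with the weekly cycle, daily switching lets each variant eventually serve every weekday.

```python
from itertools import cycle

def switch_schedule(variants, num_periods):
    """Round-robin assignment of one variant per switching period.

    With daily switching and a variant count coprime to 7, each
    variant eventually serves every weekday, which helps average
    out day-of-week seasonality.
    """
    rotation = cycle(variants)
    return [next(rotation) for _ in range(num_periods)]

# Two variants switched daily over two weeks: each variant
# covers every weekday exactly once.
schedule = switch_schedule(["CPP-A", "CPP-B"], 14)
```

Hourly and weekly intervals follow the same pattern, just with a different period length.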

What CPP A/B Testing helps you decide

CPP A/B Testing is designed to help you compare variants and decide which product page or ad group performs best, based on metrics and confidence signals in a controlled environment.

If no variant clearly outperforms the others, you can still review metrics such as impressions, conversion rate, and tap-through rate to choose the most promising option. A “no significant difference” result means the tested variants are likely to perform similarly over time.

Need more help?

If you have further questions about the process, contact your dedicated Customer Success Manager or reach the support team via live chat!