
Simplify and automate A/B testing for custom product pages

CPP A/B Testing enables you to test different Apple Ads custom product pages with minimal manual effort and statistically reliable outcomes. Whether you're optimizing ad group structure or custom product page variants, this tool ensures that your tests are set up right and monitored with precision.

Why use CPP A/B Testing?

Manual A/B testing is both time-consuming and prone to errors. CPP A/B Testing automates the entire process, from setup to monitoring, and resolves the main issues app marketers run into when handling tests manually.

Let’s dive into the methods you can use to eliminate these problems and reach test results you can count on.

Test setup options

The journey begins in the left‑hand navigation: open CPP A/B Testing and press Create New. Give the experiment a descriptive name; this name will appear in every table, log, and alert that follows.

Before selecting anything in the creation screen, it is helpful to understand the two test methods and their two implementation types. Review the options below, decide which one fits your strategy, and then make your choice when the platform prompts you.

What is the Switch test method?

A switch test shows one custom product page at a time to your audience, then rotates to another using a switching mechanism that ensures each variant receives equal exposure. The rotation occurs within specified timeframes (hourly, daily, or weekly), allowing all variants to receive traffic under comparable conditions.

Switching comes in two flavors:

  • Switching ad groups
    The platform clones your chosen ad group for every custom product page variant (or the default product page). Only one clone is live at any moment; the others remain paused until their turn arrives.
  • Switching ads
    Everything is contained within a single ad group. The tool creates one ad per custom product page variant and toggles ad statuses to give each variant equal exposure. A duplicate of the default product page can also be included in this test method.
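To make the switching mechanics concrete, here is a minimal sketch in Python of how a round-robin rotation could keep exactly one variant live per switch window. All names are hypothetical; the platform's internal implementation is not public.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Variant:
    name: str
    active: bool = False

def rotate(variants: list[Variant], order) -> Variant:
    """Pause every clone, then activate the next one in round-robin order,
    so exactly one variant is live during each switch window."""
    for v in variants:
        v.active = False               # pause all clones
    live = next(order)                 # next variant in the rotation
    live.active = True                 # only this clone serves traffic
    return live

# Hypothetical usage: three variants switched once per (hourly) window.
variants = [Variant("default_page"), Variant("cpp_a"), Variant("cpp_b")]
order = cycle(variants)
for window in range(6):                # six switch windows
    live = rotate(variants, order)
    print(f"window {window}: {live.name} is live")
```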

What is the Parallel test method?

A parallel test keeps all variants live simultaneously, so the comparison comes from side‑by‑side traffic rather than variant rotation.

Parallel also offers two distinct paths:

  • One ad group, multiple custom product pages

You pick one ad group and choose up to four custom product pages (the default product page can be one of them). The original ad group is paused, and all duplicate ads run side‑by‑side for the full test window, letting you see which page wins with identical keyword, bid, and audience settings over the same time frame.

  • Multiple ad groups, one custom product page

You select up to four existing ad groups and then pick one custom product page (selecting the default is not possible in this case). The system attaches the selected custom product page to each ad group. Every ad group, with the same custom product page, runs concurrently, so you can spot which ad‑group setup (keywords, bids, audiences) delivers the best results for that page.


Let’s dive into how you can create a test with each method and type. 

How to create a test with the Switch method

You test several variants by letting the platform create and cycle duplicate ads or ad groups. By keeping all but one variant paused, the platform cancels out seasonality and time‑of‑day bias and delivers a result with 90% statistical confidence. As mentioned above, there are two types of tests you can run: switching ad groups and switching ads. Let’s take a look at how you can create a test with either.
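Before diving in: the platform does not publish its exact statistics, but one standard way to arrive at a 90% confidence verdict between two conversion rates is a two-proportion z-test. Here is a hedged sketch with made-up numbers, for intuition only:

```python
from math import sqrt, erf

def confidence_b_beats_a(conv_a: int, taps_a: int, conv_b: int, taps_b: int) -> float:
    """One-sided confidence that variant B's conversion rate exceeds A's,
    via a standard two-proportion z-test."""
    p_a, p_b = conv_a / taps_a, conv_b / taps_b
    pooled = (conv_a + conv_b) / (taps_a + taps_b)   # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / taps_a + 1 / taps_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))              # normal CDF of z

# Made-up numbers: B converts 260/5000 taps vs. A's 220/5000.
print(f"{confidence_b_beats_a(220, 5000, 260, 5000):.1%}")  # ~96.9%, above 90%
```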

Switching ad groups

In this version, each custom product page, as well as the default page, receives its own duplicate ad group. Only one clone is active at any moment; the others stay paused until their turn. The test environment is completely isolated from external influences to ensure that the test yields accurate results.

How to set it up

  1. Open CPP A/B Testing > Create New, then name the experiment.
  2. Choose an ad group that has received traffic in the last 28 days to use in this test environment.
  3. In the Ad Creatives section, pick two to four custom product pages, including the default product page.
  4. On the Testing Method screen, select Switch and keep Ad Groups as the switch entity. Then decide on the test duration:
    • Switch time → hourly, daily, or weekly (the platform suggests the fastest viable option),
    • Desired precision → 1% (default) up to 5%.

As you adjust these settings, the platform displays the calculated start date, end date, and total duration.

  5. Review the summary and press Start.

The system duplicates the ad group for every variant, pauses the original, and begins the first active/paused cycle.
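How might a precision target turn into a duration? A common approach is to estimate the sample size whose margin of error matches the desired precision, then divide by expected traffic. The sketch below is an illustration under assumed inputs, not the platform's published formula (1.645 is the z-value matching 90% confidence):

```python
from math import ceil

Z_90 = 1.645  # z-value for 90% confidence

def required_taps(baseline_cr: float, precision: float) -> int:
    """Taps per variant so the conversion-rate estimate's margin of error
    stays within `precision`: n = z^2 * p * (1 - p) / e^2."""
    return ceil(Z_90**2 * baseline_cr * (1 - baseline_cr) / precision**2)

def estimated_days(n_variants: int, taps_per_day: float,
                   baseline_cr: float, precision: float) -> float:
    """In a switch test only one variant is live at a time, so the total
    duration is the per-variant sample multiplied by the variant count."""
    return n_variants * required_taps(baseline_cr, precision) / taps_per_day

# Made-up inputs: 3 variants, ~800 taps/day, 5% baseline CR, 1% precision.
print(f"{estimated_days(3, 800, 0.05, 0.01):.1f} days")   # ~4.8 days
```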

Automations will never pause or activate ad groups involved in tests, regardless of user settings, as this would compromise the integrity of the tests. The selected original ad group is paused during the test period. Consequently, Automations, Smart Bidding, and Budget Allocation will not take any actions since these tools do not operate on paused entities.

When the timer ends, the clones are paused, the original ad group resumes, and the table line flips from 'Running' to 'Completed'. If you stop the test before its scheduled end date and time, the system will automatically revert to the original asset, with no manual reactivation required.

Switching ads

In this test type, you keep one ad group and add one ad per duplicated custom product page (and, if included, a duplicate of the default product page); the algorithm then switches those ads.

How to set it up

  1. Start a new test in CPP A/B Testing and enter a name.
  2. Select an ad group that receives enough traffic to use in this test environment.
  3. In Ad Creatives, select two to four custom product pages (including the default page).
  4. On the Testing Method screen, choose Switch and change the switch entity to Ads. A checkbox (on by default) instructs CPP A/B Testing to exclude Automations, Smart Bidding, and Budget Allocation actions during the test run. We recommend keeping the box checked for optimal results.

After selecting the method, choose the switch time preset and desired precision:

  • Switch time → hourly, daily, or weekly (the platform suggests the fastest viable option),
  • Desired precision → 1% (default) up to 5%.

As you adjust these settings, the platform displays the calculated start date, end date, and total duration. Then, confirm the auto-calculated start and end dates by clicking Continue.

  5. Review the summary one more time and click Start. The tool creates additional ads, activates just one, and rotates them precisely on schedule.

When using this test method, if you uncheck the protection checkbox, Automations, Smart Bidding, and Budget Allocation will continue to take actions on the selected ad group, as this method operates at the ad level without pausing the ad group itself. However, Automations will never pause or activate ad groups involved in tests, regardless of user settings, as this would compromise the integrity of the tests.
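The rules above boil down to a simple gate: pausing or activating a tested entity is always blocked, while other automated actions depend on the checkbox. A hypothetical sketch of that logic (names are illustrative, not the platform's actual code):

```python
from dataclasses import dataclass

@dataclass
class AdGroup:
    name: str
    in_active_test: bool = False   # true while a CPP test runs on it
    protected: bool = True         # the checkbox from step 4

def automation_may_act(ad_group: AdGroup, action: str) -> bool:
    """Gate automated actions on an ad group that is part of a test.

    Pausing/activating a tested ad group is always blocked (it would skew
    exposure); other actions, such as bid changes, are blocked only while
    the protection checkbox is on.
    """
    if not ad_group.in_active_test:
        return True                          # no test, no restrictions
    if action in ("pause", "activate"):
        return False                         # always blocked during a test
    return not ad_group.protected            # respect the checkbox for the rest

group = AdGroup("brand_us", in_active_test=True, protected=False)
print(automation_may_act(group, "pause"))    # False: rotation stays intact
print(automation_may_act(group, "set_bid"))  # True: the checkbox was unchecked
```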

When the timer ends, the duplicate ads are paused, the original ad resumes, and the table line flips from 'Running' to 'Completed'. If you stop the test before its scheduled end date and time, the system will automatically revert to the original asset, with no manual reactivation required.

How to create a test with the Parallel method

A parallel test keeps every custom product page variant live simultaneously, allowing performance to be compared side by side instead of through timed rotation. You have two options for running it: one ad group with multiple custom product pages, or multiple ad groups with the same custom product page. Choose the path that aligns with your current strategy and structure your test accordingly. Let’s look at how to do that for each option.

One ad group, multiple custom product pages

When you select a single ad group and assign several custom product pages (including the default product page), the system creates a separate copy of that ad group for the test. The original ad group is paused to ensure that the test environment is unbiased. Because every active, duplicated ad shares the same keywords, bids, and audiences, the only variable you’re measuring is the custom product page performance.

How to set it up

  1. Open CPP A/B Testing > Create New and name the experiment.
  2. Select the ad group you’d like to test with.
  3. In Ad Creatives, select two to four custom product pages (the default page can be one of them).
  4. On the Testing Method screen, choose Parallel. The platform estimates the test length based on your desired precision and shows the start/end dates.


    In some tests, Apple’s traffic distribution can disproportionately favor one variant over the others. If one variant receives significantly more traffic than the average of the rest, the system can detect the imbalance and temporarily pause the overperforming variant. Enable the “Stabilize Traffic” setting to protect the fairness of your test (a sketch of this logic follows these steps). If your priority is to keep all variants live at all times, you may want to disable this option when running a Parallel test.
  5. Verify the summary and click Start to begin testing your custom product pages.
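As referenced in the note above, here is a rough sketch of how traffic stabilization could detect an overexposed variant. The 1.5x threshold is an assumption for illustration; the platform's real rule is not documented:

```python
def overexposed_variant(impressions: dict[str, int], factor: float = 1.5) -> str | None:
    """Return the variant to pause temporarily, or None if traffic is balanced.

    A variant is flagged when its impressions exceed the average of the
    other variants by `factor` (1.5x is an assumed threshold)."""
    for name, imps in impressions.items():
        others = [v for k, v in impressions.items() if k != name]
        average = sum(others) / len(others)
        if average > 0 and imps > factor * average:
            return name
    return None

# Made-up counts from a running parallel test:
counts = {"cpp_a": 9200, "cpp_b": 3100, "cpp_c": 2800}
print(overexposed_variant(counts))   # "cpp_a": paused until traffic evens out
```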

Automations will never pause or activate ad groups involved in tests, regardless of user settings, as this would compromise the integrity of the tests. The selected original ad group is paused during the test period. Consequently, Automations, Smart Bidding, and Budget Allocation will not take any actions, since these tools do not operate on paused entities.

When the timer ends, the clones are paused, the original ad group resumes, and the table line flips from 'Running' to 'Completed'. If you stop the test before its scheduled end date and time, the system will automatically revert to the original asset, with no manual reactivation required.

Multiple ad groups, one custom product page

This variant answers a different question: Which ad‑group setup works best for a specific custom product page? You assign the same custom product page to several existing ad groups; the tool creates an ad with that page under each ad group, and the ads run concurrently. Keep in mind that this method does not allow including the default product page in the test environment.

How to set it up

  1. Go to CPP A/B Testing > Create New and give the test a name.
  2. Pick the ad groups you’d like to test with (up to four).
  3. In Ad Creatives, select a single custom product page (as previously stated, choosing the default is not possible). The system attaches the selected custom product page to every selected ad group and creates an ad for each ad group with the chosen custom product page.
  4. The tool recognizes that you’re running multiple ad groups and locks the tested entity to Parallel > Ad Groups, so no method selection is needed. Choose your desired precision (1%–5%) in the Test Duration section after selecting your custom product page. The platform calculates the overall test duration and displays start/end times.
  5. Confirm the summary and hit Start. All ad groups with the same product page will run side‑by‑side for the entire test period.

During the test period, the system blocks any automated action that would activate or pause these ad groups, while other automated functions continue to operate, since the ad groups keep their own distinct structural configurations during testing.

When the test ends, test-related ads are paused, and original ad group activity continues as configured. No manual intervention is required for the test variants; the test table line automatically updates from 'Running' to 'Completed', and the test results will be available. If you stop the test before its scheduled end date and time, the system will automatically revert to the original asset, with no manual reactivation required.

Monitoring health

After you launch a test, you track everything from the CPP A/B Testing dashboard. Each experiment appears as a single row, displaying its name, method, precision target, start and end dates, and a status badge indicating whether it is Running, Completed, or Stopped. Click the little chevron on the right side of the row to open the live view.

Inside the expanded panel, you will see two tabs: Performance and Logs. A Health Check tab is planned for a later release, so we will focus on the first two.

Performance tab

The system opens this tab by default and shows the table view. A date picker on the right defaults to the period between the test’s start and end dates, but you can widen or narrow the window for extra context.

The table lists every variant in the first column, then shows Impressions, Tap‑Through Rate, Conversion Rate, Cost Per Tap, Average CPA, and a Confidence Level badge. If you’re unsure what a metric means, hover over the tooltips in the Performance tab; these short explanations help you interpret the results confidently. You can also see visual previews of your custom product pages directly in this view, helping you connect creative elements with performance.

The Confidence Level badge turns green when a variant reaches 90 percent confidence and gray while it is still gathering evidence. When one row is green and clearly ahead, you have enough proof to end the test early if you wish.

To help compare performance across variants, a benchmark based on the original product page is shown for each metric; to see it, simply hover over the metric. Additionally, each value is paired with a dynamic badge that reflects how much it has changed relative to the baseline, and whether that change is positive or negative. These features make it easier to identify standout variants at a glance.
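As an illustration, a badge like this can be derived from a simple percent change against the benchmark. The formatting below is hypothetical, not the platform's exact logic:

```python
def delta_badge(value: float, benchmark: float, higher_is_better: bool = True) -> str:
    """Format a badge showing change versus the original page's benchmark."""
    change = (value - benchmark) / benchmark * 100
    positive = (change > 0) == higher_is_better   # e.g. a lower CPA is good
    arrow = "▲" if positive else "▼"
    return f"{arrow} {abs(change):.1f}%"

# Made-up values: variant CR 5.6% vs. benchmark 5.0%; CPA $2.10 vs. $2.40.
print(delta_badge(0.056, 0.050))                        # ▲ 12.0%
print(delta_badge(2.10, 2.40, higher_is_better=False))  # ▲ 12.5%
```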

Pro Tip: If you want to explore additional metrics, including custom or cohort-based KPIs, use the button next to the variant name to access a filtered view of the asset.

If you prefer lines to numbers, click the chart icon next to the table icon. The grid disappears, and a chart takes its place. A drop‑down in the upper‑left corner lets you switch the metric from Conversion Rate to anything else: Impressions, Tap‑Through Rate, Installs, Cost Per Tap, and so on.

Each line on the chart represents a variant, either an ad or an ad group, depending on your setup. The colors match the legend at the bottom. Hover anywhere on the graph to read the precise value for that day. 

Logs tab

The Logs tab shows a time‑stamped record of every major event. The table has two columns: the date of the change and the action.

  • Started – The moment the test became active.
  • Completed – The exact time the test reached its scheduled end.
  • Stopped – Indicates someone ended the test early.


Use this list to verify that the test kicked off on schedule and ran for the expected duration, or to trace when it was stopped early by a team member or completed.

That's a wrap on CPP A/B Testing! If you have issues or questions, don't hesitate to get in touch with your customer success manager or contact us via live chat!