Frequently asked questions
This page answers common questions about CPP A/B Testing.
What does “desired precision” mean, and how does it affect the test?
Desired precision defines the margin of error you are willing to accept in your test results, especially for key metrics like Conversion Rate (CR) and Tap-Through Rate (TTR).
- 1% precision means a narrower margin of error and more accurate results.
- 10% precision means a wider margin of error and less accurate results.
- To maintain reliability, the system caps precision at 10%.
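As a rough illustration (not the exact formula the system uses), the standard normal-approximation sample-size calculation shows why a tighter precision target requires far more traffic. The `required_sample_size` helper, the 5% baseline conversion rate, and the interpretation of precision as an absolute margin of error are all assumptions for this sketch:

```python
import math

def required_sample_size(p, margin, z=1.645):
    """Approximate taps needed to estimate a rate p within +/- margin,
    using the normal approximation (z = 1.645 for ~90% confidence)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Tightening precision from 10% to 1% multiplies the required traffic:
print(required_sample_size(p=0.05, margin=0.01))  # 1% precision
print(required_sample_size(p=0.05, margin=0.10))  # 10% precision
```

Because the margin of error appears squared in the denominator, halving it roughly quadruples the traffic each variant needs.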
To learn more, go to Statistical background of CPP A/B Testing
How does traffic stabilization work in Parallel tests?
When multiple ad group variants run in parallel, Apple Ads automatically distributes traffic among them. Since all variants share the same keywords, Apple Ads may favor one variant and allocate most traffic to it. This can lead to biased results.
To prevent this, you can enable Stabilize Traffic.
When Stabilize Traffic is enabled:
- The system monitors each variant’s traffic hourly.
- If one variant starts receiving disproportionately high traffic, it is temporarily paused to let others catch up.
- The system keeps the gap in traffic share between the highest-traffic and lowest-traffic variants within 25%.
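The hourly check described above can be sketched as follows. This is a hypothetical illustration, not the actual implementation: the function name is invented, and the reading of "within 25%" as a 25-percentage-point gap in traffic share is an assumption:

```python
def variants_to_pause(hourly_taps, max_gap=0.25):
    """Hypothetical hourly check: flag variants whose traffic share
    exceeds the lowest variant's share by more than max_gap (25 points),
    so they can be temporarily paused while the others catch up."""
    total = sum(hourly_taps.values())
    shares = {v: taps / total for v, taps in hourly_taps.items()}
    lowest = min(shares.values())
    return [v for v, share in shares.items() if share - lowest > max_gap]

# Variant A has pulled 70% of traffic while C has only 10%, so A is flagged:
print(variants_to_pause({"A": 700, "B": 200, "C": 100}))
```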
To learn more, go to How traffic stabilization works in CPP A/B Testing
Do I lose traffic when running CPP A/B Tests?
Some traffic loss is possible and expected during CPP A/B Testing.
Because the system switches ad group statuses and Apple Ads needs time to reflect these updates, you may see a temporary dip in total traffic.
To learn more, go to Test health and issue management in CPP A/B Testing
Can I create a CPP A/B Test for an ad group that I recently created?
No. To generate statistically valid results, CPP A/B Testing requires historical data from the ad group. The system uses the previous month’s traffic as a benchmark to estimate required duration and traffic volume.
To learn more, go to What are the requirements and limits for CPP A/B Testing
How does the system ensure equal exposure for each variant in a Switch test?
The system analyzes traffic trends from the past month, including high and low traffic days, and designs a switching schedule to balance exposure.
- It adjusts switch timings so each variant experiences a similar mix of traffic conditions.
- When traffic is stable, switching may occur more frequently (for example, hourly or daily). This can reduce test duration while maintaining fairness.
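One simple way to picture balanced exposure is a round-robin rotation across switch periods, so no variant is stuck with only weekends or only weekdays. This is a minimal sketch under that assumption; the real scheduler also weighs past traffic trends, and the `switch_schedule` helper is invented for illustration:

```python
from itertools import cycle

def switch_schedule(days, variants):
    """Illustrative round-robin: rotate variants across consecutive
    switch periods so each one sees a similar mix of traffic conditions."""
    rotation = cycle(variants)
    return [(day, next(rotation)) for day in days]

week = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
print(switch_schedule(week, ["A", "B"]))
```

Over several weeks of rotation, each variant accumulates a comparable share of both high- and low-traffic days.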
To learn more, go to How are switch periods chosen in CPP A/B Testing
Why is 90% confidence commonly used?
A 90% confidence level is a practical balance between reliability and runtime: higher confidence typically requires much more traffic and a longer test.
To learn more, go to Statistical background of CPP A/B Testing
What’s the minimum amount of traffic needed for a test to be statistically valid?
There is no fixed number.
CPP A/B Testing calculates required duration and traffic volume based on:
- Past traffic levels of the selected ad group
- Number of variants
- Desired confidence level
- Desired precision
- Traffic fluctuations
The goal is to ensure each variant receives enough exposure to detect meaningful performance differences and reach statistically reliable results.
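The factors above combine into a duration estimate roughly like the sketch below. This is an assumption-laden illustration, not the system's actual calculation: `estimated_days` is a made-up helper, and the per-variant tap requirement would in practice come from the confidence, precision, and fluctuation inputs:

```python
import math

def estimated_days(avg_daily_taps, variant_count, taps_per_variant):
    """Rough duration estimate: total taps needed across all variants
    divided by the ad group's average daily taps (e.g., last month's)."""
    return math.ceil(taps_per_variant * variant_count / avg_daily_taps)

# An ad group averaging 400 taps/day, testing 2 variants that each
# need ~1,300 taps for the chosen confidence and precision:
print(estimated_days(avg_daily_taps=400, variant_count=2, taps_per_variant=1300))
```

Adding a third variant or picking a lower-traffic ad group lengthens the estimate, which is why those levers also appear in the "How do I shorten the test duration?" answer below.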
To learn more, go to What are the requirements and limits for CPP A/B Testing
What can negatively affect test health?
The following actions can negatively affect test health:
- Bid changes
- Budget changes
- Status changes of ad groups, campaigns, and keywords
For healthier results, keep these stable during the test:
- Custom product page assignments and screenshots
- Promo text
To learn more, go to Test health and issue management in CPP A/B Testing
What is the difference between an ad switch and an ad group switch?
Ad switch:
- Rotates ads in one ad group
- Simpler operational model
- Can keep strategies active if needed
Ad group switch:
- Rotates duplicate ad groups
- Stronger isolation
- Better fit when external influence is a major concern
To learn more, go to Ad Switch vs Ad Group Switch in CPP A/B Testing
How do I shorten the test duration?
You can shorten the test duration by:
- Relaxing desired precision (for example, from 1% to 3–5%)
- Decreasing confidence level (lower confidence requires less data and can shorten the test duration, but slightly reduces statistical certainty)
- Selecting higher-traffic ad groups
- Testing fewer variants
- Choosing a longer switch interval (Switch tests)
- Keeping traffic stable
To learn more, go to Test health and issue management in CPP A/B Testing
What should I do if the test ends with no significant difference between variants?
If no variant clearly outperforms the others, you can still review metrics like impressions, CR, and TTR to choose the most promising option.
A lack of a significant difference indicates that the variants are likely to perform similarly over time. The test doesn’t guarantee one variant will outperform others; it simply ensures that, given your selected precision and confidence level, the results are statistically sound and comparable.
To learn more, go to How do you monitor and interpret tests in CPP A/B Testing
Related links
- About CPP A/B Testing
- What are the requirements and limits for CPP A/B Testing
- Test setup options in CPP A/B Testing
- Test creation in CPP A/B Testing
- Test duration and switching logic in CPP A/B Testing
- How do you monitor and interpret tests in CPP A/B Testing
- Test health and issue management in CPP A/B Testing
Need more help?
If you have further questions about the process, contact your dedicated Customer Success Manager or reach out to the support team via live chat.