Custom A/B Test Segmentation
Learn how to configure A/B test segmentation using our built-in mechanism or manage it manually for full control.
Each machine learning model deployed to your app includes an associated A/B test split. This allows us to accurately compare performance data between control and treatment groups, ensuring that observed changes are driven by our model rather than by random variation or other product-level adjustments.
We provide two options for configuring the A/B test split:
Built-In A/B Test Mechanism (Recommended)
We recommend using our built-in A/B test mechanism, which automates split assignment, deployment, and metric adjustments on your behalf.
No action is required to opt in to this approach.
Refer to Analytics & Reporting to learn how to report user segmentation groups to your internal analytics system.
Custom A/B Test Segmentation
If your app requires direct control over A/B test segmentation, we support that as well. It takes two simple steps:
Step 1: Segment Your Users
Passing `true` assigns the user to the control group and prevents the SDK from making any decisions: `shouldUpsell` will then always return `true`. Passing `false` allows the SDK to use the machine learning model to optimize conversions, provided a model has already been deployed to the designated flow.
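As a sketch of what Step 1 might look like in practice: the snippet below assumes `setControlMode` is exposed on `ContextManager` (our assumption; adapt to the actual API surface of your SDK version), and the deterministic bucketing helper is purely illustrative. A stable hash of the user ID keeps each user in the same cohort across sessions and devices.

```swift
import Foundation

// Illustrative only: deterministically assign a user to the control group
// based on a stable hash of their user ID, so the assignment never changes
// between sessions or devices.
func isControlUser(userId: String, controlPercent: UInt32) -> Bool {
    // djb2 hash; any stable, well-distributed hash works here.
    var hash: UInt32 = 5381
    for byte in userId.utf8 {
        hash = hash &* 33 &+ UInt32(byte)
    }
    return hash % 100 < controlPercent
}

// Hypothetical call site -- `currentUserId` and the exact SDK method
// name are assumptions for illustration.
let control = isControlUser(userId: currentUserId, controlPercent: 50)
ContextManager.setControlMode(control)
```

Hash-based bucketing is one common approach; a remote-config flag or your own experimentation platform works just as well, as long as the assignment is stable per user.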
Even when using `setControlMode`, keep all context capturing and logging fully operational for both cohorts (control and treatment); this data is critical for training the ML models. The SDK initialization via `ContextManager.applicationDidFinishLaunchingWithOption` and other configuration should also remain unchanged.
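Putting this together, an app delegate might look like the sketch below. Only the identifiers mentioned in this guide (`ContextManager.applicationDidFinishLaunchingWithOption`, `setControlMode`) come from the SDK; everything else, including the `isControlUser()` helper, is a hypothetical placeholder for your own segmentation logic.

```swift
import UIKit

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions:
                         [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // SDK initialization stays exactly as before -- context capturing
        // and logging must keep running for both cohorts.
        ContextManager.applicationDidFinishLaunchingWithOption(launchOptions)

        // Your own segmentation decision (e.g. a stable hash of the user ID
        // or a remote-config flag). `isControlUser()` is a hypothetical helper.
        let isControl = isControlUser()
        ContextManager.setControlMode(isControl)
        return true
    }
}
```

The only addition relative to the built-in mechanism is the single `setControlMode` call; everything else in your integration is unchanged.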
Step 2: Inform Us Of The Segmentation Split
Since your app controls the split, please inform our team of the initial split percentage and notify us of any planned changes, as the split affects our reporting.