Traffic Allocation
How variant weights work in Split Test Pro: setting splits, when to use uneven allocations, and the practical guidance most teams need.
Traffic allocation is how you split incoming visitors between variants. By default Split Test Pro divides them evenly: for two variants that's 50% / 50%; for three, roughly a third each. You can adjust the split per variant when you have a reason to.
How Weights Are Set
In the Variants tab, each variant has a weight slider. Drag it to set that variant’s share of traffic. The display shows percentages so the math is obvious.
The minimum is two variants (control + one treatment). There’s no enforced maximum, but in practice most A/B tests use two and most multi-arm tests use three or four.
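Under the hood, weighted assignment amounts to picking a point on a 0–100 number line and seeing which variant's cumulative range it falls into. The sketch below illustrates the general technique; it is not Split Test Pro's actual assignment code, and the variant names and counts are hypothetical.

```python
import random

def assign_variant(weights):
    """Pick a variant name according to its traffic share.

    `weights` maps variant name -> percentage; values should sum to 100.
    A minimal sketch of weighted assignment, not product code.
    """
    roll = random.uniform(0, 100)  # random point on a 0-100 number line
    cumulative = 0.0
    for name, share in weights.items():
        cumulative += share
        if roll < cumulative:
            return name
    return name  # guard against float rounding at the top edge

# A 50/50 split: over many visitors, each variant gets roughly half
random.seed(42)
counts = {"Control": 0, "Variant B": 0}
for _ in range(10_000):
    counts[assign_variant({"Control": 50, "Variant B": 50})] += 1
```

With uneven weights (say 90/10), the same logic applies; the 10% variant simply owns a smaller slice of the number line.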
When to Use a 50/50 Split
This is the right default for almost every test. Equal splits give you the fastest path to statistical significance: your result is gated by whichever variant accumulates samples slowest, and with equal weights no variant lags behind.
Use 50/50 when:
- You’re equally curious about both variants.
- You don’t have a strong reason to expect either side to be much worse.
- You want results sooner rather than later.
When to Use an Uneven Split
There are a few legitimate cases where you’d weight the split unevenly:
Risk-mitigated rollout
You’ve made a substantial change and you’re not sure it won’t break something. Set the new variant to 10–20% of traffic. If something goes wrong, the blast radius is small. If it does well, ramp it up by editing the weights.
Control: 90%
Variant B: 10%
The downside: you collect data slowly on the new variant, so you’ll wait longer for significance. That’s the trade.
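The cost of that trade is easy to put in concrete terms with back-of-envelope math. The traffic and sample-size numbers below are hypothetical, chosen only to show the shape of the calculation:

```python
daily_visitors = 1_000   # hypothetical site traffic
target_per_arm = 5_000   # hypothetical per-variant sample size goal

def days_to_target(share_pct):
    """Days until a variant receiving share_pct% of traffic hits the target."""
    return target_per_arm / (daily_visitors * share_pct / 100)

# At 50/50 both arms finish together; at 90/10 the small arm takes 5x longer
even = days_to_target(50)    # each arm of an even split
canary = days_to_target(10)  # the 10% arm of a 90/10 split
```

Here the 10% variant needs 50 days to collect what a 50% variant collects in 10, which is why ramping up once the risk window has passed is the usual move.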
High-cost variant
Your variant requires a paid third-party widget that bills per impression, or sends visitors to an experimental flow you’d rather not over-expose. Cap it at a smaller share.
Multi-arm with a star variant
You have a variant you’re confident will beat control and one or two long-shot variants. Weight the confident one heavier so the experiment can graduate it sooner, and let the long shots accumulate data more slowly:
Control: 25%
Variant B (confident): 50%
Variant C (long shot): 25%
What Doesn’t Help
- Changing weights mid-experiment to “let the winner accumulate faster” is a form of peeking and contaminates your data. Don’t.
- Setting different weights per device. Split Test Pro doesn’t support per-device weights. Use device targeting instead — run separate experiments per device segment if that matters.
- A 1% / 99% canary split for the first 24 hours, then a 50/50 split. That's two different experiments stitched together. Pick one model and run it cleanly.
Visitor Assignment Is Sticky
Whatever split you set, an individual visitor’s assignment is deterministic and sticky. Once they’re bucketed into Variant B, they stay in Variant B for the life of the experiment via a cookie. Changing the weights mid-experiment only affects new visitors.
This is a feature: it means returning visitors see a consistent experience and your conversion rates aren’t being polluted by the same person seeing both variants.
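Deterministic bucketing is commonly implemented by hashing a stable visitor identifier into a fixed point on the weight line, so the same visitor always lands in the same place. The sketch below shows that general technique; it is not Split Test Pro's actual implementation (the product persists the assignment in a cookie), and the identifiers are made up for illustration.

```python
import hashlib

def sticky_assign(visitor_id, experiment_id, weights):
    """Deterministically bucket a visitor into a variant.

    Hashes visitor + experiment into a stable point in [0, 100), then maps
    that point onto cumulative weight ranges. Same inputs, same variant,
    every time -- a sketch of hash-based bucketing, not product code.
    """
    digest = hashlib.sha256(f"{visitor_id}:{experiment_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF * 100  # stable value in [0, 100]
    cumulative = 0.0
    for name, share in weights.items():
        cumulative += share
        if point < cumulative:
            return name
    return name  # rounding guard at the top edge

weights = {"Control": 90, "Variant B": 10}
first = sticky_assign("visitor-123", "exp-1", weights)
```

Note that with pure hashing, reweighting can move existing visitors between buckets; persisting the first assignment (as a cookie does) is what guarantees that weight changes only affect new visitors.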
A Note on Multivariate Tests
The data model supports multivariate experiments (testing several dimensions at once), but the feature is not currently exposed in the UI. For now, every experiment is A/B/n — one control plus one or more treatments, each a distinct variant. If you want to test combinations, run sequential experiments instead.
Next Steps
- Decide what each variant should change: Variant Types Overview.
- Run multiple experiments without traffic conflicts: Running Multiple Experiments.
- Learn what happens after a visitor is assigned: How CSS Variants Work.
Ready to start testing?
Install Split Test Pro and run your first experiment today.