Split Test Pro
Intermediate · 5 min read

Segmenting Results

Filter your experiment results by device type to see how variants performed across desktop, tablet, and mobile — and what to do when the segments tell different stories than the aggregate.

The aggregate result of an experiment can hide important patterns. A flat overall result might mask a strong mobile win and a desktop loss that cancel each other out. Segmenting results — looking at the data filtered to subgroups — is how you catch those patterns.

This guide covers what segmentation Split Test Pro offers today and how to use it.

What’s Available

Split Test Pro currently supports device-type segmentation on the Results dashboard. The three segments are:

  • Desktop — visitors whose User-Agent identifies them as a desktop browser.
  • Tablet — iPads, Android tablets, and similar.
  • Mobile — phones.

You can view results filtered to any one segment, or compare all three side-by-side.

How Device Segmentation Works

Device type is detected at variant-assignment time by parsing the visitor’s User-Agent string. The classification logic:

  • Mobile — User-Agent matches phone patterns (iPhone, Android Mobile, BlackBerry, Opera Mini, Windows Phone).
  • Tablet — User-Agent matches iPad, Android tablet (Android without “Mobile”), Windows ARM, or viewport width 600–1024px.
  • Desktop — everything else.

The classification is per-session (recorded once when the visitor is bucketed) and feeds the device-segment cards on the Results page.
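The decision order above can be sketched in a few lines. This is an illustrative TypeScript sketch, not Split Test Pro's actual source; the regex patterns are assumptions, and it omits the viewport-width fallback for tablets.

```typescript
type Device = "mobile" | "tablet" | "desktop";

// Same decision order as described above: phones first, then tablets,
// else desktop. Patterns are illustrative, not the product's exact rules.
function classifyDevice(userAgent: string): Device {
  const phone = /iPhone|Android.+Mobile|BlackBerry|Opera Mini|Windows Phone/i;
  const tablet = /iPad|Android(?!.*Mobile)|Windows.+ARM/i;
  if (phone.test(userAgent)) return "mobile";
  if (tablet.test(userAgent)) return "tablet";
  return "desktop"; // everything else
}
```

Note the order matters: an Android phone UA contains both "Android" and "Mobile", so the phone check has to run before the tablet check catches the bare "Android" token.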

Reading Device Segment Cards

Below the main variant comparison table, you’ll see one card per device segment showing the same per-variant breakdown — sessions, conversions, conversion rate, probability — but filtered to that segment.

What to look for:

  • Consistent lift across all three segments — the cleanest result. The variant works for everyone. Easy decision.
  • Big lift on one segment, flat on others — partial win. Consider applying the variant only to the device class where it works (using device targeting on a re-run).
  • Mixed direction (Variant B wins on mobile, loses on desktop) — the change has different effects on different devices. The aggregate result averages them out and may hide a real opportunity (or risk). Investigate further.
  • One segment with very few sessions — its probability is fragile. Don’t read too much into a “winner” with 50 mobile sessions.
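To see why a small segment is fragile, compare the uncertainty on its conversion rate at the same observed rate. A rough sketch using the normal approximation to a binomial proportion (Split Test Pro's own probability math may differ):

```typescript
// 95% confidence interval on a conversion rate (normal approximation).
// Illustrative only; not the product's actual statistics.
function ci95(conversions: number, sessions: number): [number, number] {
  const p = conversions / sessions;
  const half = 1.96 * Math.sqrt((p * (1 - p)) / sessions);
  return [Math.max(0, p - half), Math.min(1, p + half)];
}
```

At the same 10% observed rate, `ci95(5, 50)` spans roughly 1.7%–18.3%, while `ci95(500, 5000)` spans roughly 9.2%–10.8% — ten times tighter. The small segment's "winner" could plausibly be anywhere in that wide band.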

Segment-First vs Aggregate-First Decision Making

There are two patterns:

Aggregate-first: Look at the overall result. If it’s significant, ship it. Use device segments only as a sanity check that you’re not shipping a regression on one device.

Segment-first: Treat the segments as primary. Decide per-segment whether to ship. Useful when you know your audience splits unevenly across devices and you’re willing to ship variant-by-segment.

Most teams default to aggregate-first. Segment-first is appropriate when:

  • One device type drives a disproportionate share of revenue.
  • The variant change is fundamentally different on mobile (e.g., a sticky bar that only renders below 768px width).
  • You’re running on a site where mobile and desktop visitors are functionally different audiences (B2B desktop researchers vs B2C mobile shoppers).

Combining Device Segmentation With Device Targeting

Don’t confuse the two:

  • Device targeting (set on the experiment) — filters who runs the experiment to a chosen device class. Restricts the audience.
  • Device segmentation (on the Results page) — filters which results you view. Doesn’t change the audience.

If you ran an experiment on all devices and the segment view shows a strong mobile-only win, the right next step is often to:

  1. Complete the original experiment.
  2. Apply the change to mobile only (CSS @media query in your theme, or a new device-targeted experiment).
  3. Optionally run the inverse on desktop to confirm the change there is neutral or negative.
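For step 2, a CSS-only change can ship behind a media query in your theme. A hypothetical fragment, assuming the winning variant made an announcement bar sticky (the selector and breakpoint are illustrative):

```css
/* Apply the winning variant's change only below 768px. */
@media (max-width: 767px) {
  .announcement-bar {
    position: sticky;
    top: 0;
    z-index: 100;
  }
}
```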

Where to Go Beyond Device

For segmentation Split Test Pro doesn’t natively offer:

  • Geographic — cross-reference your experiment data with GA4 or your analytics tool. The variant assignment is recorded as a custom event in GA4 (see Google Analytics 4), so you can build a GA4 exploration that segments by country.
  • Traffic source — same: use GA4’s source/medium dimension.
  • New vs returning — same: use GA4’s user-type dimension.
  • Logged-in vs anonymous — fire a custom event from your site that tags the visitor’s auth state, then segment in your analytics tool.

The pattern: Split Test Pro owns the experiment + variant + conversion data. Your analytics tool owns the visitor segmentation. Joining them happens in your analytics tool.
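For the logged-in vs anonymous case, the custom event can be as small as one `gtag` call. A hedged sketch — the event name, parameter name, and `window.customerLoggedIn` flag are all illustrative, not a Split Test Pro or GA4 reserved name:

```typescript
// Build a GA4 custom event payload tagging the visitor's auth state.
// Names here are illustrative; pick your own and keep them consistent.
function buildAuthStateEvent(isLoggedIn: boolean) {
  return {
    name: "auth_state",
    params: { auth_state: isLoggedIn ? "logged_in" : "anonymous" },
  };
}

// On your site, fire it once per session, e.g.:
//   const evt = buildAuthStateEvent(Boolean(window.customerLoggedIn));
//   gtag("event", evt.name, evt.params);
```

To segment on it in a GA4 exploration, you'll also need to register the parameter as a custom dimension in the GA4 admin.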

What’s Missing (Today)

A few segmentation features that don’t exist yet:

  • Exposed-only vs all-sessions toggle — there’s no way today to filter results to “only sessions that actually saw the variant render.” All sessions assigned to a variant are counted. For most CSS-based tests this doesn’t matter (assignment = exposure). Tests with event activation don’t need the toggle at all: activation is what records the page-view, so non-activated sessions are already excluded.
  • Custom segment builder — no UI to define a segment by URL pattern, UTM, or visitor attribute. Use targeting at the experiment level instead.

When these ship, this doc will be updated.

Common Mistakes

  • Reading a tiny segment as conclusive. A 100-session mobile segment at 95% probability does not carry the same weight as a 5,000-session desktop segment at 95% probability.
  • Ignoring the segments altogether. Aggregate-only thinking misses segment-specific wins and losses. Even a quick scan of the device cards before declaring a winner is worth it.
  • Shipping per-segment without re-running. If you decide to ship a change to mobile only based on segment-level data, the cleanest move is to confirm with a follow-up device-targeted experiment. Segment data from a broad-targeted test is suggestive; a re-run is conclusive.

Next Steps

Ready to start testing?

Install Split Test Pro and run your first experiment today.

Install on Shopify