Continuous Metrics
When to track a numeric value alongside a conversion event — revenue, AOV, time spent — and how Split Test Pro switches from binomial to Normal-distribution modeling under the hood.
A binary metric measures whether something happened — a click, a signup, a purchase. A continuous metric measures how much — revenue per session, average order value, dollars per signup. Continuous metrics give you a richer signal but require slightly different statistical handling. This guide covers when to use them and what changes when you do.
Binary vs Continuous
| Question | Metric type |
|---|---|
| “Did the visitor convert?” | Binary — Beta-Binomial model |
| “How much revenue did the visitor produce?” | Continuous — Normal model |
Both can run on the same goal at the same time. If you fire a conversion event with a value, Split Test Pro records it as a binomial count (the visitor converted) AND as a continuous data point (the visitor produced $X). You’ll see both modes reflected in the Results dashboard.
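Conceptually, a single event fire with a value feeds both models at once. A minimal sketch of that dual recording (the recordEvent function and the stats shape are illustrative, not Split Test Pro internals):

```js
// Illustrative sketch of dual recording; not Split Test Pro internals.
function recordEvent(goalStats, event) {
  goalStats.conversions += 1;            // binary: the visitor converted
  if (typeof event.value === "number") { // continuous: only events carrying a value
    goalStats.values.push(event.value);
  }
  return goalStats;
}

const stats = { conversions: 0, values: [] };
recordEvent(stats, { name: "purchase", value: 49.0 });
recordEvent(stats, { name: "purchase" }); // no value: counts as binary only
// stats is now { conversions: 2, values: [49] }
```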
How to Trigger Continuous Mode
Continuous mode is opt-in per event fire, not per goal definition. To produce continuous data for a goal, pass a value (or amount — they’re aliases) when firing the event:
```js
window.SplitTestPro.trackConversion("purchase", {
  value: 49.0,
  currency: "USD",
});
```
Without a value, the event is recorded as a binary conversion only.
You don’t need to declare anything in the goal config — the value field on the event is what flips the engine into continuous mode for that data point. A goal that sometimes has value and sometimes doesn’t will produce both binary and continuous results, with the continuous one based on the subset of events that included a value.
What Changes Statistically
The Bayesian engine handles the two cases differently. (See Bayesian Stats Explained for the full treatment.)
Binary metric
- Models each variant’s conversion rate as a Beta distribution with a uniform Beta(1, 1) prior.
- Probability to be best is computed via Monte Carlo sampling across all variant distributions.
- Credible interval is the central 95% of the posterior conversion rate.
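The binary model above can be sketched in a few lines of JavaScript. This is an illustrative simulation of Beta posteriors plus Monte Carlo probability-to-be-best, assuming the uniform Beta(1, 1) prior described here; it is not the engine's actual code:

```js
// Standard normal via Box-Muller.
function randNormal() {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Gamma sampler (Marsaglia-Tsang); shapes here are always >= 1
// because the Beta(1, 1) prior adds 1 to each count.
function randGamma(shape) {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x, v;
    do { x = randNormal(); v = 1 + c * x; } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

// A Beta draw is the ratio of two Gamma draws.
function randBeta(a, b) {
  const x = randGamma(a);
  return x / (x + randGamma(b));
}

// variants: [{ conversions, trials }]; posterior is Beta(1 + conv, 1 + trials - conv).
function probToBeBest(variants, draws = 20000) {
  const wins = variants.map(() => 0);
  for (let i = 0; i < draws; i++) {
    let best = 0, bestRate = -1;
    variants.forEach((v, idx) => {
      const rate = randBeta(1 + v.conversions, 1 + v.trials - v.conversions);
      if (rate > bestRate) { bestRate = rate; best = idx; }
    });
    wins[best] += 1;
  }
  return wins.map((w) => w / draws);
}

const probs = probToBeBest([
  { conversions: 10, trials: 200 }, // control: 5% conversion
  { conversions: 50, trials: 200 }, // variant B: 25% conversion
]);
// probs[1] will be very close to 1 for a separation this large
```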
Continuous metric
- Models each variant’s mean value as a Normal distribution parameterized by the sample mean and standard error.
- Probability to be best is computed the same way (Monte Carlo).
- Credible interval represents the range of plausible values for the per-visitor mean (e.g., “we’re 95% sure the average revenue per session for Variant B is between $1.20 and $1.45”).
The continuous credible interval is reported in the Results view as a percent improvement vs control, e.g., “+8% to +18%.”
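The continuous side can be sketched the same way. This simulation assumes the Normal(sample mean, standard error) posterior described above and reports lift vs control as a percent range; function names are illustrative:

```js
// Normal draw via Box-Muller.
function randNormal(mean, sd) {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Reduce a variant's raw per-visitor values to posterior parameters.
function summarize(values) {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  return { mean, se: Math.sqrt(variance / n) };
}

// Sample both posteriors; report probability and a 95% lift interval.
function liftVsControl(control, variant, draws = 20000) {
  const lifts = [];
  let wins = 0;
  for (let i = 0; i < draws; i++) {
    const c = randNormal(control.mean, control.se);
    const v = randNormal(variant.mean, variant.se);
    if (v > c) wins += 1;
    lifts.push((v - c) / c); // relative lift for this draw
  }
  lifts.sort((a, b) => a - b);
  return {
    probBeatsControl: wins / draws,
    low: lifts[Math.floor(0.025 * draws)],  // lower bound of 95% interval
    high: lifts[Math.floor(0.975 * draws)], // upper bound
  };
}
```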
Reading Continuous Results
Continuous metrics show up on the experiment’s Results page in a Custom Metrics card (HTML) or in the built-in revenue panels (Shopify — Revenue per Session, AOV, Revenue per Purchaser).
For each continuous metric, you’ll see:
- Mean — the per-session/per-visitor average for that variant.
- Total — the sum across all events.
- Trials — how many events contributed (i.e., events that actually had a value).
- Probability — the probability this variant beats control on the continuous metric.
- Credible interval — the range of plausible lift, expressed as a percent.
When to Use Continuous Metrics
Use continuous when:
- Revenue varies meaningfully per conversion — testing a free-shipping threshold that might lift conversion rate but lower AOV. Binary alone misses this.
- Engagement is the goal, not just a yes/no — testing a new content layout where you care about how long visitors stay or how many items they view.
- You’re comparing high-margin vs low-margin outcomes — a discount variant that converts more sessions but at lower revenue per conversion.
Don’t use continuous when:
- The action is binary by nature — newsletter signup, account creation. There’s nothing to count beyond the binary “did it happen.”
- Values are noisy and don’t correlate with the test variable — testing button color and tracking the value of whatever they happened to buy. The variant doesn’t influence the order size; you’ll get noise.
Continuous Metrics on Shopify
Shopify workspaces get Revenue per Session and Average Order Value as built-in continuous metrics — no setup needed. The Web Pixel reads order totals and feeds them in automatically. Both are selectable as primary metrics.
If you need a custom continuous goal on Shopify (e.g., revenue from a specific product line), the path today is to fire a custom event with value from theme JavaScript — same pattern as HTML.
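A sketch of that pattern, assuming a hypothetical order object is available in your theme code (the lineItems shape and the "premium_revenue" goal name are illustrative):

```js
// Hypothetical helper: sum revenue for one product line from an order
// object. The shape (lineItems, productType, price, quantity) is assumed
// for illustration; adapt it to the data your theme exposes.
function productLineRevenue(order, productType) {
  return order.lineItems
    .filter((item) => item.productType === productType)
    .reduce((sum, item) => sum + item.price * item.quantity, 0);
}

// In theme JavaScript, fire the custom event only when the line sold:
//   const value = productLineRevenue(order, "Premium");
//   if (value > 0) {
//     window.SplitTestPro.trackConversion("premium_revenue", { value });
//   }
```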
Trade-Offs
Continuous metrics are richer but they’re also more variable — a few outlier high-value conversions can swing the result. The Normal model handles this in principle (variance is part of the parameters), but in practice:
- Continuous metrics often need more data to reach the same probability-to-be-best as a binary metric on the same goal.
- A handful of huge orders early can give Variant B a misleadingly large lead. Run for at least a full week to wash this out.
- Outliers — like a $5,000 order during a B2B test — can dominate. Consider whether to clip or trim values before reporting.
The Bayesian engine doesn’t currently auto-trim outliers. If you have a known anomaly, the cleanest move is to wait until the cumulative data shifts the average back to representative values.
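If you do decide to clip before reporting, a minimal winsorizing sketch (the $500 cap is an illustrative choice, not a product default):

```js
// Cap each value so a single huge order can't dominate the mean.
function winsorize(values, cap) {
  return values.map((v) => Math.min(v, cap));
}

const raw = [40, 55, 62, 48, 5000]; // one B2B outlier
const clipped = winsorize(raw, 500);
// mean(raw) = 1041, mean(clipped) = 141; the outlier no longer dominates
```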
Common Mistakes
- Reporting on a continuous metric with very few trials. A continuous mean with N=8 isn’t telling you anything reliable. Wait for at least 100 trials per variant before drawing conclusions.
- Forgetting to pass value. If you wanted continuous tracking but called trackConversion("purchase") without a value, you only get binary. Fix it forward and look at the data from the fixed point onward.
- Using continuous as the primary metric on a low-trial goal. Pick a binary metric as primary in this case; treat the continuous metric as a secondary signal.
Next Steps
- Understand the Bayesian model that powers continuous metrics: Bayesian Stats Explained.
- Set up the events that produce continuous data: Custom Events.
- See how Shopify’s Revenue per Session works under the hood: Shopify Funnel Tracking.
Ready to start testing?
Install Split Test Pro and run your first experiment today.