
Glossary

A quick-reference list of A/B testing terms used throughout Split Test Pro and its documentation — variant, exposure, lift, credible interval, probability to be best, and more.

A/B testing has its own vocabulary, and some terms differ between tools. This glossary collects the terms used across the Split Test Pro documentation.

A

Activation

The moment an experiment “fires” for a visitor, applying the variant’s CSS or JS and recording the page-view event. Usually happens on page load; can be deferred via event activation.

A/B test

An experiment comparing two versions of a page (a control and a variant) to determine which performs better against a defined goal.

A/B/n test

An experiment with one control and multiple variants (B, C, D, …). Same statistical model as A/B; just more variants.

AOV (Average Order Value)

Total revenue divided by number of orders. A continuous metric available on Shopify experiments.

Archived

An experiment state for hiding completed or abandoned experiments from the main list. Data is preserved; the experiment just doesn’t appear in active views.

Assignment

The act of bucketing a visitor into a specific variant. Deterministic — based on a hash of the visitor ID and experiment ID — so the same visitor always gets the same variant.
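
Split Test Pro's exact hashing scheme isn't documented here, but the idea can be sketched as follows (the function name and use of SHA-256 are illustrative assumptions, not the actual implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment_id: str, variants: list[str]) -> str:
    """Deterministic bucketing sketch: hash visitor ID + experiment ID,
    then map the hash into a variant slot. Same inputs -> same variant."""
    digest = hashlib.sha256(f"{visitor_id}:{experiment_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # even split across variants
    return variants[bucket]
```

Because the hash depends on both IDs, a visitor's bucket is stable within one experiment but independent across experiments.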

B

Bayesian statistics

The statistical approach Split Test Pro uses. Computes posterior probabilities (e.g., “73% chance Variant B is best”) rather than frequentist p-values. See Bayesian Stats Explained.

Beta distribution

The probability distribution used to model conversion rates in the binomial case. Each variant’s conversion rate is treated as a Beta(α, β) where α = conversions + 1 and β = (sessions − conversions) + 1.
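
Using the α and β definitions above (which follow from a uniform Beta(1, 1) prior), posterior draws for a variant's conversion rate can be sketched with the standard library (function name is illustrative):

```python
import random

def posterior_samples(conversions: int, sessions: int, n: int = 10_000) -> list[float]:
    """Draw n samples from the Beta posterior of a variant's conversion rate.
    alpha = conversions + 1, beta = (sessions - conversions) + 1."""
    alpha = conversions + 1
    beta = (sessions - conversions) + 1
    return [random.betavariate(alpha, beta) for _ in range(n)]
```

For example, 50 conversions in 1,000 sessions yields samples concentrated near a 5% conversion rate.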

C

Conversion

The action that defines success for an experiment — a purchase, a signup, a click, a custom event. Each experiment has at least one conversion goal.

Conversion rate

Conversions divided by sessions, expressed as a percent. The most common primary metric.

Control

The variant a visitor sees when the experiment changes nothing. Usually labeled Variant A. The baseline against which other variants are compared.

Credible interval

A range with a stated probability of containing the true value of a parameter. “We’re 95% confident the true conversion rate is between 4.2% and 5.8%” is a credible-interval statement. Different from frequentist confidence intervals — see Bayesian Stats Explained.
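
A 95% credible interval can be read off the posterior by taking the 2.5th and 97.5th percentiles of posterior samples. A sketch under the same Beta(1, 1)-prior assumption as above (function name is illustrative):

```python
import random

def credible_interval(conversions: int, sessions: int,
                      level: float = 0.95, n: int = 20_000) -> tuple[float, float]:
    """Equal-tailed credible interval for a conversion rate, via Beta posterior samples."""
    alpha = conversions + 1
    beta = (sessions - conversions) + 1
    samples = sorted(random.betavariate(alpha, beta) for _ in range(n))
    lo = samples[int(n * (1 - level) / 2)]       # 2.5th percentile for level=0.95
    hi = samples[int(n * (1 + level) / 2) - 1]   # 97.5th percentile
    return lo, hi
```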

Custom event

A merchant-defined conversion event, fired from your site via SplitTestPro.trackConversion(eventKey, opts). See Custom Events.

D

Declare winner

The action of formally ending an experiment with a chosen winning variant. Available when the leading variant crosses 95% probability to be best on the primary metric. See Declaring a Winner.

Device targeting

Restricting an experiment to one or more device classes (mobile, tablet, desktop). Set per-experiment, applies before variant assignment. See Device Targeting.

E

Event activation

Configuring an experiment to wait for a specific custom event before activating. Useful when the change only matters after a user interaction. See Event Activation.

Experiment

A single A/B test, with a name, hypothesis, targeting rules, variants, and conversion goals.

Exposure

A visitor session in which the variant was actually applied. Distinct from assignment — a visitor can be assigned to a variant that never activates (e.g., with deferred activation that never fires).

F

Funnel

The sequence of steps a visitor takes from arriving on the site to converting. On Shopify, the Web Pixel auto-tracks the full purchase funnel — see Shopify Funnel Tracking.

H

Hypothesis

A testable statement of belief that motivates an experiment. Formatted as: “We believe [change] will [increase/decrease] [metric] because [reason].” See Testing Methodology.

I

ICE (Impact, Confidence, Ease)

A simple framework for prioritizing experiments. Score each idea 1–3 on each axis; test the highest sums first.

L

Lift

The percent improvement of a variant over the control on the primary metric. A 5% lift means the variant performed 5% better than control.
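
The arithmetic is simply the relative difference, expressed as a percent:

```python
def lift(control_rate: float, variant_rate: float) -> float:
    """Percent improvement of the variant over the control."""
    return (variant_rate - control_rate) / control_rate * 100

# A control converting at 4.0% and a variant at 4.2% is a 5% lift.
```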

M

Member

A team role with access to view and edit experiments, but not invite other team members or manage billing. See Team and Invites.

Modeled improvement

The Bayesian engine’s distribution of likely lift values, plotted as a violin/density chart in the Statistical Analysis accordion.

Monte Carlo sampling

The technique used to compute probability to be best — draw a sample from each variant's posterior distribution thousands of times and count how often each variant's sample is the highest.
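
The procedure can be sketched for binary metrics, reusing the Beta posterior described above (function name and signature are illustrative, not Split Test Pro's API):

```python
import random
from collections import Counter

def probability_to_be_best(results: dict[str, tuple[int, int]],
                           n: int = 10_000) -> dict[str, float]:
    """results maps variant name -> (conversions, sessions).
    Sample every variant's Beta posterior n times; count how often each wins."""
    wins = Counter()
    for _ in range(n):
        draws = {
            name: random.betavariate(conv + 1, (sess - conv) + 1)
            for name, (conv, sess) in results.items()
        }
        wins[max(draws, key=draws.get)] += 1
    return {name: wins[name] / n for name in results}
```

The resulting probabilities always sum to 1 across variants.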

N

Normal distribution

The probability distribution used to model continuous metrics (revenue, AOV). Each variant’s mean is treated as Normal(µ, σ) where µ is the sample mean and σ is the standard error.
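
Under that model, posterior draws for a continuous metric's mean can be sketched with the standard library (function name is illustrative; this assumes per-session values are available):

```python
import math
import random
import statistics

def mean_posterior_samples(values: list[float], n: int = 10_000) -> list[float]:
    """Approximate posterior for the mean of a continuous metric:
    Normal(sample mean, standard error of the mean)."""
    mu = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return [random.gauss(mu, se) for _ in range(n)]
```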

O

Owner

A team role with full permissions: experiments, billing, team management, workspace deletion.

P

Peeking

Repeatedly checking experiment results and stopping the moment they look favorable. Inflates false-positive rate. See Common Mistakes.

Posterior

The updated distribution of a parameter after observing data. In Bayesian terms, “prior + data → posterior.”

Pre-launch QA

The checklist that runs when you click Start Experiment, plus an optional AI review. See Pre-Launch QA.

Preview URL

A URL set on each experiment used for screenshot capture and the targeting-match banner. See Screenshots and Preview.

Primary metric

The conversion goal an experiment is judged on. Drives the “probability to be best” calculation and the “Declare Winner” CTA. See Conversion Goals.

Prior

The distribution of belief about a parameter before observing data. Split Test Pro uses Beta(1, 1) (uninformative) for binary metrics.

Probability to be best

The probability that a variant is the best performer across all variants in the experiment, computed via Monte Carlo. The headline number on the Results dashboard.

Probability to beat control

The probability a non-control variant beats the control. Equal to “probability to be best” in two-variant experiments; the two differ once there are three or more variants.

R

Redirect test

An experiment where one or more variants redirect visitors to a different URL instead of inline-modifying the current page. See Redirect Tests.

Revenue per Session

Total revenue divided by sessions. A continuous metric and the default primary metric on Shopify.

S

Session

A single visit by a single visitor, identified by a deterministic visitor ID. The denominator of the conversion rate.

Sample Ratio Mismatch (SRM)

A critical anomaly: the actual traffic split deviates significantly from the configured split. Usually indicates broken assignment logic. See Anomaly Alerts.
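
The document doesn't specify how Split Test Pro tests for SRM; a chi-square goodness-of-fit test is the common approach. A sketch for a two-variant experiment (for 1 degree of freedom, the chi-square survival function reduces to `erfc(sqrt(x/2))`):

```python
import math

def srm_p_value(count_a: int, count_b: int, expected_share_a: float = 0.5) -> float:
    """Chi-square goodness-of-fit test (1 degree of freedom) for a two-variant split.
    A very small p-value suggests the observed split deviates from the configured one."""
    total = count_a + count_b
    expected_a = total * expected_share_a
    expected_b = total * (1 - expected_share_a)
    chi2 = ((count_a - expected_a) ** 2 / expected_a
            + (count_b - expected_b) ** 2 / expected_b)
    return math.erfc(math.sqrt(chi2 / 2))  # survival function, chi-square with 1 df
```

A 5,300/4,700 split on a configured 50/50 allocation yields a tiny p-value and would warrant investigating the assignment logic; a 5,050/4,950 split is well within normal variation.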

Sequential testing

Running experiments one after another on the same surface, rather than in parallel. The right approach when tests would interact. See Running Multiple Experiments.

Significance

The threshold at which we trust the result. Split Test Pro uses 95% probability to be best as its convention.

T

Targeting

The rules that determine which visitors and which pages an experiment applies to — URL targeting plus optional device targeting and event activation.

Theme App Extension

The Shopify component that injects variant CSS and JS into the storefront. See Shopify Theme App Extension.

Traffic allocation

The percentage of visitors each variant receives. Defaults to even (50/50 for two variants); can be adjusted per variant.

V

Variant

One of the versions an experiment compares. Variant A is conventionally the control; B, C, D… are the treatments. Each variant has its own CSS, JS, and/or redirect URL.

Variant types

The three things a variant can change: CSS (visual), JS (behavioral), redirect (full-page substitute). See Variant Types Overview.

W

Web Pixel

The Shopify component that captures funnel events and attributes them to the visitor’s experiment-variant assignment. See Shopify Web Pixel.

Winner

The variant declared best after an experiment ends. The workflow is: leading variant crosses 95% → click “Declare Winner” → apply the change to your theme. See Declaring a Winner.

Workspace

The container for one site or store’s experiments, team, and billing. See Workspaces.

Ready to start testing?

Install Split Test Pro and run your first experiment today.

Install on Shopify