Your First Experiment
An end-to-end walkthrough of creating, configuring, and launching your first A/B test in Split Test Pro — from naming the experiment to interpreting the launch readiness check.
This guide walks you through your first experiment end-to-end — picking what to test, configuring targeting and variants, passing the pre-launch checklist, and going live. Plan on about 15 minutes for the full setup.
Before You Start
Pick something specific to test. Good first experiments are simple, single-element changes:
- Add-to-cart or signup button color, size, or label
- Headline copy or font weight
- Trust badges or social proof placement
- Hero image swap
Resist the urge to test multiple changes at once. If your variant changes three things and it wins, you’ll have no idea which change caused the lift.
Step 1: Create the Experiment
Click Create Experiment
From the Experiments view, click Create Experiment in the top right. A wizard opens with four tabs: General, Targeting, Variants, and Results.
Name it and write a hypothesis
In General, give the experiment a clear name like “Product page — orange CTA vs blue control”. Vague names (“Button test”) become useless once you have ten of them.
Add a hypothesis in the description field. The format that works best is:
“We believe [change] will [increase/decrease] [metric] because [reason].”
Example: “We believe an orange CTA button will increase add-to-cart rate because it contrasts more strongly with our white theme than the current blue.”
Step 2: Set Up Targeting
The Targeting tab tells Split Test Pro which pages the experiment runs on.
Choose where the experiment runs
Add a URL targeting rule. The simplest first-experiment setup is:
- URL part: Path
- Operator: Equals
- Value: /products/your-product-slug
This restricts the experiment to one specific product page. You can target broader patterns (like /products/* for all product pages) later — see URL Targeting for the full set of operators.
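If it helps to see what that rule means in practice, here is an illustrative sketch (not Split Test Pro’s actual matching code) of how a Path / Equals rule compares against a visitor’s URL:

// Illustrative only: how a "Path equals" targeting rule conceptually evaluates.
// The real matching logic runs inside Split Test Pro and may differ.
function matchesPathEquals(ruleValue, pageUrl) {
  return new URL(pageUrl).pathname === ruleValue;
}

matchesPathEquals("/products/your-product-slug",
  "https://yourstore.com/products/your-product-slug"); // true
matchesPathEquals("/products/your-product-slug",
  "https://yourstore.com/collections/all");            // false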
Set a Preview URL
Paste the full URL of the page you’re targeting (e.g., https://yourstore.com/products/your-product-slug) into the Preview URL field. The app uses this to:
- Show a banner confirming whether your targeting rules would activate on that page.
- Capture screenshots of each variant before you launch.
If the banner says “Preview URL doesn’t match targeting rules,” fix the targeting before continuing.
Step 3: Add a Variant
Every experiment starts with Variant A (Control) — your unchanged page. You add a Variant B that contains the change you want to test.
Add Variant B
In the Variants tab, click Add Variant. A new variant appears below the control. Open it to edit.
Each variant has three independent fields you can fill in: CSS, JavaScript, and Redirect URL. Use one for a simple test or combine them for a more complex change. For your first experiment, use the CSS field — see Variant Types Overview for when to reach for the others.
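You won’t need the JavaScript or Redirect URL fields for this experiment, but for context, a JavaScript variant is simply a snippet that runs on matching pages. A hypothetical example (the selector here is an assumption; copy a real one from your own theme):

// Illustrative only: a JavaScript variant that relabels the add-to-cart button.
// The selector is an assumption, not something Split Test Pro provides.
const addToCartBtn = document.querySelector('form[action="/cart/add"] button[type="submit"]');
if (addToCartBtn) {
  addToCartBtn.textContent = 'Add to cart and checkout';
}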
Write your CSS
For an orange-CTA test, your variant CSS might look like:
.btn-primary,
form[action="/cart/add"] button[type="submit"] {
background-color: #f5620a !important;
border-color: #f5620a !important;
color: #ffffff !important;
}
Use your browser’s DevTools to find the right selector: open the page, right-click the button you want to change, and copy a stable class or ID. See Selector Cookbook for patterns.
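Before you save, it’s worth confirming the selector actually matches something on the page. One quick check (standard browser console, nothing specific to Split Test Pro) is to paste it into the DevTools console on the target page:

// Run in the DevTools console on the product page you're targeting.
// If this logs 0, the selector matches nothing and the variant CSS will have no visible effect.
document.querySelectorAll(
  '.btn-primary, form[action="/cart/add"] button[type="submit"]'
).length;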
Confirm the traffic split
By default each variant gets an equal share of traffic — 50% / 50% for two variants. Leave it at the default for your first test. You can use uneven splits later for risk-mitigation rollouts (see Traffic Allocation).
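For intuition, the split just sets the odds for each new visitor’s first assignment. A rough sketch (illustrative, not the app’s assignment code):

// Illustrative only: choosing a variant from traffic weights.
// { A: 0.5, B: 0.5 } is the default even split; { A: 0.9, B: 0.1 } would be a cautious rollout.
function pickVariant(weights) {
  const roll = Math.random();
  let cumulative = 0;
  for (const [variant, weight] of Object.entries(weights)) {
    cumulative += weight;
    if (roll < cumulative) return variant;
  }
  return Object.keys(weights)[0]; // guard against floating-point rounding
}

pickVariant({ A: 0.5, B: 0.5 }); // even odds of "A" or "B"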
Step 4: Pick a Primary Metric
Choose what “winning” means
Back on General (or in the Metrics section, depending on platform), select the primary metric the experiment will be judged on.
- Shopify users: built-in goals like Add to Cart, Checkout Started, Purchase, and Revenue per Session are pre-wired and require no setup.
- HTML users: pick from the conversion goals you defined in Settings → Conversion goals, or define a new one. See Conversion Goals.
Whichever metric you pick here is the one the “Declare winner” button will judge the experiment against.
Step 5: Run the Pre-Launch Check
Click Start Experiment
When you click Start Experiment, Split Test Pro shows a Pre-Launch Checklist before going live. The checklist surfaces a reminder to manually verify your variant renders correctly on the target page (since automated visual diffing isn’t part of the launch flow), and an optional Get AI Review button that runs a Claude-powered check of your experiment setup.
The checklist doesn’t block you — even with the warning, you can proceed via Start Test Anyway. The decision to go live is always yours. See Pre-Launch QA for what to look for before you click.
What Happens Next
Once the experiment is live:
- Visitors who match your targeting are randomly assigned to a variant. The assignment is sticky: they’ll keep seeing the same variant on subsequent visits (we use a cookie; see the sketch after this list).
- Sessions and conversions stream into the Results tab in near real time.
- You’ll see the probability to be best update as data accumulates.
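For the curious, here is an illustrative sketch of how sticky assignment can work. The cookie name, lifetime, and logic below are assumptions for illustration, not Split Test Pro’s actual implementation:

// Illustrative only: a sticky 50/50 assignment persisted in a first-party cookie.
// The "stp_<experimentId>" cookie name and 90-day lifetime are hypothetical.
function getStickyVariant(experimentId) {
  const prefix = `stp_${experimentId}=`;
  const existing = document.cookie.split('; ').find((c) => c.startsWith(prefix));
  if (existing) return existing.slice(prefix.length); // returning visitor: same variant as before

  const variant = Math.random() < 0.5 ? 'A' : 'B';    // new visitor: random assignment
  document.cookie = `${prefix}${variant}; path=/; max-age=${60 * 60 * 24 * 90}`;
  return variant;
}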
Next Steps
- Learn how to read the results: Reading Your Results Dashboard.
- Understand what “probability to be best” actually means: Bayesian Stats Explained.
- Decide when it’s safe to call a winner: Declaring a Winner.
Ready to start testing?
Install Split Test Pro and run your first experiment today.