
Pre-Launch QA

What to check before you start an experiment — the manual reviews that actually matter, what the launch checklist surfaces, and when to use the AI Review.

When you click Start Experiment, Split Test Pro shows a Pre-Launch Checklist before going live. The checklist is intentionally short: most of what you should verify happens before you reach the start button. This guide covers the in-app checklist, what it actually checks, and the manual review steps that catch the most common mistakes.

What the In-App Checklist Does Today

Click Start Experiment and you’ll see a Pre-Launch Checklist banner with:

  • A variant rendering reminder — a warning row that prompts you to manually verify your variant changes appear correctly on the target page. Automated visual diffing isn’t part of the launch flow, so this nudge exists to catch the case where your CSS or JS doesn’t actually do what you intended.
  • An optional Get AI Review button — sends your experiment configuration to Claude and returns a short critique. Worth using on important launches.
  • A Start Test Anyway button — proceeds to start the experiment. The checklist is advisory, not blocking. You can launch even with the warning.

That’s it. The checklist isn’t trying to enforce a long QA process; it’s trying to prevent the single most common cause of wasted experiment runtime: shipping a variant that doesn’t actually render the change.

The Manual Pre-Launch Review

The checklist is short because the meaningful verification happens before you click Start. Run through this list yourself:

1. Confirm targeting is right

Open your Targeting tab and check:

  • The targeting rules match the URL pattern you intended (a typo’d path means zero traffic; a quick Console check is sketched after this list).
  • The Preview URL banner says “Match” — if it’s red, fix the targeting before you launch.
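If you want a second check beyond the Preview banner, you can test a pattern against the live URL in the DevTools Console. A minimal sketch, assuming glob-style patterns where * matches any characters (Split Test Pro’s actual matching rules may differ, and pathMatches is a hypothetical helper, not part of the product):

    // Hypothetical helper: tests a glob-style pattern against a path.
    // Assumes "*" is the only wildcard; adjust if your rules use another syntax.
    function pathMatches(pattern, path) {
      const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
      return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$").test(path);
    }

    pathMatches("/products/*", window.location.pathname);  // true on any product page
    pathMatches("/prodcuts/*", window.location.pathname);  // false: the typo’d path never matches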

2. Verify each variant renders

The most important step. For each non-control variant:

  1. Open the page that matches your targeting in a new browser tab.
  2. Open DevTools (F12 or right-click → Inspect).
  3. Paste your variant’s CSS into a new rule in the Styles panel (via the + button), or paste the JS into the Console.
  4. Confirm the change appears on both desktop and mobile viewports.

If your CSS doesn’t apply because of theme specificity, fix it now — see CSS Variants for the specificity escape hatches. The variant rendering warning on the in-app checklist is a reminder to do exactly this.
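If you’d rather preview the CSS straight from the Console than click through the Styles panel, you can inject it as a throwaway style tag. A minimal sketch; the .add-to-cart-btn selector and rule are placeholders for your actual variant CSS:

    // Inject the variant CSS as a temporary <style> tag for a live preview.
    const style = document.createElement("style");
    style.textContent = ".add-to-cart-btn { background: #e63946 !important; }";  // your variant CSS here
    document.head.appendChild(style);

    // If nothing changes visually, confirm the selector matches anything at all:
    document.querySelectorAll(".add-to-cart-btn").length;  // 0 means the selector is wrong

Call style.remove() to undo the preview when you’re done.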

3. Pick the right primary metric

The metric you select drives every “winner” calculation. Check:

  • The metric is wired up and producing data in your workspace today (look at recent sessions on your dashboard).
  • It’s tied to the actual outcome you care about — testing button colors but tracking pageviews as your primary metric is a mismatch.
  • For Shopify: built-in goals (Add to Cart, Checkout Started, Purchase) are pre-wired. For HTML: confirm your custom event is firing in the browser console (a listener sketch follows this list).
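One way to watch for the event in the Console, assuming your install dispatches it as a DOM CustomEvent (an assumption about your setup; if yours calls a tracking function instead, set a breakpoint on that function in DevTools). The event name newsletter_signup is a placeholder:

    // Placeholder event name: substitute the custom event your goal actually uses.
    document.addEventListener("newsletter_signup", (e) => {
      console.log("custom event fired:", e.type, e.detail);
    });
    // Now perform the action (submit the form, click the button) and confirm
    // the log line appears before you launch.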

4. Sanity-check traffic and runtime

How long is this likely to take? A rough heuristic: at 500 sessions per variant per day, a typical 10% relative-improvement test reaches a 95% probability of a winner in 5–10 days. If you’re getting nowhere near 500 sessions/day on the targeted page, reset expectations: this might be a 4-week experiment.
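To turn that heuristic into numbers for your own page, a standard frequentist sample-size approximation works as a back-of-the-envelope stand-in. Split Test Pro’s in-app probability is computed differently, so treat this sketch as an order-of-magnitude estimate only:

    // Classic two-proportion sample-size approximation:
    // alpha = 0.05 two-sided (z = 1.96), 80% power (z = 0.84).
    function estimateDays({ baselineRate, relativeLift, sessionsPerVariantPerDay }) {
      const p1 = baselineRate;
      const p2 = baselineRate * (1 + relativeLift);
      const pBar = (p1 + p2) / 2;
      const z = 1.96 + 0.84;
      const nPerVariant = (2 * z * z * pBar * (1 - pBar)) / (p2 - p1) ** 2;
      return Math.ceil(nPerVariant / sessionsPerVariantPerDay);
    }

    // High-frequency metric (30% baseline click-through), 10% lift, 500 sessions/variant/day:
    estimateDays({ baselineRate: 0.30, relativeLift: 0.10, sessionsPerVariantPerDay: 500 });  // ≈ 8 days

    // Low-frequency metric (3% baseline purchase rate), same lift and traffic:
    estimateDays({ baselineRate: 0.03, relativeLift: 0.10, sessionsPerVariantPerDay: 500 });  // ≈ 107 days

The second case is why the options below exist: on a rare metric, even healthy traffic can mean months of runtime.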

If your traffic is too thin to ever reach significance in a reasonable window, either:

  • Broaden the targeting (e.g., test on /products/* instead of one product page).
  • Pick a more impactful change so the effect size is larger.
  • Switch the primary metric to one that fires more often (e.g., add-to-cart instead of purchase).

5. Check for conflicts

If you’re already running other experiments, look at the conflict warning: running two tests on the same DOM region at the same time creates interaction effects that contaminate both. If one test changes a button’s color while another changes its copy, neither experiment can cleanly attribute a lift to its own change.

When to Use the AI Review

The Get AI Review button is most useful when:

  • This is your first experiment and you want a sanity check.
  • The change is risky or large (homepage hero, checkout flow).
  • You’re unsure if your hypothesis is well-formed.
  • You want a second opinion before exposing real traffic.

It’s less useful for routine tests where you’ve already done the manual review above.

After Launch

Once you click Start Test Anyway, the experiment goes live immediately. Visitors who match your targeting begin getting bucketed on their next pageview. Open the Results tab; you should see sessions accumulating within minutes if the targeted page has any traffic. If sessions are stuck at zero after an hour on a normally trafficked page, the variant script isn’t activating: something in your install or targeting is wrong.
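Two quick Console checks if that happens. Both make assumptions about your install, so adjust them to match your setup; in particular, the "splittestpro" substring is a guess at the script URL, not a documented value:

    // 1. Is the snippet on the page at all? Substitute whatever script URL
    //    your install instructions actually gave you.
    [...document.scripts].some((s) => s.src.includes("splittestpro"));

    // 2. Does the live URL actually match your targeting rule? Compare this
    //    against the pattern on your Targeting tab.
    window.location.pathname;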

Next Steps

Ready to start testing?

Install Split Test Pro and run your first experiment today.

Install on Shopify