
Frequently Asked Questions

Quick answers to the questions merchants and developers ask most often about Split Test Pro — flicker, performance, multiple experiments, privacy, and what's not supported today. Each answer links to a fuller doc when there's more to say.

Will visitors see flicker?

No. Variant CSS is injected synchronously into <head> before the browser paints, so visitors never see the original styling for a moment before the variant appears. JS variants execute in <body> and may run a fraction of a second after CSS, but well-written JS variants don’t visibly disrupt the page.

The exception: if your variant changes element sizes, you may get layout shift (CLS) as the change applies. See CSS Variants for how to avoid that.

Does this slow my site down?

The script is small (~10KB minified) and loads with the defer attribute, so it doesn't block page render. Typical overhead is 1–3ms for the variant injection step.

For visitors not in any experiment on the current page, the script does a cookie check + API fetch + “no targeted experiments” no-op — usually under 50ms total, none of it blocking visible render.

If you’re running 10+ concurrent experiments and they all target the same page, the cumulative variant injection adds up — but for most setups (1–3 experiments per page), the impact is negligible.

Can I run two experiments on the same page?

Yes, with a caveat. Two experiments on the same page but different DOM regions (e.g., one on the header, one on the footer) work fine in parallel. Two experiments on the same DOM region create interaction effects that contaminate the data.

See Running Multiple Experiments for the full breakdown.

What happens if my JS variant throws an error?

The script doesn’t wrap variant JS in try/catch. If your code throws, it can break subsequent scripts on the page for visitors in that variant.

Always test JS variants thoroughly before launching. Use null-checks before manipulating DOM elements, and wrap risky third-party calls in try/catch. See JavaScript Variants for safety patterns.
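
For example, a defensively written variant might look like the following minimal sketch, in which the selector and the new copy are placeholders for your own change:

  // Defensive JS variant (illustrative; the selector and copy are placeholders).
  // Null-check before touching the DOM, and wrap the whole change in
  // try/catch so a failure can't break scripts that run later.
  try {
    var headline = document.querySelector(".hero h1"); // may be absent on some pages
    if (headline) {
      headline.textContent = "Start your free trial today";
    }
  } catch (err) {
    console.error("Variant failed:", err); // log it instead of letting it propagate
  }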

Will Split Test Pro auto-stop my experiment?

No. The decision to end an experiment is always manual. The “Declare Winner” button becomes available when the leading variant crosses 95% probability to be best, but you have to click it.

This is a deliberate design choice — auto-stop introduces edge cases (what about novelty effects? day-of-week patterns?) that are better handled by a human reading the dashboard. See Declaring a Winner.

Does the app email me when an experiment reaches significance?

No. There are no email notifications for experiment milestones today. Email is used only for one-time-password (OTP) login codes and team invitations. Monitor the dashboard manually for now.

Is there an audit log of experiment changes?

Not currently. There's no per-experiment timeline showing who edited what, when. If you need this, keep a record outside the app (Slack, project management tools) rather than relying on an in-app audit trail.

Can I export results as CSV?

Coming soon. The button exists in the UI but is disabled with a “Coming soon” tooltip. The functionality isn’t wired up yet. As a workaround, you can read the experiment results via the API and process them yourself.
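
Here is a rough sketch of that workaround. The endpoint, auth header, and response shape are hypothetical; check the API docs for the real ones:

  // Hypothetical sketch: fetch results from the API and flatten them to CSV.
  // The URL, auth header, and response shape are placeholders.
  async function exportResultsAsCsv() {
    const res = await fetch(
      "https://api.example.com/v1/experiments/EXPERIMENT_ID/results", // placeholder
      { headers: { Authorization: "Bearer YOUR_API_KEY" } }           // placeholder
    );
    const { variants } = await res.json(); // assumed shape: [{ name, sessions, conversions }]
    return [
      "variant,sessions,conversions",
      ...variants.map((v) => `${v.name},${v.sessions},${v.conversions}`),
    ].join("\n");
  }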

How is GDPR / privacy handled?

The HTML script respects Do Not Track and Global Privacy Control by default. There’s a public API (SplitTestPro.optIn() / SplitTestPro.optOut()) for explicit consent management, and a localStorage key (stp_consent) you can integrate with your cookie-consent banner.
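
Wiring that into a consent banner might look like this sketch, where the banner object and its event names stand in for whatever your consent tool actually provides:

  // Hypothetical consent-banner integration. optIn()/optOut() are the
  // documented calls; "myConsentBanner" and its event names are placeholders.
  myConsentBanner.on("accept", function () {
    SplitTestPro.optIn();  // visitor participates in experiments
  });
  myConsentBanner.on("decline", function () {
    SplitTestPro.optOut(); // visitor is excluded from experiments
  });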

On Shopify, the Web Pixel respects Shopify’s customer-privacy settings automatically.

See Privacy and Consent for full details.

What’s the difference between variants and experiments?

An experiment is the test itself — the hypothesis, the targeting, the metrics. A variant is one version within the experiment — the control (Variant A) or one of the treatments (B, C, D). Each variant has its own CSS, JS, or redirect URL.

A two-variant A/B test = one experiment with two variants. A four-variant A/B/C/D test = one experiment with four variants.

Can I test on different domains under one workspace?

No. A workspace is tied to a single domain (HTML) or shop (Shopify). For multiple sites, create multiple workspaces. See Workspaces.

What happens to my data if I cancel my subscription?

Your data is preserved. Cancellation stops billing and (after the current period ends) restricts access to the Results dashboard, but the underlying data — experiments, variants, conversion history — stays in the database. Re-subscribing restores full access.

If you want to fully delete the data, use Settings → Workspace → Danger zone → Delete workspace. See Workspaces.

How long should I run an experiment?

Run it for at least 7 days to capture full weekly traffic patterns, AND until probability to be best crosses 95% on the primary metric, AND until you have at least 300–500 sessions per variant. All three.

For low-traffic sites, 3–4 weeks is realistic. For high-traffic sites, the week minimum still applies — don’t shortcut weekly cycles. See Declaring a Winner.

What if my visitor is on a single-page app?

The script evaluates targeting on initial page load only. SPA route changes via pushState don’t retrigger experiments. See Single-Page Apps for workarounds.
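
Until then, one generic stopgap is to detect route changes yourself. The sketch below wraps history.pushState and dispatches a custom event; the event name and the handler logic are ours, not part of the script's API:

  // Generic SPA stopgap: surface pushState navigations as a DOM event.
  // The event name and handler logic below are ours, not the script's API.
  const origPushState = history.pushState;
  history.pushState = function (...args) {
    const result = origPushState.apply(this, args);
    window.dispatchEvent(new Event("spa:routechange"));
    return result;
  };
  window.addEventListener("spa:routechange", function () {
    // e.g., force a full load on routes you're testing so targeting re-runs:
    // if (location.pathname === "/pricing") location.reload();
  });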

Can I edit a variant while the experiment is running?

Don’t. Editing a running variant contaminates your data — visitors assigned to the variant before and after the edit are counted in the same bucket but saw different things.

If you need to fix a variant, end the experiment, fix the variant, and start a fresh one. See Experiment Lifecycle.

What metrics can I track on Shopify vs HTML?

Shopify auto-tracks the full purchase funnel: product views, cart, checkout started, payment info submitted, purchase completed. Plus continuous metrics: revenue per session, average order value. All available as primary metrics.

HTML tracks custom events you fire from your site via SplitTestPro.trackConversion(eventKey). You define the goals; the script tracks them.
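
For example, to fire a goal when a signup form is submitted (a sketch; the selector and the "signup" key are placeholders for a goal you define):

  // Fire a custom conversion event on form submit. The selector and the
  // "signup" key are placeholders; the key must match a goal you've defined.
  var form = document.querySelector("#signup-form");
  if (form) {
    form.addEventListener("submit", function () {
      SplitTestPro.trackConversion("signup");
    });
  }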

See Conversion Goals for setup.

Why isn’t Variant B at 100% probability if it’s clearly winning?

Bayesian probability reflects uncertainty in the data. A variant can be obviously ahead in raw conversion rate but still only at 91% probability if:

  • Sample sizes are small (so the credible intervals are wide).
  • The lift is modest enough that it’s possible (just unlikely) the true rates are reversed.

100% probability would require infinite samples. 95% is the convention for “confident enough to act.” See Bayesian Stats Explained.

What are credible intervals?

The range of plausible values for a parameter, with a stated probability. “Variant B’s conversion rate has a 95% credible interval of 4.2%–5.8%” means there’s a 95% chance the true rate is in that range, given the data observed.

They differ from frequentist confidence intervals — and they work the way most people intuitively expect confidence intervals to work. See Bayesian Stats Explained.

What’s the difference between custom events for tracking and event activation?

Custom events for tracking (SplitTestPro.trackConversion("event_name")) record that a visitor converted on a goal. Drives results data.

Event activation uses a custom event to defer when an experiment activates for a visitor. Drives when the variant starts showing.

You can use both at once — an experiment that activates on cta_click and tracks signup as its primary conversion goal. See Custom Events and Event Activation.
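
Side by side, that might look like the sketch below. trackConversion is the documented call; the function that fires the activation event is a placeholder name, so check Custom Events and Event Activation for the real one:

  // Combined sketch. trackConversion() is documented; the activation call
  // below is a PLACEHOLDER name. See Custom Events and Event Activation.
  var cta = document.querySelector("#cta"); // placeholder selector
  if (cta) {
    cta.addEventListener("click", function () {
      SplitTestPro.fireEvent("cta_click"); // placeholder: fires the activation event
    });
  }
  var form = document.querySelector("#signup-form"); // placeholder selector
  if (form) {
    form.addEventListener("submit", function () {
      SplitTestPro.trackConversion("signup"); // primary conversion goal
    });
  }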

Does Split Test Pro work with my CMS / framework / page builder?

Anything that lets you add a <script> tag to <head> works:

  • WordPress — header injection plugin or theme customizer.
  • Webflow / Squarespace / Wix — custom code / header injection in site settings.
  • Next.js / Nuxt / Astro — app/layout.tsx, _app.js, or global layout <head> (sketch after this list).
  • Shopify — install the app from the App Store; no script tag needed.
  • Google Tag Manager — Custom HTML tag firing on All Pages.
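
For the Next.js case flagged above, a minimal App Router sketch (the script URL is a placeholder; copy the real embed code from the install instructions):

  // app/layout.js: a minimal Next.js (App Router) sketch. The src URL is
  // a placeholder; copy the real embed code from the install instructions.
  export default function RootLayout({ children }) {
    return (
      <html lang="en">
        <head>
          {/* defer: loads without blocking parsing or render */}
          <script defer src="https://cdn.example.com/split-test-pro.js"></script>
        </head>
        <body>{children}</body>
      </html>
    );
  }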

See Installing on an HTML Site for the platform-specific instructions.

Can I A/B test something other than visual changes?

Yes. A few less-common patterns:

  • Test a JavaScript behavior with a JS variant — change the form validation, the lazy-loading threshold, or which third-party SDK is initialized.
  • Test a different page entirely with a redirect variant — useful for full-page redesigns.
  • Test only after a specific interaction with event activation — useful when the change is in a step the visitor only sees after clicking.
  • Test a server-side change by setting a cookie value via JS variant, then having your server respond differently based on the cookie (sketched below).

The pattern: you control the JavaScript that runs for each variant. Whatever you can express in JS, you can test.
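
Here is the cookie pattern from the last bullet, sketched with placeholder names:

  // Variant B's JS body for the server-side pattern (names are placeholders).
  // The variant sets a cookie; your server reads it and branches.
  document.cookie = "stp_server_test=B; path=/; max-age=" + 60 * 60 * 24 * 30;
  // Server-side (pseudocode): if the stp_server_test cookie is "B",
  // serve the alternate logic or template.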

What’s not supported today (the honest list)?

Things merchants ask about that aren’t built:

  • Email notifications for experiment milestones.
  • Auto-stop when significance is reached.
  • Audit log of who edited what.
  • CSV / data export (button exists, not wired).
  • Shopify Plus checkout block variant editor (model exists, no UI yet).
  • SPA-aware retargeting on pushState route changes.
  • Custom segmentation beyond device (no geo, no traffic source segments in-app).
  • Holdout / mutex groups for guaranteed experiment exclusion.
  • Server-side variant assignment (everything’s client-side).
  • Multivariate experiments (model exists, UI not exposed).

Most of these are on the roadmap. Cross-reference this list with what you need before committing to a particular workflow.

How do I contact support?

In the app: Support in the sidebar. Or email the team at the address listed there. Response time is one business day.

When opening a ticket, include:

  • Your workspace ID (Settings → Workspace).
  • The experiment ID, if relevant.
  • The exact symptom and a URL we can reproduce on, if possible.
  • Browser and OS.

Next Steps

Ready to start testing?

Install Split Test Pro and run your first experiment today.

Install on Shopify