Statsig

Overview

Statsig is a full-stack experimentation and feature management platform founded in February 2021, with offices in Kirkland, Washington. The company was started by Vijaye Raji—who spent nearly a decade at Microsoft building product infrastructure and experimentation systems before joining Facebook—along with other engineers who worked on experimentation infrastructure at Meta. The founding thesis was direct: replicate the growth infrastructure that helped Facebook scale to more than two billion users and make it available as a commercial product. That lineage shows up in the platform's design, which treats feature flags, experiments, and analytics as parts of a single workflow rather than separate tools bolted together.

Beyond flags and A/B tests, Statsig layers in product analytics, web analytics, and session replay so teams can connect releases to user behavior without stitching together disconnected point solutions. Warehouse Native mode lets organizations run experiments on top of their own data warehouse—BigQuery, Snowflake, or Microsoft Fabric's lake-centric architecture using open formats like Delta and Iceberg—with no ETL required. That combination positions it as a modern alternative to older enterprise feature-flag vendors and standalone analytics stacks, serving a customer base that ranges from OpenAI and Atlassian to Series A startups.

The platform ships quickly and iterates in public. All SDKs are fully open-source, hosted under the statsig-io GitHub organization across 108 public repositories. Statsig also invests in community events—its annual Sigsum conference and initiatives like Experimentation Week 2025—that push the broader experimentation conversation forward rather than only marketing the product.

Key Features

Feature flags and dynamic configs let teams gate functionality, tune parameters, and roll out gradually with targeting by user, segment, or environment. Client SDKs cover JavaScript, React, React Native, Next.js, Angular, Swift (iOS/macOS/tvOS), Android (Kotlin/Java), .NET, Roku, Unity, Dart/Flutter, and C++. Server-side SDKs support Node.js, Java, Python, Go, Ruby, .NET, PHP, Rust, and C++—meaning most production stacks can integrate without a custom shim.
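Gradual rollouts like these generally depend on deterministic bucketing: hashing a stable user identifier together with the gate name so the same user always lands in the same bucket, without a server round-trip on every check. As a minimal sketch of that general technique—not Statsig's exact algorithm, and with hypothetical names—percentage rollout can be implemented like this:

```python
import hashlib

def in_rollout(gate_name: str, user_id: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing gate name + user ID gives each (gate, user) pair a stable
    bucket in 0..9999, so ramping 10% -> 25% -> 50% keeps already-exposed
    users in the treatment instead of reshuffling them.
    """
    digest = hashlib.sha256(f"{gate_name}::{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000  # 0..9999
    return bucket < rollout_pct * 100  # e.g. 25% passes buckets 0..2499

# Same user, same gate -> same answer on every call
assert in_rollout("new_checkout", "user-42", 50.0) == in_rollout("new_checkout", "user-42", 50.0)
```

Because the bucket is fixed per (gate, user) pair, raising the rollout percentage only ever adds users; it never drops someone who already saw the feature.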

Experiments support multivariate and layered designs, with analysis spanning frequentist, Bayesian, and sequential methods. The platform includes a simulation-based power analysis engine that calculates population sizes and relative variance from historical behavior, so teams can estimate experiment duration and adjust minimum detectable effects before committing to a test. Variance reduction techniques like CUPAC (Control Using Predictions As Covariates) are built in, helping teams reach reliable conclusions faster on the same traffic. The UI follows a progressive disclosure model—simple surfaces for standard tests, with deeper controls for teams that need them.

Product and web analytics capture events, funnels, and retention views inside the same project as flags and experiments, reducing context switching between tools. Session replay adds qualitative debugging alongside quantitative metrics. Third-party integrations with Segment, Rudderstack, Hightouch, mParticle, Webflow, Shopify, Framer, and Slack extend the platform into existing data and communication pipelines. For data teams that need full control downstream, Warehouse Native options keep experiment assignments and event data in the organization's own warehouse as the single source of truth.
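In warehouse-native setups the scorecard computation is, at its core, a join between an assignment table and an event table followed by a group-by per variant, all running against tables the organization owns. The schema and column names below are hypothetical, and SQLite stands in for the warehouse, but the shape of the query is the point:

```python
import sqlite3

# Hypothetical schema: an exposure/assignment table and an event table,
# both living in the organization's own warehouse. SQLite stands in for
# BigQuery/Snowflake here purely for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE assignments (unit_id TEXT, variant TEXT);
    CREATE TABLE events (unit_id TEXT, revenue REAL);
""")
con.executemany("INSERT INTO assignments VALUES (?, ?)",
                [("u1", "control"), ("u2", "control"), ("u3", "test"), ("u4", "test")])
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("u1", 10.0), ("u2", 14.0), ("u3", 18.0), ("u4", 22.0)])

# Per-variant metric means: the skeleton of an experiment scorecard query
rows = con.execute("""
    SELECT a.variant, AVG(e.revenue) AS mean_revenue, COUNT(*) AS n
    FROM assignments a
    JOIN events e ON a.unit_id = e.unit_id
    GROUP BY a.variant
    ORDER BY a.variant
""").fetchall()
```

Keeping both tables in the warehouse means the same rows that feed the experiment readout remain queryable by the data team for any downstream analysis.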

What Makes It Notable

The team's background building experimentation infrastructure at Facebook and Microsoft gives Statsig practical credibility on statistical rigor, scale, and how large product organizations actually run tests in production. That experience surfaces in specific choices: built-in sequential testing so teams can monitor results responsibly while tests run, power analysis that accounts for real traffic patterns, and CUPAC for variance reduction—features that reflect lessons learned operating at billions-of-users scale.
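For context on what a power analysis engine is improving upon, the textbook closed-form sample-size calculation for a two-arm difference-in-means test looks like this (a standard formula, not Statsig's simulation-based method, which instead estimates sigma and delta from historical traffic):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(delta: float, sigma: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Classic per-arm sample size for detecting a mean difference delta.

    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2
    A simulation-based engine replaces the assumed sigma and delta with
    estimates drawn from real historical behavior.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 0.1-standard-deviation lift at 80% power, alpha = 0.05:
# roughly 1,570 users per arm
```

The quadratic dependence on sigma/delta is why variance reduction matters: halving the effective sigma cuts the required traffic by a factor of four.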

A generous free tier lowers the barrier for startups and mid-size teams to adopt full-stack experimentation, and the fully open-source SDK strategy means teams can audit, fork, or contribute to the client libraries they depend on. Statsig also publishes actively—engineering blog posts on topics from causal inference to LLM evaluation benchmarks like TruthfulQA, a Medium publication documenting methodology, and a YouTube channel with over 230 videos covering everything from feature flag fundamentals to enterprise deployment patterns. For many teams, the draw is unifying "ship safely" (flags, rollouts) with "learn what worked" (experiments, analytics) under one vendor, with warehouse integration for organizations that want the platform's statistical engine without giving up ownership of their data.

Key Facts

Methodology: frequentist, Bayesian, sequential
Platform Type: client-side, server-side, full-stack
Year Started: ~2021
Product: Feature flags, A/B testing, Product analytics, Session replay, Web analytics, Warehouse native
Tags: feature-flags, analytics, warehouse-native, free-tier

Last updated: 2026-03-28
