Quick answer: Crash-free rate is the percentage of play sessions that do not end in a crash. Calculate it as (sessions without crashes ÷ total sessions) × 100. Target 99%+ for desktop games, 99.5%+ for mobile. Track it over time per build, use it as a release gate, and monitor non-fatal error rate alongside it.

A 1% crash rate sounds almost negligible. It is not. If your game is played in 50,000 sessions a week, a 1% crash rate means 500 sessions ending in crashes every single week. At 500,000 sessions — a modest number for a game that has had a sale or a streamer pickup — that is 5,000 crashes per week. Every one of those is a player whose experience ended badly, who is now choosing whether to leave a negative review. Crash-free rate is the single most important stability metric for a live game, and most indie developers are not tracking it systematically.

Definition and Calculation

Crash-free rate (also called crash-free session rate) is the percentage of play sessions that complete without a hard crash.

The formula:

Crash-free rate = (Sessions without crashes / Total sessions) × 100

A “session” is typically defined as a continuous period of gameplay from launch to quit. A session is “crash-free” if the game process exits normally (player quits the game) rather than abnormally (the process is terminated by an unhandled exception, a signal, or an OS-level fault).

For example: if your game is played in 10,000 sessions in a week and 87 of those sessions end in a crash, your crash-free rate is:

(10,000 - 87) / 10,000 × 100 = 99.13%
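The arithmetic is simple enough to script as a sanity check. A minimal Python sketch (the function name and signature are my own, not part of any SDK) that reproduces the example above:

    def crash_free_rate(total_sessions: int, crashed_sessions: int) -> float:
        """Crash-free rate as a percentage of all sessions in the window."""
        if total_sessions == 0:
            raise ValueError("need at least one session")
        return (total_sessions - crashed_sessions) / total_sessions * 100

    print(f"{crash_free_rate(10_000, 87):.2f}%")  # prints 99.13%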

Bugnet calculates crash-free rate automatically per build version and displays it in the game health dashboard, so you can see how each release performs relative to the previous one without doing the math manually.

Industry Benchmarks

What constitutes a “good” crash-free rate depends on platform. As a baseline:

  Desktop games: 99%+ crash-free sessions
  Mobile games: 99.5%+ crash-free sessions

“A 1% crash rate sounds like a rounding error. To your players, it’s a 1-in-100 chance that the time they sit down to play your game ends in frustration. Multiply that by your player count and it stops sounding small.”

These benchmarks assume a reasonably large sample size. A crash-free rate calculated from 50 sessions is statistically unreliable. Start taking crash-free rate seriously as a metric once you have at least 1,000 sessions in the measurement window.
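One way to see why sample size matters is to put a 95% confidence interval around the measured rate. The Python sketch below (my own illustration, not Bugnet tooling) uses the standard Wilson score interval: at 50 sessions, a 98% point estimate is compatible with anything from roughly 89.5% to 99.6%, while at 1,000 sessions the interval tightens to about 96.9% to 98.7%.

    import math

    def wilson_interval(crash_free: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for the crash-free proportion."""
        p = crash_free / n
        denom = 1 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return center - half, center + half

    print(wilson_interval(49, 50))     # ~(0.895, 0.996): a ten-point spread
    print(wilson_interval(980, 1000))  # ~(0.969, 0.987): much tighter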

Crash-Free Sessions vs. Crash-Free Users

These two metrics measure related but different things.

Crash-free sessions measures the percentage of individual play sessions that complete without a crash. It is the more commonly reported metric and the one shown on Bugnet’s game health dashboard. It weights players who play frequently more heavily than casual players, since each session counts independently.

Crash-free users measures the percentage of unique players who have experienced zero crashes. A player who plays your game daily and encounters one crash per week has a crash-free session rate contribution that looks decent, but counts as a non-crash-free user every week. Crash-free users is a more conservative metric that answers the question: “What fraction of my player base has never crashed?”

In practice, crash-free sessions is more useful for release decisions because it reflects the per-play-session experience. Crash-free users is more useful for player satisfaction analysis, because it tells you how many players have a relationship with your game that has never been interrupted by a crash.

When in doubt, track both. Report crash-free sessions for build quality gating and crash-free users for community health.
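To make the distinction concrete, here is a short Python sketch computing both metrics from the same session records (the record shape is illustrative, not a Bugnet export format):

    from collections import defaultdict

    # One (player_id, crashed) pair per session in the measurement window.
    sessions = [("a", False), ("a", True), ("a", False), ("b", False), ("c", False)]

    crash_free_sessions = (
        sum(1 for _, crashed in sessions if not crashed) / len(sessions) * 100
    )

    # A player is crash-free only if none of their sessions crashed.
    player_crashed = defaultdict(bool)
    for player, crashed in sessions:
        player_crashed[player] |= crashed
    crash_free_users = (
        sum(1 for crashed in player_crashed.values() if not crashed)
        / len(player_crashed) * 100
    )

    print(f"crash-free sessions: {crash_free_sessions:.1f}%")  # 80.0% (4 of 5 sessions)
    print(f"crash-free users:    {crash_free_users:.1f}%")     # 66.7% (2 of 3 players)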

Why a 1% Crash Rate Is Not Small

The intuition that 1% is negligible comes from thinking about the percentage abstractly. The perception changes when you translate it into affected players.

Steam review data consistently shows that players who experience crashes are significantly more likely to leave negative reviews than players who play the same number of hours without crashes. A 1% crash rate at scale is not a minor quality issue — it is a systematic negative review generator.

The practical implication: even when your crash-free rate looks good on paper, reduce it further. Going from 99% to 99.5% cuts your crash count in half. Going from 99.5% to 99.9% cuts it by another 80%. Each improvement has a compounding positive effect on player experience and review sentiment.
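The compounding is easy to verify for the 500,000-sessions-per-week figure used earlier:

    weekly_sessions = 500_000
    for crash_free in (99.0, 99.5, 99.9):
        crashes = weekly_sessions * (100 - crash_free) / 100
        print(f"{crash_free}% crash-free -> {crashes:,.0f} crashes per week")
    # 99.0% -> 5,000   99.5% -> 2,500   99.9% -> 500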

Trending Over Time to Catch Regressions

A single crash-free rate measurement is useful. A trend line is far more useful.

Viewing crash-free rate per build version over time tells you immediately whether stability is improving or regressing with each release. A drop from 99.3% in build 1.2 to 98.7% in build 1.3 is a regression signal that warrants investigation before it reaches more players. Without build-over-build trending, that regression could go unnoticed for weeks.

Bugnet’s game health view shows crash-free rate trending by build version on a timeline. Set a threshold alert so that any build with a crash-free rate below your defined floor triggers a notification before you widen the rollout. For studios using a staged rollout strategy — releasing to 10% of players first, then expanding — monitoring crash-free rate at each stage is how you catch regressions before they affect the full player base.
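If you want to sanity-check the trend outside the dashboard, the comparison is mechanical. A sketch over hypothetical per-build tallies (the 0.3-point regression margin is an arbitrary choice for illustration):

    builds = [  # (version, total_sessions, crashed_sessions): hypothetical export
        ("1.2", 42_000, 294),
        ("1.3", 38_000, 494),
    ]

    FLOOR = 99.0  # minimum acceptable crash-free rate, in percent

    previous_rate = None
    for version, total, crashed in builds:
        rate = (total - crashed) / total * 100
        alerts = []
        if rate < FLOOR:
            alerts.append(f"below {FLOOR}% floor")
        if previous_rate is not None and rate < previous_rate - 0.3:
            alerts.append(f"regressed from {previous_rate:.2f}%")
        print(f"build {version}: {rate:.2f}% crash-free ({'; '.join(alerts) or 'ok'})")
        previous_rate = rate
    # build 1.2: 99.30% crash-free (ok)
    # build 1.3: 98.70% crash-free (below 99.0% floor; regressed from 99.30%)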

Using Crash-Free Rate as a Release Gate

A release gate is a condition that must be met before a build is promoted to the next stage of deployment. Crash-free rate is one of the most effective release gate conditions available to game developers.

A simple release gate policy (a script sketch follows the list):

  1. Deploy the new build to a beta branch or 10% staged rollout
  2. Wait until you have at least 1,000 sessions of data (or 24 hours, whichever comes later)
  3. Check crash-free rate in Bugnet
  4. If crash-free rate is above the threshold (e.g., 99%), promote to full rollout
  5. If crash-free rate is below the threshold, halt the rollout, investigate, patch, and repeat from step 1
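This policy is easy to automate in CI. A minimal sketch of the decision step, assuming you can fetch the session and crash tallies for the staged build (how you fetch them is up to your tooling; this is not a Bugnet API):

    MIN_SESSIONS = 1_000  # don't judge stability on a smaller sample
    THRESHOLD = 99.0      # crash-free rate floor, in percent

    def gate(total_sessions: int, crashed_sessions: int) -> str:
        """Return the rollout decision for a staged build."""
        if total_sessions < MIN_SESSIONS:
            return "wait"
        rate = (total_sessions - crashed_sessions) / total_sessions * 100
        return "promote" if rate >= THRESHOLD else "halt"

    print(gate(500, 2))     # wait: below the 1,000-session minimum
    print(gate(5_000, 30))  # promote: 99.40% clears the floor
    print(gate(5_000, 80))  # halt: 98.40% is below the floor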

This process prevents the most common cause of widespread crash spikes: a regression in a new build that reaches all players simultaneously. Staged rollouts with crash-free rate gating contain the blast radius of any regression to a small fraction of the player base.

Crash-Free Rate vs. Error Rate: Why Both Matter

Crash-free rate measures hard crashes — the game process terminates unexpectedly. Error rate measures something subtler: non-fatal exceptions and handled errors that do not end the session but indicate code executing in unexpected ways.

A game can have a 99.8% crash-free rate and still have a significant non-fatal error problem. Common examples include caught exceptions in gameplay scripts that are logged and swallowed, asset or save-file loads that fail and fall back to defaults, and network requests that time out and are silently retried.

Non-fatal errors often precede crashes. They represent code paths executing in states the developer did not intend, and each one is a potential crash waiting for the right conditions. A high non-fatal error rate with a good crash-free rate is a warning sign, not a clean bill of health.

Bugnet captures both crash reports and non-fatal error events. Monitoring the ratio of non-fatal errors to sessions alongside crash-free rate gives you a more complete picture of stability than crash rate alone.
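A sketch of reading both signals together (the counts are hypothetical; the 99.8% crash-free rate matches the example above, yet nearly one session in two logs a handled error):

    total_sessions = 20_000
    crashed_sessions = 40
    non_fatal_errors = 9_000  # handled exceptions logged across all sessions

    crash_free = (total_sessions - crashed_sessions) / total_sessions * 100
    errors_per_session = non_fatal_errors / total_sessions

    print(f"crash-free rate: {crash_free:.2f}%")                      # 99.80%: looks healthy
    print(f"non-fatal errors per session: {errors_per_session:.2f}")  # 0.45: a warning sign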

Making Crash-Free Rate Part of Your Studio Culture

The most effective way to improve crash-free rate is to make it visible. When crash-free rate is a number that the team sees on a shared dashboard — not something one engineer occasionally checks in a log file — it becomes a shared responsibility.

Crash-free rate is ultimately a proxy for one thing: how many players are having their experience with your game interrupted by a technical failure. Every percentage point improvement is real players having a better time with your game. That is a goal worth measuring.

Track the number. Trend the number. Gate on the number. The rest follows.