Quick answer: Integrate a crash reporting SDK into your game and configure webhook notifications. Most crash reporting services support Discord webhooks, Slack integrations, and email alerts.
Your game just launched. Players are streaming it on Twitch. Reviews are coming in. And somewhere, on a machine you have never tested, your game is crashing silently. Without crash alerts, you will not know until the negative reviews pile up. Setting up real-time crash notifications takes minutes and lets you respond to production issues before they turn into reputation damage.
Why Real-Time Alerts Matter
The window between a crash happening and a player leaving a negative review is measured in minutes. If you find out about a crash from a Steam review, you are already too late. The player has already had a bad experience, written about it publicly, and moved on. Real-time crash alerts let you see the problem as it happens, diagnose it immediately, and ship a hotfix before most players are affected.
This is especially critical during the launch window, when player count is highest and first impressions determine your game’s trajectory. A crash that affects 5% of players on launch day can destroy your rating. The same crash caught and fixed within an hour barely registers.
Step 1: Set Up the Crash Reporting SDK
Before you can receive alerts, your game needs to capture and send crash data. Initialize the SDK as early as possible in your game’s startup sequence so that even crashes during loading are captured.
```csharp
// Unity: Initialize crash reporting in your boot scene
using Bugnet;
using UnityEngine;

public class CrashAlertSetup : MonoBehaviour
{
    void Awake()
    {
        BugnetSDK.Init("your-project-key");
        BugnetSDK.SetContext("build", Application.version);
        BugnetSDK.SetContext("platform", Application.platform.ToString());

        // Enable automatic crash capture
        BugnetSDK.EnableCrashReporting(true);

        // Capture the last 100 log lines as breadcrumbs
        BugnetSDK.SetBreadcrumbLimit(100);
    }
}
```
```gdscript
# Godot: Initialize in an autoload singleton
extends Node

func _ready() -> void:
    Bugnet.init("your-project-key")
    Bugnet.set_context("build", ProjectSettings.get_setting("application/config/version"))
    Bugnet.set_context("platform", OS.get_name())
    Bugnet.enable_crash_reporting(true)
    Bugnet.set_breadcrumb_limit(100)
```
Step 2: Configure Discord Webhook Alerts
Discord is the most popular choice for indie game developers because it is free, most teams already use it, and webhooks are simple to set up. Create a dedicated channel for crash alerts so they do not get lost in general chat.
In Discord, go to your server’s channel settings, select Integrations, then Webhooks, and create a new webhook. Copy the webhook URL. In your Bugnet dashboard, go to Project Settings > Integrations > Discord and paste the URL.
You can also set up a custom webhook handler if you want more control over the message format:
```javascript
// Example: Custom Discord webhook payload for crash alerts
// This runs on your server, not in the game client
const payload = {
  embeds: [{
    title: "New Crash Report",
    color: 15158332, // Red
    fields: [
      { name: "Error", value: crash.message, inline: false },
      { name: "Platform", value: crash.platform, inline: true },
      { name: "Build", value: crash.build_version, inline: true },
      { name: "Occurrences", value: crash.count.toString(), inline: true }
    ],
    timestamp: new Date().toISOString()
  }]
};

await fetch(DISCORD_WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload)
});
```
Step 3: Configure Slack Alerts
For teams using Slack, the setup is similar. Create a Slack app, enable incoming webhooks, and add the webhook to a dedicated channel. The key difference from Discord is Slack’s Block Kit format for rich messages:
```javascript
// Slack Block Kit payload for crash alerts
const payload = {
  blocks: [
    {
      type: "header",
      text: { type: "plain_text", text: "Crash Alert" }
    },
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: `*${crash.message}*\nPlatform: ${crash.platform} | Build: ${crash.build_version} | Count: ${crash.count}`
      }
    }
  ]
};
```
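The Slack example builds the payload but does not send it; delivery is the same JSON POST used for Discord. Here is a minimal sketch of a reusable sender. The function name, the webhook URL, and the injectable `fetchImpl` parameter are illustrative choices, not part of any SDK; passing the fetch implementation in makes the helper easy to test and reusable for both Slack and Discord endpoints.

```javascript
// Hypothetical helper: POST a JSON payload to an incoming webhook.
// Works for both Slack and Discord webhook URLs.
async function postToWebhook(webhookUrl, payload, fetchImpl = fetch) {
  const response = await fetchImpl(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  if (!response.ok) {
    throw new Error(`Webhook delivery failed: ${response.status}`);
  }
  return response;
}
```

In production, consider wrapping this in a retry with backoff: both Slack and Discord rate-limit webhooks, and a dropped alert defeats the purpose.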
Step 4: Set Up Email Alerts
Email is the fallback channel. Discord and Slack messages are easy to miss during off-hours, but email reaches your phone. Configure email alerts for the critical cases only: crash types that have never been seen before, and crash-rate spikes well above your baseline.
Keep email alerts sparse. If every crash triggers an email, you will start ignoring them within a day. Reserve email for situations that require immediate human attention.
Configuring Alert Thresholds
Not every crash deserves an immediate notification. A single crash from one player on an unusual hardware configuration is worth investigating but not worth an alert at 3 AM. Here is a tiered approach:
Tier 1: Immediate alert (Discord/Slack + email). Any crash that is completely new—a stack trace your system has never seen before. This often indicates a regression in a new build. Also trigger immediate alerts when any single crash type exceeds 50 occurrences in one hour, which indicates a widespread issue.
Tier 2: Hourly digest (Discord/Slack). A summary of all crash activity in the past hour, including counts per crash type and affected platforms. This gives you a regular pulse check without overwhelming you with individual notifications.
Tier 3: Daily summary (email). Your crash-free session rate for the day, the top five crashes by occurrence, any new crash types discovered, and a comparison to the previous day. This is your daily health check.
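The three tiers above boil down to a small routing decision per crash event. A sketch of that logic, assuming hypothetical `isNew` and `countLastHour` fields on the crash event (these names and the thresholds are illustrative, not a Bugnet API):

```javascript
// Hypothetical tier router for incoming crash events.
// Tier 1 → immediate Discord/Slack + email
// Tier 2 → included in the hourly digest
// Tier 3 → daily summary only
function alertTier(crash) {
  // Tier 1: a stack trace never seen before, or a sudden spike.
  if (crash.isNew || crash.countLastHour > 50) {
    return 1;
  }
  // Tier 2: a known crash that is still occurring this hour.
  if (crash.countLastHour > 0) {
    return 2;
  }
  // Tier 3: everything else surfaces only in the daily summary.
  return 3;
}
```

Tune the spike threshold to your player count: 50 occurrences per hour is severe for a game with 500 concurrent players and background noise for one with 500,000.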
Monitoring Crash Rates Over Time
Individual crash alerts tell you what is happening right now. Crash rate monitoring tells you whether things are getting better or worse. The most useful metric is the crash-free session rate: the percentage of game sessions that complete without a crash.
A healthy game should maintain a crash-free rate above 99%. During launch, aim for 99.5% or higher. Track this metric over time and set up an alert when it drops below your threshold. A sudden drop usually means a bad build went out. A gradual decline might indicate a memory leak or a bug that triggers more often as players progress further into the game.
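The crash-free session rate and its threshold check are simple arithmetic. A sketch, assuming you can query total and crashed session counts from your reporting backend (the function names are illustrative):

```javascript
// Percentage of game sessions that completed without a crash.
function crashFreeRate(totalSessions, crashedSessions) {
  if (totalSessions === 0) return 100; // no sessions, nothing crashed
  // Multiply before dividing to keep round numbers exact in floating point.
  return (100 * (totalSessions - crashedSessions)) / totalSessions;
}

// True when the rate drops below the alert threshold (default 99%).
function shouldAlert(totalSessions, crashedSessions, threshold = 99) {
  return crashFreeRate(totalSessions, crashedSessions) < threshold;
}

// Example: 5 crashed sessions out of 1,000.
// crashFreeRate(1000, 5) → 99.5, above the 99% threshold, so no alert.
```

Run this check on a schedule (for example every 15 minutes over a trailing one-hour window) rather than per event, so a short burst of sessions does not cause the rate to whipsaw.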
After shipping a hotfix, verify that the crash-free rate recovers. If it does not, the fix was incomplete or a different crash has taken the spotlight.
Launch Day Alert Strategy
Launch day deserves its own alert configuration. Lower your thresholds, increase the frequency of digests, and make sure someone is actively monitoring the crash channel. Here is a practical launch day checklist:
- Set immediate alerts for any new crash type.
- Run the hourly digest every 15 minutes instead.
- Assign someone to be on-call for the first 12 hours.
- Pre-write your hotfix deployment pipeline so you can ship a fix within 30 minutes of identifying a critical crash.
- Have a rollback plan in case the launch build is fundamentally broken.
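One way to keep the launch-day settings from drifting is to capture them in a single config object you can swap in and out. The field names and the lowered spike threshold of 20 below are illustrative assumptions, not a real Bugnet config schema:

```javascript
// Hypothetical launch-day alert configuration, tightening the usual defaults.
const launchDayConfig = {
  immediateAlert: {
    newCrashTypes: true,     // any never-before-seen stack trace pings the channel
    spikeThreshold: 20       // lowered from the usual 50 occurrences per hour
  },
  digestIntervalMinutes: 15, // hourly digest runs every 15 minutes instead
  onCallHours: 12,           // someone watches the crash channel for the first 12 hours
  hotfixTargetMinutes: 30    // goal: fix shipped within 30 minutes of triage
};
```

After the launch window passes, revert to the default config deliberately rather than letting the aggressive thresholds linger and train the team to ignore alerts.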
"The best time to set up crash alerts is before you need them. The second best time is right now, before your next update goes live."
Related Issues
For the initial SDK setup that feeds crash data into your alert system, see our guide on adding crash reporting to Unity or Godot in 10 minutes. To understand how crash reports are grouped and deduplicated before triggering alerts, read about capturing and symbolicating crash dumps from player devices. For building a full post-launch monitoring workflow, see our guide on prioritizing bugs after early access launch.
A crash alert at 2 AM is annoying. A one-star review at 2 AM is permanent. Choose the alert.