Quick answer: Most game engines provide built-in methods for measuring frame time. In Godot, use Engine.get_frames_per_second() for FPS and the delta parameter in _process() for frame time. In Unity, use 1.0f / Time.deltaTime for FPS and Time.deltaTime for frame time.

Tracking game performance issues across player devices is a common challenge for game developers. Your game runs at a smooth 60 FPS on your development machine. Then the Steam reviews start coming in: "unplayable stuttering," "drops to 15 FPS in the ice cave," "massive lag on AMD cards." You cannot reproduce any of it because you only have one GPU. This is the core challenge of performance optimization for indie games—your players have thousands of hardware configurations, and you have one. The solution is not buying more machines. It is instrumenting your game to tell you exactly what is going wrong, on which hardware, and why.

FPS Is a Lie: Why Frame Time Is the Metric That Matters

Every developer knows what FPS means, and almost every developer uses it wrong. FPS is an average, and averages hide the spikes that players actually feel. A game running at "60 FPS" could be delivering perfectly smooth 16.67ms frames, or it could be delivering 55 frames at 10ms and 5 frames at 80ms. The average is the same. The player experience is wildly different. The second scenario produces visible hitching every second that makes the game feel broken despite a respectable frame rate number.

The metric you want is frame time, measured in milliseconds per frame. Specifically, you want the 99th percentile frame time (P99), which tells you "99% of frames were rendered faster than this value." If your P99 is 20ms, the game feels smooth. If your P99 is 80ms, the game stutters regularly regardless of what the average FPS counter says.
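To make the gap between the two metrics concrete, here is a quick check of the scenario above—a hypothetical one-second window of 55 frames at 10ms and 5 frames at 80ms—sketched in Python:

```python
# One hypothetical second of gameplay: 55 fast frames, 5 hitchy ones.
frame_times = [10.0] * 55 + [80.0] * 5  # milliseconds

avg_ms = sum(frame_times) / len(frame_times)
avg_fps = 1000.0 / avg_ms

sorted_ft = sorted(frame_times)
p99 = sorted_ft[int(len(sorted_ft) * 0.99)]  # 99th percentile frame time

print(f"average FPS: {avg_fps:.0f}")    # ~63 -- looks healthy
print(f"P99 frame time: {p99:.0f} ms")  # 80 -- reveals the hitching
```

The average says the game is fine; the percentile says five frames per second are nearly five times over budget. That is the hitch players complain about.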

Here is how to track frame time properly in Godot:

# performance_tracker.gd — attach as autoload
extends Node

var frame_times: Array[float] = []
var sample_window := 300  # 5 seconds at 60fps
var report_interval := 30.0  # Report every 30 seconds
var report_timer := 0.0

func _process(delta: float) -> void:
    frame_times.append(delta * 1000.0)  # Convert to milliseconds
    if frame_times.size() > sample_window:
        frame_times.pop_front()

    report_timer += delta
    if report_timer >= report_interval:
        _send_performance_report()
        report_timer = 0.0

func _send_performance_report() -> void:
    if frame_times.is_empty():
        return  # Nothing sampled yet; avoid indexing an empty array
    var sorted := frame_times.duplicate()
    sorted.sort()
    var p50 := sorted[sorted.size() / 2]
    var p95 := sorted[int(sorted.size() * 0.95)]
    var p99 := sorted[int(sorted.size() * 0.99)]
    var max_ft := sorted[sorted.size() - 1]

    var report := {
        "fps_avg": Engine.get_frames_per_second(),
        "frame_time_p50": snapped(p50, 0.01),
        "frame_time_p95": snapped(p95, 0.01),
        "frame_time_p99": snapped(p99, 0.01),
        "frame_time_max": snapped(max_ft, 0.01),
        "scene": get_tree().current_scene.name,
    }
    Bugnet.log_metric("performance", report)

In Unity, the equivalent approach uses Time.unscaledDeltaTime to avoid being affected by time scale changes during slow motion or pause:

using UnityEngine;
using System.Collections.Generic;
using System.Linq;

public class PerformanceTracker : MonoBehaviour
{
    private List<float> _frameTimes = new();
    private const int SampleWindow = 300;
    private float _reportTimer;
    private const float ReportInterval = 30f;

    void Update()
    {
        float frameMs = Time.unscaledDeltaTime * 1000f;
        _frameTimes.Add(frameMs);

        if (_frameTimes.Count > SampleWindow)
            _frameTimes.RemoveAt(0);

        _reportTimer += Time.unscaledDeltaTime;
        if (_reportTimer >= ReportInterval)
        {
            SendReport();
            _reportTimer = 0f;
        }
    }

    void SendReport()
    {
        if (_frameTimes.Count == 0)
            return;  // Nothing sampled yet

        var sorted = _frameTimes.OrderBy(t => t).ToList();
        float p99 = sorted[(int)(sorted.Count * 0.99f)];
        float p95 = sorted[(int)(sorted.Count * 0.95f)];

        BugnetSDK.LogMetric("performance", new Dictionary<string, object>
        {
            {"fps_avg", 1000f / _frameTimes.Average()},  // true mean, not median
            {"frame_time_p95", p95},
            {"frame_time_p99", p99},
            {"scene", UnityEngine.SceneManagement.SceneManager.GetActiveScene().name}
        });
    }
}

The distinction between average FPS and percentile frame times is not academic. It is the difference between "the game runs fine" and "the game stutters every few seconds on 30% of player machines." Track percentiles from day one and you will catch performance regressions before they become player complaints.

Frame Time Spikes: Identifying the Culprits

A frame time spike is any frame that takes significantly longer than the target. If you are targeting 60 FPS (16.67ms per frame), a spike is anything above 33ms—a full frame drop that the player will perceive as a hitch. The most common causes of frame time spikes in indie games are garbage collection pauses, shader compilation stutters, synchronous asset loading, and physics simulation overload.
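Beyond aggregate percentiles, it helps to log each spike individually with context about what the game was doing. A minimal Godot sketch of that idea—the 33ms threshold and the fields attached are illustrative, and Bugnet.log_metric is the same reporting call used above:

```gdscript
# spike_detector.gd — attach as autoload; threshold is illustrative
extends Node

const SPIKE_THRESHOLD_MS := 33.0  # one full dropped frame at a 60 FPS target

func _process(delta: float) -> void:
    var frame_ms := delta * 1000.0
    if frame_ms > SPIKE_THRESHOLD_MS:
        # Record what the game was doing when the hitch happened.
        Bugnet.log_metric("frame_spike", {
            "frame_time_ms": snapped(frame_ms, 0.01),
            "scene": get_tree().current_scene.name,
            "node_count": Performance.get_monitor(Performance.OBJECT_NODE_COUNT),
        })
```

Correlating spike events with scene names and object counts is often enough to point at one of the four culprits below.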

Garbage collection is the most insidious because the cost accumulates invisibly during normal gameplay and then hits all at once. In C# (Unity), allocating and discarding objects in Update() fills the managed heap until the GC runs a collection pass, freezing the game for 5–50ms depending on heap size. The fix is to pool objects and avoid per-frame allocations:

// BAD: allocates a new list every frame
void Update()
{
    var nearby = new List<Enemy>();  // GC pressure every frame
    foreach (var e in _allEnemies)
        if (Vector3.Distance(e.transform.position, transform.position) < 10f)
            nearby.Add(e);
}

// GOOD: reuse a pre-allocated list
private List<Enemy> _nearbyCache = new(64);

void Update()
{
    _nearbyCache.Clear();  // No allocation, just resets count
    foreach (var e in _allEnemies)
        if (Vector3.Distance(e.transform.position, transform.position) < 10f)
            _nearbyCache.Add(e);
}

Shader compilation stutters happen when a new material or shader variant is encountered for the first time during gameplay. The GPU driver compiles the shader on demand, causing a 50–200ms freeze. The solution is to warm up shaders during a loading screen by rendering every material variant off-screen before gameplay begins. Both Unity and Godot provide shader precompilation or warmup APIs for this purpose.
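In Unity, one way to do this warmup is with a ShaderVariantCollection asset recorded during a play session in the editor (the Graphics settings panel can save the currently tracked variants). A sketch of the loading-screen side, assuming such an asset has been assigned:

```csharp
using UnityEngine;

public class ShaderWarmup : MonoBehaviour
{
    // Assign a ShaderVariantCollection asset recorded in the editor.
    [SerializeField] private ShaderVariantCollection _variants;

    void Start()
    {
        // Compile every recorded variant now, behind the loading screen,
        // instead of on first use mid-gameplay.
        if (_variants != null && !_variants.isWarmedUp)
            _variants.WarmUp();
    }
}
```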

Synchronous asset loading is the simplest spike to diagnose and fix. If you call load() or Resources.Load() during gameplay, the main thread blocks until the file is read from disk. Switch to asynchronous loading (ResourceLoader.load_threaded_request() in Godot, Addressables.LoadAssetAsync() in Unity) and the loading happens on a background thread without blocking the frame.
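A minimal Godot sketch of the threaded pattern—the scene path is hypothetical, and a real implementation would also handle load failures:

```gdscript
# boss_preloader.gd — streams in a heavy scene without blocking the frame
extends Node

const BOSS_SCENE_PATH := "res://scenes/boss_arena.tscn"  # illustrative path

func _ready() -> void:
    # Kick off the load on a background thread.
    ResourceLoader.load_threaded_request(BOSS_SCENE_PATH)

func _process(_delta: float) -> void:
    # Poll each frame; instantiation happens only once the disk read is done.
    if ResourceLoader.load_threaded_get_status(BOSS_SCENE_PATH) \
            == ResourceLoader.THREAD_LOAD_LOADED:
        var scene: PackedScene = ResourceLoader.load_threaded_get(BOSS_SCENE_PATH)
        add_child(scene.instantiate())
        set_process(false)  # Stop polling
```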

Memory Profiling: The Silent Performance Killer

Memory issues do not always cause crashes. More often, they cause gradual performance degradation as the operating system starts paging memory to disk, the garbage collector runs more frequently, or texture memory overflows and the GPU starts thrashing. Players describe this as "the game gets slower the longer I play," and it is one of the hardest problems to diagnose without proper instrumentation.

Track memory usage over time and send it alongside your frame time reports. In Godot:

func _get_memory_report() -> Dictionary:
    return {
        "static_memory_mb": Performance.get_monitor(
            Performance.MEMORY_STATIC) / 1048576.0,
        "message_buffer_mb": Performance.get_monitor(
            Performance.MEMORY_MESSAGE_BUFFER_MAX) / 1048576.0,
        "object_count": Performance.get_monitor(
            Performance.OBJECT_COUNT),
        "node_count": Performance.get_monitor(
            Performance.OBJECT_NODE_COUNT),
        "orphan_nodes": Performance.get_monitor(
            Performance.OBJECT_ORPHAN_NODE_COUNT),
        "video_memory_mb": Performance.get_monitor(
            Performance.RENDER_VIDEO_MEM_USED) / 1048576.0,
    }

The orphan_nodes metric is particularly valuable. Orphan nodes are nodes that have been removed from the scene tree but not freed, which means they still occupy memory. If this number grows over time, you have a memory leak. Track it, graph it, and alert on it. A common pattern is calling remove_child() without queue_free() and then dropping the last script reference to the node, leaving it allocated with no way to reach it.
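The contrast looks like this in GDScript (the _enemy_pool container is illustrative):

```gdscript
# BAD: removed from the tree but never freed — becomes an orphan
# once the last script reference to it goes out of scope
remove_child(enemy)

# GOOD: free it when it is truly done...
enemy.queue_free()

# ...or keep a reference on purpose, e.g. in a pool, so the node
# is still reachable and the retention is intentional
remove_child(enemy)
_enemy_pool.append(enemy)
```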

In Unity, use Profiler.GetTotalAllocatedMemoryLong() and Profiler.GetTotalReservedMemoryLong() to track native memory, and Profiler.GetMonoUsedSizeLong() for the managed heap. Watch for the gap between allocated and reserved memory growing over time, which indicates fragmentation. Monitor Texture.currentTextureMemory to catch texture memory bloat from uncompressed or oversized textures that a player’s GPU cannot handle.
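A Unity counterpart to the Godot memory report above might look like the following sketch, using the same BugnetSDK dictionary shape as the earlier examples:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Profiling;

public static class MemoryReport
{
    // Mirrors the Godot _get_memory_report() above; values in megabytes.
    public static Dictionary<string, object> Collect()
    {
        return new Dictionary<string, object>
        {
            {"allocated_mb", Profiler.GetTotalAllocatedMemoryLong() / 1048576f},
            {"reserved_mb", Profiler.GetTotalReservedMemoryLong() / 1048576f},
            {"managed_mb", Profiler.GetMonoUsedSizeLong() / 1048576f},
            {"texture_mb", Texture.currentTextureMemory / 1048576f},
        };
    }
}
```

Send this alongside each frame-time report so memory trends line up with the stutter data.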

Device Telemetry: Understanding Your Player Hardware

You cannot optimize for hardware you do not know about. Device telemetry collects the hardware specifications of every player who runs your game, giving you a real picture of what machines your game needs to support. This is not about spying on players. It is about knowing that 40% of your audience is on integrated Intel graphics so you can prioritize optimizations that help them.

Collect hardware info once at startup and attach it to every performance report:

# device_info.gd
extends RefCounted

static func collect() -> Dictionary:
    return {
        "os": OS.get_name(),
        "os_version": OS.get_version(),
        "cpu_name": OS.get_processor_name(),
        "cpu_cores": OS.get_processor_count(),
        "ram_mb": OS.get_memory_info()["physical"] / 1048576,
        "gpu_name": RenderingServer.get_video_adapter_name(),
        "gpu_vendor": RenderingServer.get_video_adapter_vendor(),
        "gpu_api": RenderingServer.get_video_adapter_api_version(),
        "screen_size": "%dx%d" % [
            DisplayServer.screen_get_size().x,
            DisplayServer.screen_get_size().y
        ],
        "game_version": ProjectSettings.get_setting(
            "application/config/version"),
    }

Once you have telemetry from a few hundred players, you can group performance reports by GPU family and RAM tier. This reveals patterns that are invisible from your development machine: "all players with less than 8 GB of RAM experience frame drops in the forest level," or "AMD RX 580 users have P99 frame times above 40ms in the particle-heavy boss fight." These are actionable insights that tell you exactly where to focus your optimization time.
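Server-side, the grouping is simple once each report carries device info. A minimal Python sketch—the field names match the reports above, and the sample data is made up:

```python
from collections import defaultdict

# Each report pairs a device snapshot with a performance summary.
reports = [
    {"gpu_name": "AMD RX 580", "frame_time_p99": 42.1},
    {"gpu_name": "AMD RX 580", "frame_time_p99": 45.3},
    {"gpu_name": "NVIDIA RTX 3060", "frame_time_p99": 17.2},
]

by_gpu = defaultdict(list)
for r in reports:
    by_gpu[r["gpu_name"]].append(r["frame_time_p99"])

# Worst observed P99 per GPU family surfaces hardware-specific problems.
worst = {gpu: max(times) for gpu, times in by_gpu.items()}
print(worst)  # {'AMD RX 580': 45.3, 'NVIDIA RTX 3060': 17.2}
```

The same grouping works for RAM tiers or CPU core counts—anything collected in the device snapshot.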

Aggregate the data into buckets rather than storing per-frame data. A summary sent every 30 seconds containing the scene name, P95 frame time, P99 frame time, memory usage, and device ID is enough to diagnose most problems. This keeps telemetry bandwidth under 1 KB per report and has no measurable impact on game performance.

Min-Spec Testing: Drawing the Line

Every game has a minimum specification, whether you define it explicitly or your players discover it through poor experience. The difference between those two outcomes is min-spec testing: deliberately running your game on the weakest hardware you intend to support and verifying it meets your performance targets.

Use your device telemetry data to identify your actual min-spec. Sort players by hardware capability and find the 10th percentile. If your 10th percentile player has an Intel HD 630, 8 GB of RAM, and a dual-core CPU, that is your min-spec. Get a machine with those specs (or something close) and test every release against it.

If you cannot afford dedicated test hardware, cloud testing services provide access to real devices. AWS Device Farm, BrowserStack, and LambdaTest all offer remote access to machines with specific hardware configurations. Alternatively, you can throttle your development machine to simulate lower specs. Most GPU drivers support clock speed limiting, and you can restrict CPU cores and available RAM through OS-level settings:

# Simulate a low-spec machine on Linux for testing
# Limit to 2 CPU cores
taskset -c 0,1 ./your_game.x86_64

# Limit available RAM to 4GB using cgroups
systemd-run --scope -p MemoryMax=4G ./your_game.x86_64

# Combine both constraints for a realistic min-spec test
systemd-run --scope -p MemoryMax=4G \
    taskset -c 0,1 ./your_game.x86_64

Automate min-spec testing as part of your CI pipeline. After every build, run the game on your constrained environment for five minutes and check that the P99 frame time stays below your target. If a commit pushes the P99 above the threshold, the build fails and the developer knows immediately that their change has a performance cost that needs to be addressed.
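The gate itself can be a few lines of script. This Python sketch assumes the timed run writes its performance summary to a JSON file (the filename, field name, and 33ms budget are all choices you would adapt):

```python
import json
import sys

P99_BUDGET_MS = 33.0  # fail the build if min-spec P99 exceeds this


def check_perf_report(path: str) -> bool:
    """Return True if the timed run stayed within the frame-time budget."""
    with open(path) as f:
        report = json.load(f)
    return report["frame_time_p99"] <= P99_BUDGET_MS


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. the game wrote perf_report.json at the end of the five-minute run
    if not check_perf_report(sys.argv[1]):
        print("Performance gate failed: P99 frame time over budget")
        sys.exit(1)
```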

"Players on min-spec hardware are the most vocal reviewers. They are also the most grateful when you optimize for them. One targeted performance fix can turn a wave of negative reviews into a wave of positive ones."

Connecting Performance Data to Bug Reports

Performance data becomes truly powerful when it is connected to your bug tracking system. When a player reports "the game is laggy," you need to see their hardware specs, their frame time history, their memory usage trend, and the scene they were in. Without this context, "the game is laggy" is an impossible bug to fix. With it, you can see that the player has 4 GB of RAM, the game was using 3.8 GB at the time of the complaint, and memory had been climbing steadily for 20 minutes—a clear memory leak in the current scene.

Configure your crash reporting SDK to attach performance snapshots to every bug report. In Bugnet, this means adding context that persists alongside the standard device info:

# Attach performance snapshot to every report
func _on_report_sending() -> void:
    var perf := {
        "fps_current": Engine.get_frames_per_second(),
        "frame_time_p99": _get_p99_frame_time(),
        "memory_mb": Performance.get_monitor(
            Performance.MEMORY_STATIC) / 1048576.0,
        "video_memory_mb": Performance.get_monitor(
            Performance.RENDER_VIDEO_MEM_USED) / 1048576.0,
        "object_count": Performance.get_monitor(
            Performance.OBJECT_COUNT),
        "draw_calls": Performance.get_monitor(
            Performance.RENDER_TOTAL_DRAW_CALLS_IN_FRAME),
    }
    Bugnet.set_context("performance", JSON.stringify(perf))

Now every bug report in your dashboard includes a performance snapshot. You can filter reports by frame time to find all players experiencing stuttering, group them by GPU to identify driver-specific issues, and correlate memory usage with crash frequency to diagnose leaks. This transforms performance debugging from guesswork into data-driven engineering.

Related Issues

For setting up the crash reporting SDK that captures performance context alongside bug reports, see our guide on adding crash reporting to Unity and Godot in under 10 minutes. If your performance investigation reveals crashes rather than slowdowns, our beginner’s guide to reading game stack traces will help you diagnose them. For automating performance regression testing as part of your build pipeline, see automated QA testing for indie game studios.

Your dev machine is not your player's machine. Instrument first, optimize second, and let the telemetry tell you where it hurts.