Quick answer: Log QA tester positions at regular intervals during test sessions. Aggregate the data into a grid overlay on your game’s map. Areas with high visit counts are well-tested; areas with zero visits are blind spots. Use the heatmap to direct testers to under-explored areas before each QA cycle, ensuring that no part of your game ships untested.

Code coverage tells you which lines of code your automated tests execute. QA coverage tells you which parts of your game your human testers have actually explored. The first is well-understood and widely practiced. The second is almost always a gut feeling: “I think we tested that area,” “someone probably checked that menu,” “the beta testers must have gone there.” A QA coverage heatmap replaces gut feelings with data. It is a visual map of where your testers have been and, more importantly, where they have not.

Collecting Position Data

The foundation of a coverage heatmap is position data from QA sessions. In your QA build, log the player’s world position at a regular interval — every two to five seconds is sufficient for coverage purposes. Each log entry should include the world coordinates, a session ID (to distinguish different testers and sessions), a timestamp, the game version, and optionally the tester’s identifier. Store the logs in a file that gets uploaded when the session ends, or stream them to a server in real time if your QA infrastructure supports it.

For 3D games, log the position as a 2D projection onto the ground plane (x, z coordinates) unless your game has significant vertical gameplay, in which case log all three coordinates. For 2D games, log both axes. For non-spatial games — menu-heavy games, card games, puzzle games — log which screen or state the tester is in rather than a position. The principle is the same: track where the tester’s attention is, whatever “where” means for your game.

# qa_position_logger.gd — attach to player in QA builds
extends Node

var log_interval := 3.0  # seconds between position logs
var timer := 0.0
var session_id: String
var entries: Array[Dictionary] = []

func _ready() -> void:
    # Godot has no built-in UUID generator; a timestamp plus a random
    # suffix is unique enough to distinguish QA sessions.
    randomize()
    session_id = "%d_%08x" % [int(Time.get_unix_time_from_system()), randi()]

func _process(delta: float) -> void:
    timer += delta
    if timer >= log_interval:
        timer = 0.0
        # Assumes the parent is a Node2D; for a 3D game, log pos.x and pos.z.
        var pos: Vector2 = get_parent().global_position
        entries.append({
            "session": session_id,
            "x": snapped(pos.x, 1.0),
            "y": snapped(pos.y, 1.0),
            "time": Time.get_unix_time_from_system(),
            "version": ProjectSettings.get_setting("application/config/version"),
        })

func _notification(what: int) -> void:
    if what == NOTIFICATION_WM_CLOSE_REQUEST:
        save_log()

func save_log() -> void:
    # Create the log directory on first save; FileAccess.open returns
    # null (it does not create parent directories) if the path is missing.
    DirAccess.make_dir_recursive_absolute("user://qa_logs")
    var file := FileAccess.open(
        "user://qa_logs/%s.json" % session_id, FileAccess.WRITE)
    if file == null:
        push_error("Could not open QA log file for session %s" % session_id)
        return
    file.store_string(JSON.stringify(entries))
    file.close()

Keep the logging lightweight. Position data at 3-second intervals generates roughly 1,200 entries per hour per tester. At 50 bytes per entry, that is 60 KB per hour — negligible in terms of storage and performance. Do not log every frame; you are measuring coverage, not building a replay system.

Aggregating Data into a Grid

Raw position data is a cloud of points. To turn it into a heatmap, divide your game world into a grid of cells and count how many unique sessions visited each cell. The cell size depends on the scale of your game: for a large open world, 10-meter cells might be appropriate; for a tight metroidvania, 1-tile cells work better. The key is that each cell should represent a meaningful area of the game that a tester would either visit or not visit during a session.

Count unique sessions per cell, not total position entries. A tester who stands in one spot for ten minutes should contribute the same coverage as a tester who passes through once. You want to know “how many different testers explored this area,” not “how long did testers spend here.” The latter is useful for other analyses (engagement, pacing) but not for coverage.

Store the aggregated grid as a 2D array or dictionary keyed by cell coordinates. Each cell holds the unique session count. Optionally store the most recent visit timestamp so you can filter the heatmap by time range — coverage from two months ago may not be relevant if the area has been redesigned since then.
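The aggregation step can be sketched in a few lines of Python. This is one possible shape, not a prescribed one: it assumes log entries shaped like the logger's dictionaries above, and a `CELL_SIZE` you tune to your game's scale.

```python
from collections import defaultdict

CELL_SIZE = 10.0  # world units per cell; tune to your game's scale

def aggregate(entries):
    """Bin position entries into grid cells, counting unique sessions
    per cell and tracking the most recent visit timestamp."""
    sessions = defaultdict(set)   # cell -> set of session IDs seen there
    last_visit = {}               # cell -> latest visit timestamp
    for e in entries:
        cell = (int(e["x"] // CELL_SIZE), int(e["y"] // CELL_SIZE))
        sessions[cell].add(e["session"])
        last_visit[cell] = max(last_visit.get(cell, 0), e["time"])
    return {cell: {"sessions": len(ids), "last_visit": last_visit[cell]}
            for cell, ids in sessions.items()}
```

Because each cell holds a *set* of session IDs, a tester who idles in one spot for ten minutes still counts once, which is exactly the unique-session semantics described above.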

Rendering the Heatmap

The heatmap is a color overlay on your game’s map or level layout. Use a color gradient from warm to cool: red or dark for zero coverage, orange for low coverage, yellow for moderate, green for high, and blue or white for very high. The exact thresholds depend on your team size and testing cadence. If you have three testers and five days of QA, a cell visited by all three testers is “high coverage”; if you have twenty testers, you might set the threshold higher.

Render the heatmap in a standalone tool, not inside the game itself. A simple Python script using matplotlib or a web page using HTML Canvas can read the grid data and draw the overlay on a static map image. This keeps the tool independent of the game engine and accessible to non-engineers (QA leads, producers) who need to review coverage without launching the game.
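A minimal matplotlib sketch of such a tool might look like the following. The function and parameter names are illustrative, and the colormap choice is one option among many; adapt the input format to however you store the aggregated grid.

```python
import numpy as np

def grid_to_array(grid, width, height):
    """Convert a {(cx, cy): unique_session_count} dict into a dense
    2D array, with missing cells defaulting to zero coverage."""
    counts = np.zeros((height, width))
    for (cx, cy), n in grid.items():
        counts[cy, cx] = n
    return counts

def render_heatmap(grid, width, height, out_path="coverage.png"):
    import matplotlib
    matplotlib.use("Agg")  # headless: write a PNG, no display needed
    import matplotlib.pyplot as plt
    counts = grid_to_array(grid, width, height)
    fig, ax = plt.subplots()
    # RdYlGn runs red -> yellow -> green, so zero-coverage cells render
    # red and well-covered cells green, matching the gradient above.
    im = ax.imshow(counts, cmap="RdYlGn", origin="lower")
    fig.colorbar(im, ax=ax, label="unique sessions")
    ax.set_title("QA coverage")
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
```

To overlay the heatmap on a level map, draw the map image first with `ax.imshow` and render the coverage array on top with a reduced `alpha`.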

Generate a new heatmap after each QA cycle and compare it to the previous one. The comparison reveals whether directed testing (sending testers to specific areas) is actually improving coverage. If a gap persists across multiple cycles, investigate why: is the area hard to reach? Is there a bug that prevents testers from entering? Is it behind a progression gate that testers have not cleared?
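The cycle-over-cycle comparison can be automated with two small set operations. A sketch, assuming grids are dictionaries mapping cell coordinates to unique-session counts (as described above), with absent cells meaning zero coverage:

```python
def persistent_gaps(prev_grid, curr_grid, all_cells):
    """Cells with zero coverage in both the previous and current cycle --
    the gaps worth investigating for reachability bugs or gating."""
    return {c for c in all_cells
            if prev_grid.get(c, 0) == 0 and curr_grid.get(c, 0) == 0}

def newly_covered(prev_grid, curr_grid):
    """Cells first visited this cycle -- evidence that directed
    testing is actually closing gaps."""
    return {c for c, n in curr_grid.items()
            if n > 0 and prev_grid.get(c, 0) == 0}
```

`all_cells` is the full set of walkable cells in the level; without it, cells nobody has ever visited would be invisible to the comparison.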

Identifying and Addressing Coverage Gaps

The heatmap’s primary value is making coverage gaps visible. A red or empty area on the heatmap is an area that no tester has visited, which means any bug in that area will be found by a player, not by your team. Review the heatmap before each QA cycle and assign testers specifically to the low-coverage areas.

Coverage gaps cluster in predictable places. Optional content (side quests, hidden rooms, optional boss fights) is almost always under-tested because testers focus on the critical path. Late-game content is under-tested because testers start from the beginning and run out of time before reaching the end. Edge-of-map areas are under-tested because there is no reason to go there unless you are specifically looking for boundary bugs. Settings menus, accessibility options, and rare dialogue branches are under-tested because they are not part of the gameplay loop.

“The heatmap does not tell you where the bugs are. It tells you where the bugs might be hiding that nobody has looked for yet. Those two things are very different, and the second one is more valuable.”

Beyond Spatial Coverage

Not all coverage is spatial. Extend the heatmap concept to other dimensions of your game. Which menu screens have been visited? Which settings combinations have been tested? Which dialogue branches have been triggered? Which items have been equipped, used, and discarded? Each of these is a dimension of coverage that can be visualized and tracked.

For feature coverage, build a matrix instead of a spatial heatmap. List every feature or screen on one axis and every QA session on the other. Color each cell by whether the tester visited that feature during that session. The features with the fewest colored cells are your coverage gaps. This is especially valuable for UI-heavy games where spatial heatmaps are less meaningful.
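The matrix itself is simple to build. A sketch, assuming per-session visit logs collected the same way as position logs (the helper names are hypothetical):

```python
def feature_matrix(features, session_visits):
    """Build a feature-by-session coverage matrix.
    session_visits maps session ID -> set of features visited."""
    sessions = sorted(session_visits)
    matrix = {f: [f in session_visits[s] for s in sessions] for f in features}
    return sessions, matrix

def least_covered(matrix, n=5):
    """Features visited in the fewest sessions -- the coverage gaps."""
    return sorted(matrix, key=lambda f: sum(matrix[f]))[:n]
```

Seeding `features` from a hand-maintained list (rather than from the logs) matters: a feature no session ever visited must still appear in the matrix, as a fully empty row.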

Combine spatial and feature coverage into a single dashboard that the QA lead reviews daily. The dashboard answers two questions: “which areas of the game have not been explored?” and “which features have not been exercised?” Together, they provide a complete picture of what your QA effort has and has not covered, turning testing from an art into a measurable process.
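The dashboard's two questions reduce to two filters over the data already collected. A sketch, with argument shapes assumed to match the grid and matrix structures described above:

```python
def coverage_summary(grid, all_cells, feature_rows):
    """Answer the two dashboard questions: which areas have not been
    explored, and which features have not been exercised.
    grid: cell -> unique session count; feature_rows: feature -> list
    of per-session visited flags."""
    unexplored = [c for c in all_cells if grid.get(c, 0) == 0]
    unexercised = [f for f, row in feature_rows.items() if not any(row)]
    return {"unexplored_cells": unexplored,
            "unexercised_features": unexercised}
```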

Related Issues

For using session replay data to investigate specific bugs, see how to use player logs to debug game issues. For building a broader QA process around coverage data, read how to build a QA checklist for game release.

If you cannot see where your testers have been, you cannot see where they have not. The heatmap makes the invisible visible.