Quick answer: At two-person scale, the QA stack that actually works is automated crash reporting plus a structured smoke test before every release candidate, playtests treated as QA sessions with debriefs, and Discord community members as a distributed QA layer. A dedicated QA hire isn’t the answer at this stage — but a freelance QA tester for major launches is.
Nobody starting a two-person game studio sits down and plans a QA process. QA happens in the gaps: one person plays through the level they just built while the other fixes something else, and if it doesn’t obviously break, it ships. This works until it doesn’t — usually in the form of a Steam review that says “crashed three times in chapter one” written by someone who bought your game on launch day. The good news is that QA at two-person scale doesn’t require a team or a budget. It requires a handful of deliberate practices applied consistently.
Automated Crash Reporting Covers What Manual QA Can’t
The most important QA tool a small studio can have isn’t a spreadsheet or a checklist or a testing framework. It’s a crash dashboard. Automated crash reporting captures what no amount of manual playtesting will find: the crash that only triggers on a specific GPU driver version that neither of you have, the crash that requires 40 minutes of continuous play to reach the right memory state, the crash that started happening at 2am after a platform update that changed something you didn’t know existed.
Players who encounter crashes almost never report them. Fewer than five percent of players who hit a crash will take the time to submit a bug report. A crash dashboard captures every crash automatically, with a full stack trace, system information, and the game state at the moment of failure. The first time you look at your crash dashboard the morning after launch and see that 80 players hit the same crash that you never encountered in testing, you understand immediately why this is the first tool to set up, not the last.
At two-person scale, the operational workflow is simple: check the crash dashboard at the start of every workday. It takes five minutes. Any new crash group that appeared overnight, any existing crash group that is trending upward, any crash group that appeared in the same build window as a recent patch — those get triaged immediately. Everything else stays in the queue to be addressed in the next regular bug session.
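The three triage rules from the daily check are simple enough to write down as code. A minimal sketch, assuming a hypothetical crash-group record shape (`first_seen`, `hits_last_24h`, `avg_daily_hits`) rather than any real dashboard API:

```python
from datetime import date, timedelta

def needs_triage(group: dict, today: date, last_patch: date) -> bool:
    """Apply the three morning-check rules to one crash group.

    `group` is an assumed shape for illustration, not a real SDK object:
      first_seen     -- date the crash group first appeared
      hits_last_24h  -- crashes recorded in the last day
      avg_daily_hits -- typical daily crash count for this group
    """
    # Rule 1: a new group that appeared overnight.
    appeared_overnight = group["first_seen"] >= today - timedelta(days=1)
    # Rule 2: an existing group trending sharply upward.
    trending_up = group["hits_last_24h"] > 2 * group["avg_daily_hits"]
    # Rule 3: a group that first appeared in the window since the last patch.
    patch_correlated = group["first_seen"] >= last_patch
    return appeared_overnight or trending_up or patch_correlated
```

Anything this returns `False` for stays in the queue for the next regular bug session; anything `True` gets looked at before other work starts.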
The tooling cost is low. Bugnet’s crash reporting integrates with Godot, Unity, and other common engines via an SDK in under an hour, and the dashboard requires no ongoing maintenance beyond checking it regularly.
The Weekly Bug Review: 30 Minutes, Not a Meeting
The words “bug review meeting” trigger a specific kind of dread in small studios. Meetings have agendas and facilitators and action items and follow-ups, and when there are only two of you, they feel like theater. The bug review that actually happens at two-person scale is not a meeting. It’s 30 minutes with a shared doc.
Once a week — Monday morning works well for most studios — one person opens the bug tracker and the crash dashboard while the other makes coffee. The crash dashboard gets checked first: anything new, anything spiking, anything that needs immediate attention. Then the bug queue: triage anything that came in from community reports during the past week, confirm priorities haven’t shifted, identify anything that should be fixed before the next release. Write down the three bugs that are getting fixed this week and the criteria for closing them.
The output of this review isn’t a meeting summary or a status report. It’s a short list of priorities that both people agree on. Disagreements about priority get resolved in real time because both people are in the same room. Everything else can wait until next week’s review.
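The "three bugs this week" selection can be made mechanical so the review stays short. A sketch under assumed conventions (a numeric `severity` where higher is worse, and an ISO-format `reported` date, so older reports win ties):

```python
def weekly_priorities(bugs: list[dict]) -> list[str]:
    """Pick the three bugs to fix this week from the open queue.

    Bug shape is illustrative: status ("open"/"closed"), severity
    (int, higher = worse), reported (ISO date string, so plain string
    comparison sorts chronologically).
    """
    open_bugs = [b for b in bugs if b["status"] == "open"]
    # Worst severity first; within a severity level, oldest report first.
    open_bugs.sort(key=lambda b: (-b["severity"], b["reported"]))
    return [b["title"] for b in open_bugs[:3]]
```

The list this produces is a starting point for the conversation, not a replacement for it; the point of the review is that both people agree on the final three.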
Every Playtest Is a QA Session
Two-person studios run playtests informally all the time. A friend visits, plays the game for an hour, gives feedback over dinner. A build gets sent to a few people who agreed to try it. Someone at a local game dev meetup plays a build on a laptop. These playtests generate valuable data that almost always gets lost because there’s no structured way to capture it.
The fix is a structured debrief that happens within 24 hours of every playtest — before the specific observations fade. The debrief doesn’t need to be long. Three questions, written down:
- What did they get stuck on or confused by?
- Did anything break or behave unexpectedly?
- What did they ask about that they couldn’t figure out themselves?
Answers to these questions become bug tickets or design notes immediately. The playtest observation “she couldn’t figure out how to open the map” is a UX bug that should be in the tracker, not a note that gets forgotten. “The audio cut out completely after about 20 minutes” is a bug that might be intermittent and hard to reproduce — but getting it into the tracker means you can check the crash dashboard for correlated reports and start looking for the pattern.
Ask playtesters to record their sessions if they’re comfortable with it. A simple screen recording with the microphone on captures their verbal reactions alongside their actions. Watching ten minutes of a confused player trying to figure out your inventory system is worth more than any written summary.
Discord Community as a Distributed QA Layer
If your game has any community at all — even a small Discord server with a few hundred members — you already have access to a distributed QA layer that most mid-size studios would envy. The question is whether you structure it to produce useful bug data or let it produce noise.
The structure that works: a dedicated #bug-reports channel with a pinned template that tells players exactly what information to include. Steps to reproduce. Operating system and hardware (even just “low-end laptop” vs “gaming PC” is useful). Game version. What they expected to happen vs what actually happened. A screenshot or video if they have one.
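One possible shape for that pinned template, covering the fields above (adapt the wording to your community's tone):

```text
**Bug report** (copy this, fill it in, post below)
Game version:
OS / hardware (even just "low-end laptop" vs "gaming PC"):
Steps to reproduce:
  1.
  2.
What you expected:
What actually happened:
Screenshot or video (if you have one):
```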
The incentive that makes players use the template: public acknowledgment. Reply to every report with at least a reaction emoji so players know you saw it. For players who submit consistently useful, well-structured reports, give them a special Discord role — “Bug Hunter” works well — that recognizes their contribution publicly. Early access to beta builds is a tangible reward that costs nothing and is enormously motivating for engaged community members.
The reports that come through a well-structured community channel are qualitatively different from Steam reviews or vague social media complaints. Players who are motivated to report bugs clearly will go out of their way to reproduce the bug, capture the steps, and provide the context you need. They become an extension of your QA capacity at zero marginal cost.
The 5-Test Smoke Test Before Every Release Candidate
Every release, no matter how small, should pass a smoke test before it goes live. The smoke test is not a comprehensive QA pass. It’s a fast check that catches the most common categories of release-breaking bugs — the ones that get through precisely because everyone assumed someone else had checked.
Write your smoke test as a literal checklist in a shared doc. Five tests is the right number at this scale — enough to catch the major failure modes, short enough that both people will actually run it before every release. A sample smoke test for a single-player narrative game:
- New game flow: Start a new game from the main menu. Confirm the opening sequence plays, the first scene loads, and control is handed to the player correctly.
- Save and load: Play for five minutes, save, quit the game, relaunch, and load the save. Confirm the player is in the right location with the right game state.
- Settings menu: Open settings, change audio volume and resolution, close settings. Confirm changes persist across a game restart.
- Most recently changed system: Whatever you touched in this build, play through the feature it affects for five minutes. This catches regressions in the exact area you just modified.
- Platform-specific check: If shipping to multiple platforms, verify the build launches and runs for five minutes on each target platform before uploading.
The specific tests should be adapted to your game. The principles are: cover the new game flow, cover save/load, cover the area you changed, and cover each target platform. Run this before every release candidate, not just major releases. A hotfix that breaks save loading is worse than the bug it was supposed to fix.
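Because the smoke test is a hard gate, it helps to record the results somewhere a release script can refuse to proceed without. A minimal sketch, using test names that mirror the sample checklist above (the results themselves still come from running each check by hand):

```python
# Names mirror the sample smoke test for a single-player narrative game;
# substitute your own checklist.
SMOKE_TESTS = [
    "New game flow",
    "Save and load",
    "Settings persist across restart",
    "Most recently changed system",
    "Platform-specific check",
]

def gate_release(results: list[bool]) -> tuple[bool, list[str]]:
    """Take one manually recorded pass/fail per smoke test.

    Returns (ok, failures) so whatever uploads the build can stop
    when ok is False and print exactly which checks failed.
    """
    failures = [name for name, passed in zip(SMOKE_TESTS, results) if not passed]
    return (len(failures) == 0, failures)
```

Even this much structure prevents the failure mode where a hotfix skips the checklist because "it's just one line."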
Freelance QA vs Community Testing: When to Pay
Community testing and playtests cover a lot of ground, but they miss the things that professional QA testers find: edge cases that require methodical exhaustive testing, platform-specific issues that require specific hardware configurations, and bugs that only appear when someone is deliberately trying to break the game rather than enjoying it.
The rule of thumb for two-person studios: hire a freelance QA tester before any event that can’t be reversed. A Steam Early Access launch, a major content update, a console certification submission. Budget 10 to 20 hours of freelance QA time and treat it as a line item alongside trailer production and store page writing. A professional tester will find bugs in two days that your community wouldn’t surface for two months.
For regular patch releases, community testing is sufficient as long as your crash dashboard is active and your smoke test passes. Reserve the freelance budget for launches where a broken release has lasting consequences.
“The two-person studio QA stack isn’t about process for process’s sake. It’s about catching the bugs that would otherwise cost you reviews you can’t recover from.”
The Tooling Baseline
The minimum viable QA toolset for a two-person studio is genuinely minimal: a crash dashboard and a shared spreadsheet. The crash dashboard covers automated stability monitoring. The spreadsheet holds the backlog of everything that doesn’t cause a crash — gameplay issues, UX problems, audio inconsistencies. When the list in the spreadsheet grows past 50 items, it’s time to move to a proper bug tracker with labels, priorities, and status tracking. That threshold comes sooner than most studios expect.
Don’t let the tooling decision become a reason to delay the process. A crash dashboard and a shared Google Sheet started today is infinitely more valuable than a perfectly configured bug tracker that gets set up after the launch and is never used. The process is what matters; the tool is just the container for it.
Two people can’t do QA like a team of ten — but with the right habits in place, they don’t need to.