Quick answer: CPU and GPU emitters can’t directly read each other’s particle buffers each frame. Use Niagara Event Handlers to bridge: GPU emitter writes Death/Collision events; CPU emitter consumes them. Or move the consumer to the GPU side and avoid the bridge entirely.
A GPU emitter spawns sparks. You want a CPU emitter to spawn one smoke puff per spark death. You drop a Sample Particles data interface in the CPU emitter pointing at the GPU emitter. Crash, or warning, or zero output. The two halves of Niagara aren’t designed to read each other directly.
The Symptom
Errors in the Niagara log: “Cannot read GPU buffer from CPU script.” Or warnings about a sim target mismatch. The CPU emitter produces no output despite Sample Particles being wired up. Sometimes the effect works in the editor preview but fails in PIE.
What Causes This
Particle data for a GPU emitter lives on the GPU. CPU scripts run on the CPU and have no synchronous access to GPU memory; reading would require a stall. Niagara’s solution is to disallow direct sampling and provide an event-based handoff.
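The stall is easy to see in a toy timing model (illustrative numbers only, not engine code): the CPU normally runs ahead of the GPU, and a synchronous read collapses that pipelining, serializing the two timelines.

```python
def simulate(frames, cpu_ms=2.0, gpu_ms=8.0, sync_read=False):
    """Toy CPU/GPU pipeline model. Returns CPU wall-clock time after `frames`."""
    cpu_t = 0.0      # CPU timeline (wall clock)
    gpu_free = 0.0   # time at which the GPU finishes its current work
    for _ in range(frames):
        cpu_t += cpu_ms                # CPU builds and submits commands
        start = max(cpu_t, gpu_free)   # GPU starts when submitted AND free
        gpu_free = start + gpu_ms      # GPU finishes this frame's work
        if sync_read:
            cpu_t = gpu_free           # synchronous read: CPU idles until GPU is done
    return cpu_t

pipelined = simulate(10)                  # CPU never waits: 10 * 2 ms
stalled = simulate(10, sync_read=True)    # every frame pays the full GPU time
```

With these assumed costs, ten pipelined frames cost the CPU 20 ms of its own timeline; forcing a synchronous read each frame balloons that to 100 ms. That is the stall Niagara refuses to pay, which is why direct sampling is disallowed.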
The Fix Patterns
Pattern 1: Move the consumer to the GPU. The easiest option. If both emitters can run on the GPU, do that. A GPU emitter sampling another GPU emitter works directly, because both particle buffers live on the same device.
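A minimal sketch of Pattern 1 in the same stack notation used below. Module and data interface names are approximate and vary by engine version; in recent Niagara versions the GPU-to-GPU path typically goes through a Particle Attribute Reader data interface:

```
SmokeEmitter (GPU):                       -- consumer moved to GPUComputeSim
  Particle Spawn:
    + Sample Particles from Other Emitter -- backed by a Particle Attribute
        Source: SparkEmitter             --   Reader bound to the source emitter
```

Because both emitters now share the same sim target, no event bridge or readback is needed.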
Pattern 2: Niagara Event Handlers. The proper bridge.
- On the source emitter (GPU), in the Particle Update stage, add a Generate Death Event module. Set the event payload (position, velocity).
- On the consumer emitter (CPU), add an Event Handler stage. Source = source emitter, event type = Death.
- The consumer’s spawn count = events received this frame. Use Receive Death Event modules to read the payload (position, velocity).
SparkEmitter (GPU):
  Particle Update:
    + Generate Death Event
        Payload: Position, Velocity

SmokeEmitter (CPU):
  Event Handler Properties:
    Source: SparkEmitter
    Event Type: Death
    Spawn Behavior: Spawn one per event
  Particle Spawn:
    + Receive Death Event (Position) -> Set Particle Position
This is the canonical “death produces secondary effect” pattern.
Pattern 3: GPU readback for occasional samples. If you absolutely need the data on the CPU and can tolerate a few frames of latency, the Niagara GPU readback path copies a snapshot of the particle buffer to the CPU asynchronously. Use it sparingly; the latency is real and the cost is non-trivial.
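The latency Pattern 3 trades away can be modeled like this (a conceptual sketch, not Niagara API; the 2-frame delay is an assumption for illustration, not an engine constant):

```python
from collections import deque

READBACK_DELAY_FRAMES = 2  # assumed pipeline depth, for illustration only

class AsyncReadbackModel:
    """Models a snapshot requested on frame N becoming visible on frame N + delay."""
    def __init__(self, delay=READBACK_DELAY_FRAMES):
        self.in_flight = deque()  # (frame_when_ready, snapshot)
        self.delay = delay
        self.frame = 0

    def tick(self, gpu_particle_count):
        """Advance one frame: issue a new readback, collect any that completed."""
        self.in_flight.append((self.frame + self.delay, gpu_particle_count))
        ready = None
        while self.in_flight and self.in_flight[0][0] <= self.frame:
            _, ready = self.in_flight.popleft()  # newest completed snapshot wins
        self.frame += 1
        return ready  # None until the first snapshot completes

model = AsyncReadbackModel()
samples = [model.tick(count) for count in (10, 20, 30, 40)]
# -> [None, None, 10, 20]: nothing for the first `delay` frames,
#    then data that is always `delay` frames stale.
```

Any CPU logic consuming the readback has to tolerate both the warm-up gap and the permanent staleness, which is why this pattern suits occasional sampling rather than per-frame gameplay decisions.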
Sim Target on the Emitter
Open the emitter's Properties. Sim Target = CPUSim or GPUComputeSim. Decide once per emitter; switching mid-development means re-validating every module, since not all modules are GPU-compatible.
Diagnosing
Niagara Editor → emitter properties → check Sim Target. If a CPU emitter references a GPU emitter directly, the Sample Particles module shows red. The fix is the event-handler refactor or the sim-target swap.
Console: fx.Niagara.Debug.GpuReadbackEnabled 1 to inspect read-back operations and their cost.
“Same sim target where possible. Event handlers across the boundary. Read-back only when latency is OK.”
Related Issues
For Niagara bounds culling, see Niagara bounds. For Niagara not rendering, see Niagara not rendering.
Events bridge GPU and CPU. Sim target stays consistent. Crashes go away.