Quick answer: Ensure the camera targeting the RenderTexture is enabled and its Target Texture field is assigned, create the RenderTexture with non-zero dimensions, and match the texture’s color format and depth buffer settings to what the camera’s HDR and MSAA settings require. For one-shot renders, call camera.Render() after setting camera.targetTexture.
A RenderTexture that shows a static frame, a black rectangle, or the last frame it rendered before you disabled something — these are all the same underlying problem: the camera that owns the texture has stopped writing to it, or it never started. RenderTextures are deceptively simple to set up but have a handful of configuration details that are easy to get wrong, and Unity gives no runtime warnings when any of them are misconfigured. This guide covers every failure mode and the exact fix for each one.
The Symptom
The most common presentation is a RawImage or a material that uses a RenderTexture and shows a black image, a static frozen frame, or pink/magenta (Unity’s shader-error color). The RenderTexture asset itself may show a valid preview in the Project window from the last time it was written to, which makes it look like the texture exists and is valid. It does exist; it is just not being updated.
A related symptom is a RenderTexture that works in the Editor but shows black in a build, which usually indicates an HDR or color space mismatch between the texture format and the camera settings.
What Causes This
Camera is disabled. A camera only renders while it is active and enabled. If you disabled the security camera, minimap camera, or portrait camera to save performance, its RenderTexture stops receiving new frames and simply holds the last frame it received. Either re-render at a reduced rate from a coroutine (covered under "The Fix" below), or call Camera.Render() for manual control.
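A disabled camera can still be driven by hand. A minimal sketch, assuming securityCamera is a Camera reference you already hold:

securityCamera.enabled = false; // securityCamera: your own reference; stops automatic per-frame rendering
securityCamera.Render();        // writes one fresh frame into its Target Texture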
Target Texture not set. The camera’s Target Texture field in the Inspector must point to your specific RenderTexture asset. If it reads None, the camera renders to the screen (or nothing, if it has a lower depth than the main camera). This is easy to miss when the camera was created by duplicating the main camera — the Target Texture field starts empty.
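A cheap startup guard catches the empty field early. A sketch, assuming portraitCamera and portraitRT are fields you assign in the Inspector:

void Awake()
{
    // portraitCamera / portraitRT: hypothetical serialized fields
    if (portraitCamera.targetTexture == null)
    {
        Debug.LogWarning($"{portraitCamera.name} has no Target Texture; assigning portraitRT.");
        portraitCamera.targetTexture = portraitRT;
    }
}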
RenderTexture created with zero dimensions. Creating a RenderTexture in code with zero width or height produces an invalid texture that cannot be rendered to. Unity will not throw an error, but the texture will never display anything:
// Wrong — creates an invalid zero-size texture
int width = (int)someRect.rect.width; // RectTransform not yet laid out; returns 0
RenderTexture rt = new RenderTexture(width, width, 24);
renderCamera.targetTexture = rt;
// Correct — use explicit dimensions or read layout in OnEnable
[SerializeField] private Camera renderCamera;
[SerializeField] private RawImage displayImage; // requires UnityEngine.UI

void Start()
{
    int w = 512, h = 512;
    RenderTexture rt = new RenderTexture(w, h, 24,
        RenderTextureFormat.ARGB32);
    rt.antiAliasing = 1; // no MSAA; must be set before Create()
    rt.Create(); // allocate GPU memory immediately
    renderCamera.targetTexture = rt;
    displayImage.texture = rt;
}
Format incompatible with camera HDR settings. If the camera has Allow HDR enabled, it tries to write 16-bit floating-point color values. A RenderTexture created with RenderTextureFormat.ARGB32 (8 bits per channel) cannot represent HDR values; the output will be clamped and may appear washed out or incorrect. Either disable HDR on the camera or use a format that supports it:
// Match format to camera HDR setting
RenderTextureFormat format = renderCamera.allowHDR
    ? RenderTextureFormat.DefaultHDR // ARGBHalf on most platforms
    : RenderTextureFormat.ARGB32;
RenderTexture hdrRT = new RenderTexture(1024, 1024, 24, format);
hdrRT.Create();
renderCamera.targetTexture = hdrRT;
The Fix: Continuous Camera Rendering
For a live feed — a minimap, security monitor, or character portrait that updates every frame — the correct setup is:
- Create a dedicated camera with a Camera Depth lower than the main camera (e.g. −1).
- Assign your RenderTexture to its Target Texture field.
- Set Clear Flags to Solid Color or Skybox as appropriate.
- Keep the camera enabled. It renders into the RenderTexture every frame automatically.
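In code, the same checklist looks roughly like this; minimapCamera and minimapRT are illustrative names for your own references:

void ConfigureMinimapCamera()
{
    // minimapCamera / minimapRT: hypothetical fields assigned elsewhere
    minimapCamera.depth = Camera.main.depth - 1; // render before the main camera
    minimapCamera.targetTexture = minimapRT;
    minimapCamera.clearFlags = CameraClearFlags.SolidColor;
    minimapCamera.backgroundColor = Color.black;
    minimapCamera.enabled = true; // renders into minimapRT every frame
}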
To reduce GPU cost on lower-end hardware, disable the camera and drive it from a coroutine at a fixed interval:
using System.Collections;
using UnityEngine;

public class ThrottledRenderCamera : MonoBehaviour
{
    [SerializeField] private Camera renderCamera;
    [SerializeField] private float updatesPerSecond = 10f;

    private RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(512, 512, 24, RenderTextureFormat.ARGB32);
        rt.Create();
        renderCamera.targetTexture = rt;
        renderCamera.enabled = false; // manual control
        StartCoroutine(RenderLoop());
    }

    IEnumerator RenderLoop()
    {
        var wait = new WaitForSeconds(1f / updatesPerSecond);
        while (true)
        {
            renderCamera.Render(); // one-shot manual render
            yield return wait;
        }
    }

    void OnDestroy()
    {
        if (rt != null) rt.Release();
    }
}
Manual Rendering and RenderTexture.active
When reading pixel data back from a RenderTexture (for screenshots, thumbnail generation, or image processing), you must set RenderTexture.active before calling Texture2D.ReadPixels(). Without this, ReadPixels reads from the currently active render target, which is usually the screen:
public Texture2D CaptureToTexture2D(Camera cam, int width, int height)
{
    RenderTexture rt = new RenderTexture(width, height, 24);
    cam.targetTexture = rt;
    cam.Render(); // render into rt

    // Must set RenderTexture.active before ReadPixels
    RenderTexture previousActive = RenderTexture.active;
    RenderTexture.active = rt;

    Texture2D result = new Texture2D(width, height, TextureFormat.ARGB32, false);
    result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    result.Apply();

    // Restore previous state
    RenderTexture.active = previousActive;
    cam.targetTexture = null;
    rt.Release();
    Object.Destroy(rt); // free the C# object, not just the GPU allocation

    return result;
}
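A hypothetical call site, capturing a 256×256 character portrait and writing it to disk (EncodeToPNG lives in the built-in ImageConversion module):

// portraitCamera: hypothetical reference to your portrait camera
Texture2D portrait = CaptureToTexture2D(portraitCamera, 256, 256);
System.IO.File.WriteAllBytes(
    System.IO.Path.Combine(Application.persistentDataPath, "portrait.png"),
    portrait.EncodeToPNG());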
“Always call rt.Release() when you are done with a dynamically created RenderTexture. Unity does not garbage-collect GPU memory — leaked RenderTextures accumulate across scene loads and will eventually exhaust VRAM on mobile devices.”
Using Graphics.Blit for Post-Processing
Graphics.Blit() renders a source texture into a destination RenderTexture, optionally through a material that acts as a shader pass. It is the correct approach for post-processing effects and image filters that operate on an existing render:
using UnityEngine;

public class GreyscalePostProcess : MonoBehaviour
{
    [SerializeField] private Material greyscaleMaterial;

    private RenderTexture buffer;

    void OnEnable()
    {
        buffer = new RenderTexture(Screen.width, Screen.height, 0);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Built-in pipeline: blit through the greyscale shader into the
        // intermediate buffer, then copy to the destination. A single-pass
        // effect could blit src -> dest directly and skip the buffer.
        Graphics.Blit(src, buffer, greyscaleMaterial);
        Graphics.Blit(buffer, dest);
    }

    void OnDisable()
    {
        if (buffer != null) { buffer.Release(); buffer = null; }
    }
}
In URP and HDRP, OnRenderImage is never called — use a ScriptableRendererFeature with a custom ScriptableRenderPass (RTHandle-based in recent URP versions) instead. A Graphics.Blit placed in OnRenderImage under URP silently never runs, so you get no output and no error.
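For orientation, here is a minimal renderer-feature sketch written against the URP 14 RTHandle/Blitter API. The API shifts between URP versions (URP 17 moves to the render graph), so treat this as a starting point rather than drop-in code; GreyscaleFeature and _GreyscaleTemp are illustrative names:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Illustrative URP 14.x feature; not from this article's built-in-pipeline example
public class GreyscaleFeature : ScriptableRendererFeature
{
    class GreyscalePass : ScriptableRenderPass
    {
        readonly Material material;
        RTHandle tempColor;

        public GreyscalePass(Material material)
        {
            this.material = material;
            renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
        }

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            var desc = renderingData.cameraData.cameraTargetDescriptor;
            desc.depthBufferBits = 0; // color-only intermediate target
            RenderingUtils.ReAllocateIfNeeded(ref tempColor, desc, name: "_GreyscaleTemp");
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("Greyscale");
            RTHandle cameraColor = renderingData.cameraData.renderer.cameraColorTargetHandle;
            // Blit through the effect material, then copy back to the camera target
            Blitter.BlitCameraTexture(cmd, cameraColor, tempColor, material, 0);
            Blitter.BlitCameraTexture(cmd, tempColor, cameraColor);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }

        public void Dispose() => tempColor?.Release();
    }

    public Material greyscaleMaterial;
    private GreyscalePass pass;

    public override void Create() => pass = new GreyscalePass(greyscaleMaterial);

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (greyscaleMaterial != null)
            renderer.EnqueuePass(pass);
    }

    protected override void Dispose(bool disposing) => pass?.Dispose();
}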
Related Issues
If the RenderTexture updates correctly but appears gamma-incorrect (too bright or too dark) compared to the main camera view, check Project Settings > Player > Color Space. A mismatch between Linear color space and a RenderTexture format that does not support sRGB conversion will produce incorrect colors. Set the RenderTexture’s sRGB (Color Texture) checkbox to match your project’s color space. In Linear projects, most RenderTextures should have sRGB enabled unless they store non-color data like normals or depth.
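When you create the texture in code, the sRGB checkbox corresponds to the RenderTextureReadWrite argument of the constructor. A sketch:

// Linear color-space project: color targets read/write as sRGB,
// non-color data (normals, masks) stays linear
RenderTexture colorRT = new RenderTexture(
    512, 512, 24, RenderTextureFormat.ARGB32, RenderTextureReadWrite.sRGB);
RenderTexture normalsRT = new RenderTexture(
    512, 512, 0, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear);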
RenderTexture failures on specific GPU vendors (Intel integrated graphics silently dropping HDR formats, ARM Mali's clamping of zero-dimension textures) are exactly the kind of device-specific bug that only a crash reporter with device context can surface reliably.