neural renderer

Leveraging StreamDiffusion for realtime AI inference from Unity

PROJECT PLANNING

Project planning board (Figma)

CONVERGENCE

Combining StreamDiffusion with point clouds

LATENT SPACE

The result was a captivating exploration of latent space, in which each prior generation was fed back as the input for the next frame.

No input environment was needed; neural renderer was able to generate one itself.
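As a rough illustration of this feedback loop, here is a minimal Python sketch. It uses the diffusers img2img pipeline as a stand-in for StreamDiffusion; the model ID, prompt, strength, and step counts are illustrative assumptions, not the values used in neural renderer.

```python
# Minimal latent-feedback sketch: each generated frame becomes the input
# for the next one, starting from a blank frame (no input environment).
# diffusers' img2img pipeline stands in for StreamDiffusion here.
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16  # illustrative model
).to("cuda")

# Start from a flat grey frame: the loop invents its own environment.
frame = Image.new("RGB", (512, 512), color=(127, 127, 127))

for i in range(120):  # ~120 frames of latent-space exploration
    frame = pipe(
        prompt="an endless procedural landscape, volumetric light",
        image=frame,
        strength=0.5,           # how far each frame drifts from the last
        num_inference_steps=2,  # turbo-style models need very few steps
        guidance_scale=0.0,
    ).images[0]
    frame.save(f"frame_{i:04d}.png")
```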

Reintroducing depth

Techniques for extracting depth from an image are commonplace in the AI space, but none is as pertinent to realtime work as MiDaS, which I was able to call via a RenderPass.
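For context, the snippet below is a hedged sketch of MiDaS depth estimation on the Python side, following the intel-isl/MiDaS torch.hub example. The Unity RenderPass integration itself is not shown, and the model variant and file paths are placeholders.

```python
# Minimal MiDaS depth-estimation sketch (Python side only).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to("cuda").eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

img = cv2.cvtColor(cv2.imread("frame_0000.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img).to("cuda"))
    # Resize the relative inverse-depth map back to the input resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# Normalize to 8-bit for saving/inspection.
depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_0000.png", depth_u8)
```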

img2depth

By combining the color output from StreamDiffusion with depth from MiDaS, neural renderer was able to construct fully explorable worlds...
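To show how the two outputs can be fused, here is a sketch that back-projects a color frame and its depth map into a point cloud using a simple pinhole camera model. The focal length and the mapping from MiDaS's relative inverse depth to metric depth are assumptions for illustration; neural renderer's actual reconstruction may differ.

```python
# Hedged sketch: back-project color + depth into a point cloud.
import numpy as np
import cv2

color = cv2.cvtColor(cv2.imread("frame_0000.png"), cv2.COLOR_BGR2RGB)
inv_depth = cv2.imread("depth_0000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

h, w = inv_depth.shape
fx = fy = 0.8 * w                  # assumed focal length in pixels
cx, cy = w / 2.0, h / 2.0

# MiDaS outputs relative inverse depth; map it to a plausible depth range.
depth = 1.0 / np.clip(inv_depth / 255.0, 0.05, 1.0)

u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
colors = color.reshape(-1, 3)

np.save("pointcloud_xyz.npy", points)  # e.g. streamed into Unity as a point cloud
np.save("pointcloud_rgb.npy", colors)
```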

In conclusion,

Improvements to this system will determine both its potential and whether it sees widespread adoption.

With temporal consistency introduced through motion LoRAs, realtime img2txt generating recursive prompts for the renderer to adhere to, and ongoing developments in NeRF (neural radiance field) and Gaussian splatting technology, the potential of realtime neural rendering is evident.
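As one example of the recursive-prompt idea, the sketch below captions the latest frame with an img2txt model (BLIP, chosen purely for illustration) and returns the caption for use as the next diffusion prompt; the model choice and token limit are assumptions.

```python
# Hedged sketch of recursive prompting: caption the current frame and feed
# the caption back in as the next diffusion prompt.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to("cuda")

def frame_to_prompt(frame: Image.Image) -> str:
    """Caption the current frame so the caption can drive the next frame."""
    inputs = processor(frame, return_tensors="pt").to("cuda")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# e.g. prompt = frame_to_prompt(frame) inside the feedback loop above
```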

It is my personal belief that, in the near future, traditional VFX & realtime rendering pipelines alike will offload more and more processes to neural rendering passes.
