neural rendering
a deep dive into implementing AI image generation in realtime game engines
-
First Steps . . .
-
Introduction: Why AI?
/imagine the possibilities of realtime ai image generation
-
NVIDIA GauGAN
In 2019, NVIDIA announced GauGAN, an img2img model that turns a user-painted segmentation map into photorealistic landscape imagery in realtime
-
Unity Barracuda
In 2020, Unity released its first approach to neural rendering in the form of Barracuda, with support for ONNX StyleGAN models
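Barracuda itself is driven from C# inside the engine, but the core idea it enables, loading an exported ONNX generator, feeding it a latent vector, and reading back an image, can be sketched outside Unity. Below is a rough stand-in using onnxruntime in Python; the model file, tensor shapes, and output layout are assumptions for illustration, not Barracuda's API:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical StyleGAN generator exported to ONNX; the filename and
# tensor layout below are assumptions for this sketch.
session = ort.InferenceSession(
    "stylegan_generator.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name        # typically a (1, 512) latent vector
z = np.random.randn(1, 512).astype(np.float32)

outputs = session.run(None, {input_name: z})     # run the generator
image = outputs[0][0]                            # assume NCHW float output in roughly [-1, 1]
image = ((image.transpose(1, 2, 0) + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
print(image.shape)                               # e.g. (1024, 1024, 3)
```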
-
-
Experimentation
-
Pointclouds with VFX Graph
Leveraging Unity’s Scriptable Render Pipeline, I was able to render a virtual environment as a pointcloud using VFX Graph
-
Inspiration at Siggraph '23
Inspired by Weidi Zhang & Rodger Luo’s art installation Re:Collection, I set out to finish what I had started years prior
-
StreamDiffusion
Using its implementations for ComfyUI and TouchDesigner, I discovered the blazing speed StreamDiffusion brings to the table
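For reference, a minimal img2img loop loosely following the public StreamDiffusion examples; the checkpoint, prompt, input image, and t_index_list values here are illustrative, not the exact settings used in this project:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline
from diffusers.utils import load_image
from streamdiffusion import StreamDiffusion
from streamdiffusion.image_utils import postprocess_image

# Any SD 1.5-class checkpoint should work; this one is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(
    device=torch.device("cuda"), dtype=torch.float16
)

# Wrap the pipeline; t_index_list picks the few denoising steps that actually
# run per frame, which is where most of the speedup comes from.
stream = StreamDiffusion(pipe, t_index_list=[32, 45], torch_dtype=torch.float16)
stream.load_lcm_lora()   # merge LCM-LoRA for few-step sampling
stream.fuse_lora()
stream.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(
    device=pipe.device, dtype=pipe.dtype
)

stream.prepare(prompt="a misty forest at dawn, volumetric light")

# Stand-in input frame; in practice this would be a captured viewport image.
init_image = load_image("viewport_frame.png").resize((512, 512))

# Warm up the internal frame buffer, then feed frames as fast as they arrive.
for _ in range(2):
    stream(init_image)

while True:
    x_output = stream(init_image)
    frame = postprocess_image(x_output, output_type="pil")[0]
```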
-
-
And Now . . .
-
Realtime Diffusion in Unity
Mission Accomplished: realtime img2img generation using Unity’s frame buffer as input
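The exact Unity-to-Python transport isn't detailed here, so the bridge below is a purely hypothetical sketch: it assumes Unity blits its frame buffer out as raw 512x512 RGB frames over a local TCP socket and reads the diffused frames back, and it reuses the `stream` object and `postprocess_image` helper from the StreamDiffusion sketch above:

```python
import socket

import numpy as np
from PIL import Image

# Hypothetical frame format and port; the real transport is not specified.
WIDTH, HEIGHT, FRAME_BYTES = 512, 512, 512 * 512 * 3

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9500))
server.listen(1)
conn, _ = server.accept()

def read_frame(sock):
    """Read exactly one raw RGB24 frame from the socket."""
    buf = bytearray()
    while len(buf) < FRAME_BYTES:
        chunk = sock.recv(FRAME_BYTES - len(buf))
        if not chunk:
            return None
        buf.extend(chunk)
    return Image.fromarray(
        np.frombuffer(bytes(buf), dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
    )

while True:
    frame = read_frame(conn)
    if frame is None:
        break
    x_output = stream(frame)                                  # StreamDiffusion img2img on the engine frame
    result = postprocess_image(x_output, output_type="pil")[0]
    conn.sendall(np.asarray(result, dtype=np.uint8).tobytes())  # return the diffused frame to Unity
```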
-
Neural Pointclouds + Depth
Combining the Unity + StreamDiffusion pipeline with my prior pointcloud renderer, using MiDaS for realtime depth estimation
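As a sketch of the depth side, MiDaS can be pulled from torch.hub and run per frame; the MiDaS_small variant and the webcam capture standing in for streamed Unity frames are assumptions here:

```python
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# MiDaS_small trades accuracy for speed, which suits realtime use.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

cap = cv2.VideoCapture(0)  # stand-in for frames streamed out of the engine
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    with torch.no_grad():
        batch = transform(rgb).to(device)
        prediction = midas(batch)
        # Resize the inverse-depth map back to the input resolution
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=rgb.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze().cpu().numpy()

    # Normalized here for display; a pointcloud renderer would instead
    # unproject each pixel using this depth plus the camera intrinsics.
    depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imshow("depth", depth_vis)
    if cv2.waitKey(1) == 27:
        break
```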
-
What's Next?
A word from like-minded futurists pioneering realtime neural rendering for runtime applications
-