Inspiration

In the years following my first attempt at a neural renderer, the technology had become better and, more relevantly, faster.

SDXL-Turbo and SDXL-Lightning could generate usable imagery in 1-2 denoising steps, and ComfyUI made implementing custom workflows a breeze.
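To give a sense of what "usable imagery in 1-2 steps" looks like in practice, here's a minimal text-to-image sketch along the lines of the SDXL-Turbo model card for diffusers; the prompt and output path are placeholders of mine:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SDXL-Turbo in half precision (assumes a CUDA GPU)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Turbo models are distilled for few-step sampling: a single denoising step,
# with classifier-free guidance disabled (guidance_scale=0.0)
image = pipe(
    prompt="a neon-lit alley in the rain, cinematic",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]

image.save("frame.png")
```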

StreamDiffusion could crank out a whopping 90 AI-generated frames per second on an RTX 4090, and was already being used by artists in TouchDesigner.

It was time to take another shot at my neural renderer.

TouchDesigner -- StreamDiffusion -- ComfyUI vs. External

I began by implementing SDXL-Turbo in ComfyUI, eventually switching to StreamDiffusion via this Diffusers repository.
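The core StreamDiffusion loop, condensed from that project's README, looks roughly like this; the model, prompt, and timestep indices below are the README's examples, not my exact settings:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline
from diffusers.utils import load_image
from streamdiffusion import StreamDiffusion
from streamdiffusion.image_utils import postprocess_image

# Any diffusers pipeline can be wrapped; the README uses an SD 1.5 model
pipe = StableDiffusionPipeline.from_pretrained("KBlueLeaf/kohaku-v2.1").to(
    device=torch.device("cuda"), dtype=torch.float16
)

# Wrap the pipeline; t_index_list picks which denoising steps actually run
stream = StreamDiffusion(pipe, t_index_list=[32, 45], torch_dtype=torch.float16)

# Merge the LCM LoRA and swap in a tiny VAE for speed
stream.load_lcm_lora()
stream.fuse_lora()
stream.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(
    device=pipe.device, dtype=pipe.dtype
)

stream.prepare("1girl with dog hair, thick frame glasses")
init_image = load_image("input.png").resize((512, 512))

# Warm up the batched-denoising buffer, then generate continuously
for _ in range(2):
    stream(init_image)

while True:
    x_output = stream(init_image)
    image = postprocess_image(x_output, output_type="pil")[0]
```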

However, a combination of performance overhead and streaming latency rendered the ComfyUI solution too costly.

Then I began implementing StreamDiffusionTD by @dotsimulate, a TouchDesigner integration of StreamDiffusion.

StreamDiffusion's performance metrics proved extraordinarily promising, but I was intent on eliminating the overhead incurred by either ComfyUI or TouchDesigner, so I opted to run the Stable Diffusion server in a headless environment and use KlakSpout and KlakNDI, repositories by Keijiro, to replicate StreamDiffusionTD's GPU texture-streaming functionality in Unity.
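The sending half of that pipeline can be sketched on the Python side with the SpoutGL bindings. This is an illustrative loop under my assumptions rather than my exact server: generate_frame is a stand-in for the StreamDiffusion step, the sender name is hypothetical, and Spout itself is Windows-only, with KlakSpout receiving the shared texture inside Unity:

```python
import time
import numpy as np
import SpoutGL
from OpenGL import GL

SENDER_NAME = "sd-server"  # hypothetical; the KlakSpout receiver selects this name in Unity
WIDTH, HEIGHT = 512, 512

def generate_frame() -> np.ndarray:
    """Stand-in for the diffusion step; returns an RGBA uint8 frame."""
    return np.random.randint(0, 255, (HEIGHT, WIDTH, 4), dtype=np.uint8)

with SpoutGL.SpoutSender() as sender:
    sender.setSenderName(SENDER_NAME)
    while True:
        frame = generate_frame()
        # Publish the frame as a shared GPU texture that KlakSpout can pick up
        sender.sendImage(frame.tobytes(), WIDTH, HEIGHT, GL.GL_RGBA, False, 0)
        sender.setFrameSync(SENDER_NAME)  # signal that a new frame is ready
        time.sleep(1 / 60)
```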