Pythius makes Drum & Bass and is a friend of mine. So when he told me he was doing a new track and I could make the visuals I didn't think twice about it!
The video was made using custom software and lots of programming! Generally 3D videos are made with programs that calculate the result offline, which can take minutes or even hours. This means that every time you change something you have to wait to see whether the result is more to your liking. With the custom software that we made, everything is instantly updated and we are looking at the end result at all times. This makes tweaking anything, from colors and shapes to animation, a breeze. It allows for as much iteration as we want and turns the video creation process into an interactive playground.
The technique we use generates all the visuals with code: there are very few images and no 3D model files. Everything you see on the screen is visualized through maths. As a side effect of not using big 3D model files, the code that can generate the entire video is incredibly small: about 300 kilobytes, 10 thousand times smaller than the video file it produced!
The technologies used are Python (software), Qt (interface) and OpenGL (visual effects). The rendering uses Enhanced Sphere Tracing and physically based shading.
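The core idea behind sphere tracing fits in a handful of lines. Below is a minimal sketch in Python of plain sphere tracing (the real renderer does this per pixel in a GLSL fragment shader, and the "Enhanced" variant adds refinements such as over-relaxed stepping); the sphere distance function and all names here are just for illustration, not the actual demo code:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance from point p to a sphere: the scene is "modelled" by maths.
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along the ray, stepping by the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d        # safe step: we cannot skip past any surface
        if t > max_dist:
            break
    return None       # miss

# A ray straight down +z from the origin hits the sphere at t = 4.
hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Because the step size is itself the distance to the scene, the march converges quickly near surfaces and takes big strides through empty space.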
The rendering techniques themselves were fairly basic: very old school Phong & Lambert shading and 2 blur passes for bloom, so all in all pretty low tech and not worth discussing. What I would like to discuss is the evolution of the tool. I'll keep it high level this time though. Maybe in the future I can talk about specific implementations of things, but just seeing the UI will probably explain a lot of the features and the way things work.
Our initial idea was to leverage existing software. One of our team members, who managed the team besides modelling and eventually directing the whole creative result, had some experience with a real-time node based tool called Touch Designer. It is a tool for doing real-time visuals, and it supports exactly what we needed: rendering into a 2D texture with a fragment shader.
We wanted to have the same rendering code for all scenes, and just fill in the modeling and material code that is unique per scene. We figured out how to concatenate separate pieces of text and draw them into a buffer, multiple buffers even. At some point I packed all the code and rendering logic of a pass into 1 grouped node, and we could design our render pipeline entirely node based.
Here you see the text snippets (1) merged into some buffers (2) and then post-processed for the bloom (3). On the right (4) you see the first problem we hit with Touch Designer: the compiler error log is drawn inside this node, and there is basically no easy way to have that error visible somewhere in the main application. So the first iteration of the renderer (and coincidentally the main character of Eidolon) looked something like this:
The renderer didn't really change after this.
In case I sound too negative about Touch Designer in the next few paragraphs: our use case was rather special, so take this with a grain of salt!
We added a timeline control, borrowing the UI design a little from Maya, so this became the main preview window. That's when we hit some problems though. The software has no concept of window focus, so it would constantly suffer from hanging keys, or respond to keys while you were typing in the text editor.
The last issue really killed it though: everything has to be in 1 binary file. There is no native way to reference external text files for the shader code, or to merge node graphs. There is a really weird utility that expands the binary to ASCII, but then literally every single node is a text file, so it is just unmergeable.
So then this happened:
Over a week's time in the evenings, and then 1 long Saturday, I whipped this up using PyQt and PyOpenGL. This is the first screenshot I made; the curve editor isn't actually an editor yet and there is no concept of camera shots (which we use to get hard cuts).
It has all the same concepts however: separate text files for the shader code, with an XML file determining which render passes use which files, which buffer they render into and which buffers they reference in turn. With the added advantage of perfect granularity, all stored in ASCII files.
Some files are template-level, some are scene-level, so creating a new scene only copies the scene-level files, which can then be adjusted in a text editor, with a file watcher updating the picture. The CurveEditor feeds right back into the uniforms of the shader (by name), and the time slider at the bottom works the same way as in Maya / what you saw before.
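The file watcher is a small but important piece: it is what makes any plain text editor feel like part of the tool. Qt has file watching built in; here is a stdlib-only polling sketch of the same idea (the class and callback names are mine, not the tool's actual code):

```python
import os

class FileWatcher:
    """Poll file modification times and fire a callback when one changes.
    A stdlib sketch of the idea; the real tool can lean on Qt for this."""

    def __init__(self, paths, on_change):
        self.on_change = on_change
        self.mtimes = {p: os.path.getmtime(p) for p in paths}

    def poll(self):
        # Call this once per frame or on a timer.
        for path, old in self.mtimes.items():
            new = os.path.getmtime(path)
            if new != old:
                self.mtimes[path] = new
                self.on_change(path)  # e.g. recompile the shader and redraw
```

Saving a shader file in any editor then triggers a recompile, and the picture updates in place.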
The concept was to set up a master render pipeline into which scenes would inject snippets of code. On disk this became a bunch of snippets, and an XML based template definition. This would be the most basic XML file:
<template>
    <pass buffer="0" outputs="1">
        <global path="header.glsl"/>
        <section path="scene.glsl"/>
        <global path="pass.glsl"/>
    </pass>
    <pass input0="0">
        <global path="present.glsl"/>
    </pass>
</template>
This concatenates 3 files into 1 fragment shader, renders into full-screen buffer "0" and then uses present.glsl as another fragment shader, which in turn has the previous buffer "0" as input (forwarded to a sampler2D uniform).
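A loader for such a template can be tiny. Here is a sketch of how one might parse the template and concatenate the snippets per pass; the element and attribute names come from the example above, but the loader itself and its API are my own invention, not the tool's actual code:

```python
import xml.etree.ElementTree as ET

def load_passes(template_xml, read_file):
    """Turn a <template> into a list of render passes, each with the
    concatenated fragment shader source and its buffer attributes."""
    passes = []
    for pass_node in ET.fromstring(template_xml).iter('pass'):
        # <global> snippets are template-level, <section> snippets scene-level;
        # for concatenation they are treated the same, in document order.
        source = '\n'.join(read_file(node.get('path')) for node in pass_node)
        passes.append({'source': source, 'attrs': dict(pass_node.attrib)})
    return passes

# Stand-in snippet contents, just to show the shape of the result.
snippets = {'header.glsl': '// header', 'scene.glsl': '// scene',
            'pass.glsl': '// pass', 'present.glsl': '// present'}
template = """<template>
  <pass buffer="0" outputs="1">
    <global path="header.glsl"/>
    <section path="scene.glsl"/>
    <global path="pass.glsl"/>
  </pass>
  <pass input0="0">
    <global path="present.glsl"/>
  </pass>
</template>"""
passes = load_passes(template, snippets.__getitem__)
```

The attributes (buffer targets, inputs, sizes) then drive the OpenGL side: which framebuffer each pass renders into and which textures get bound to its samplers.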
This branched out into making static buffers (textures), setting buffer sizes (smaller textures), multiple target buffers (rendering the main and reflection pass at once), setting a buffer's size to a portion of the screen (downsampling for bloom) and 3D texture support (volumetric noise textures for clouds).
Creating a new scene just copies "scene.glsl" from the template to a new folder; there you can then fill out the necessary function(s) to get a unique scene. Here's an example from our latest Evoke demo: 6 scenes, under which you see the "section" files for each scene.
The second important thing I wanted to tackle was camera control. The demo controls the camera based on animation data, but it is nice to fly around freely and even use the current camera position as an animation keyframe. So this was just a matter of using Qt's event system to hook the mouse and keyboard up to the viewport.
I also created a little widget that displays where the camera is, has an "animation input or user input" toggle as well as a "snap to current animation frame" button.
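That toggle boils down to picking one of two camera sources each frame. A rough sketch of how such a rig could look (the class, method names and the tuple layout are all assumptions for illustration, not the tool's actual code):

```python
class CameraRig:
    """Either follow the animation curves or the user's free-fly input,
    with a 'snap to current animation frame' helper."""

    def __init__(self, evaluate_animation):
        self.evaluate_animation = evaluate_animation  # time -> (position, rotation)
        self.use_animation = True
        self.flown = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))

    def fly(self, position, rotation):
        self.use_animation = False  # any user input takes over from the curves
        self.flown = (position, rotation)

    def snap_to_frame(self, time):
        # Pull the free-fly camera back onto the animated path at this time.
        self.flown = self.evaluate_animation(time)

    def camera(self, time):
        return self.evaluate_animation(time) if self.use_animation else self.flown
```

The viewport asks the rig for the camera every frame, so flipping the toggle back instantly resumes the animated motion.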
So now to animate the camera, without hard coding values! Or even typing numbers, preferably. I know a lot of people use a tracker-like tool called Rocket; I never used it, and it looks like an odd way to control animation data to me. I come from a 3D background, so I figured I'd just want a curve editor like e.g. Maya has. In Touch Designer we also had a basic curve editor; conveniently, you can name a channel the same as a uniform, then just have code evaluate the curve at the current time and send the result to that uniform location.
Some trickery was necessary to pack vec3s: I just look for channels that start with the same name and end in .x, .y, .z, and possibly .w.
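That suffix matching might look like this (a sketch; the channel names and the regex approach are illustrative, not the tool's actual code):

```python
import re

def pack_channels(names):
    """Group animation channels like 'uOrigin.x' / '.y' / '.z' ('.w' optional)
    into vector uniforms; anything without a suffix stays a float."""
    uniforms = {}
    for name in names:
        match = re.fullmatch(r'(.+)\.([xyzw])', name)
        if match:
            base, component = match.groups()
            uniforms.setdefault(base, []).append(component)
        else:
            uniforms[name] = None  # scalar channel
    return uniforms
```

At upload time the grouped components are evaluated per curve and sent as one glUniform3f / glUniform4f call instead of separate floats.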
Here's an excerpt from a long camera shot with lots of movement, showing off our cool Hermite splines. At the top right you can see we have several built-in tangent modes; we never got around to building custom tangent editing, but in the end this is more than enough. With flat tangents we can create easing/acceleration, with spline tangents we get continuous paths and with linear tangents we get continuous speed. Next to that are 2 cool buttons that allow us to feed the camera position to another uniform, so you can literally fly to the place where you want to put an object. It's not as good as actual move/rotate widgets, but for the limited number of times we need to place 3D objects it's great.
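For reference, a single cubic Hermite segment is just a polynomial blend of the two key values and their tangents; something along these lines, with t normalized to [0, 1] between the keys:

```python
def hermite(t, p0, p1, m0, m1):
    """Cubic Hermite interpolation between key values p0 and p1 with tangents
    m0 and m1. Flat tangents (m = 0) give the ease-in/ease-out mentioned above;
    matching tangents across keys give the continuous paths."""
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
            + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)
```

The curve evaluator only has to find the segment containing the current time, normalize t, and call this per channel.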
We don't support 2 keys at identical times (that would be impossible to represent in this interface anyway), which means the camera can't really "jump" to a new position instantly. With a tiny bit of curve in between the previous and the next shot position, the time cursor can actually land on 1 frame of a random in-between camera position. So we had to solve this. I think it is one of the only big features that you won't see in the initial screenshot above.
Introducing camera shots. A shot has its own "scene it should display" and its own set of animation data, so selecting a different shot yields different curve editor content. Shots are placed on a shared timeline, so scrolling through time will automatically show the right shot, and setting a keyframe will automatically figure out the shot-local time to put the key at, based on the global demo time. The curve editor has its own playhead that is directly linked to the global timeline as well, so we can adjust the time in multiple places.
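The shot lookup itself is simple bookkeeping. A sketch of what shots and the global-to-local time mapping could look like (class and field names are mine, not the tool's actual code):

```python
class Shot:
    def __init__(self, name, scene, start, end, curves, enabled=True):
        self.name, self.scene = name, scene
        self.start, self.end = start, end  # global demo time range
        self.curves = curves               # this shot's own animation data
        self.enabled = enabled             # disabled shots are skipped entirely

class Timeline:
    """Map global demo time to the active shot and its shot-local time."""

    def __init__(self, shots):
        self.shots = shots

    def shot_at(self, time):
        for shot in self.shots:
            if shot.enabled and shot.start <= time < shot.end:
                return shot, time - shot.start  # shot-local time for keyframes
        return None, None
```

Because each shot owns its curves, a hard cut is just the boundary between two shots; no in-between frame can leak through.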
When working with lots of people we had issues with people touching other people's (work in progress) shots, so we introduced "disabling" of shots. This way anyone could just prefix and disable their shots before submitting, and we could mix and match shots from several people to get a final camera flow we all liked.
Shots are also rendered on the timeline as colored blocks. The grey block underneath those is our "range slider": it makes the top part apply to only a subsection of the demo, so it is easy to loop a specific time range, or to zoom in far enough that the mouse can change the time with enough granularity.
The devil is in the details
Some things I overlooked in the first implementation, and some useful things I added only recently: 1. Undo/Redo of animation changes. Not unimportant, and luckily not hard to add with Qt.
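Undo/Redo with command objects is exactly what Qt's QUndoStack gives you. Stripped of Qt, the pattern is just two stacks; a sketch with a hypothetical key-edit command (names and the curve representation are illustrative, not the tool's actual code):

```python
class SetKeyCommand:
    """One undoable edit: set a curve key to a new value."""

    def __init__(self, curve, key, value):
        self.curve, self.key = curve, key
        self.old, self.new = curve.get(key), value

    def redo(self):
        self.curve[self.key] = self.new

    def undo(self):
        if self.old is None:
            del self.curve[self.key]  # the key didn't exist before this edit
        else:
            self.curve[self.key] = self.old

class UndoStack:
    """Qt's QUndoStack in miniature: push applies the command and records it."""

    def __init__(self):
        self.done, self.undone = [], []

    def push(self, command):
        command.redo()
        self.done.append(command)
        self.undone.clear()  # a new edit invalidates the redo history

    def undo(self):
        if self.done:
            command = self.done.pop()
            command.undo()
            self.undone.append(command)

    def redo(self):
        if self.undone:
            command = self.undone.pop()
            command.redo()
            self.done.append(command)
```

Every edit in the curve editor then goes through the stack instead of mutating the data directly, which is the only discipline the pattern asks for.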
These things make the tool just that much faster to use.
Finally, here's our tool today. There's still plenty to be done, but we made 2 demos with it so far and it gets better every time!