Trevor van Hoof

Technical artist



Monster Black Hole - Pythius

Pythius makes Drum & Bass and is a friend of mine. So when he told me he was doing a new track and that I could make the visuals, I didn't think twice about it!

The short version

The video was made using custom software and lots of programming! Generally 3D videos are made with programs that calculate the result, which can take minutes or even hours. This means that every time you change something you have to wait to see whether the result is more to your liking. With the custom software that we made, everything is instantly updated and we are looking at the end result at all times. This makes tweaking anything, from colors and shapes to animation, a breeze. It allows for as much iteration as we want and turns the video creation process into an interactive playground.

The technique we use generates all the visuals with code; there are very few images and no 3D model files. Everything you see on the screen is visualized through maths. As a side effect of not using big 3D model files, the code that can generate the entire video is incredibly small: about 300 kilobytes, 10 thousand times smaller than the video file it produced!

The details

The technologies used are Python (software), Qt (interface) and OpenGL (visual effects). The rendering uses Enhanced Sphere Tracing and physically based shading.

I talked about the tool development here and the rendering pipeline here in the past. More information about enhanced sphere tracing can be found here, which is an enhancement of this!
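
I won't paste the actual shader here, but as a rough illustration of the core idea behind sphere tracing: march a ray forward by the distance to the nearest surface (as reported by a signed distance function) until you get close enough to call it a hit. A minimal Python sketch of that loop, not the production code:

import math

def sphere_sdf(p, radius=1.0):
    # Signed distance from point p = (x, y, z) to a sphere at the origin.
    return math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, epsilon=1e-4, max_dist=100.0):
    # March along the ray; each step advances by the distance to the nearest surface.
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf(p)
        if d < epsilon:
            return t  # hit
        t += d
        if t > max_dist:
            break
    return None  # miss

# Example: a ray from z = -3 aimed at a unit sphere hits at t of roughly 2.
print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sphere_sdf))

The enhanced variant referenced above mainly tries to take fewer, larger steps safely; the basic loop stays the same.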



Creating a tool to make a 64k demo


Or stress test your GPU / leave a comment here:
http://www.pouet.net/prod.php?which=69669

The technologies used were fairly basic: very old-school Phong & Lambert shading and 2 blur passes for bloom, so all in all pretty low tech and not worth discussing. What I would like to discuss is the evolution of the tool. I'll keep it high level this time though; maybe in the future I can talk about specific implementations of things, but just seeing the UI will probably explain a lot of the features and the way things work.

Step 1: Don't make a tool from scratch

Our initial idea was to leverage existing software. One of our team members, who led the team besides modelling and eventually directing the whole creative result, had some experience with a real-time node-based tool called Touch Designer. It is a tool for doing real-time visuals, and it supports exactly what we need: rendering into a 2D texture with a fragment shader.

We wanted to have the same rendering code for all scenes, and just fill in the modelling and material code that is unique per scene. We figured out how to concatenate separate pieces of text and draw them into a buffer. Multiple buffers, even. At some point I packed all the code and rendering logic of a pass into 1 grouped node, and we could design our render pipeline entirely node-based.

Screenshot with 5 numbers referenced below: Render pipeline made as Touch Designer node-graph

Here you see the text snippets (1) merged into some buffers (2) and then post-processed for the bloom (3). On the right (4) you see the first problem we hit with Touch Designer: the compiler error log is drawn inside this node. There is basically no easy way to have that error visible in the main application somewhere. So the first iteration of the renderer (and coincidentally the main character of Eidolon) looked something like this:

Screenshot of the render pipeline in action, with a time slider widget underneath
The renderer didn't really change after this.

In case I sound too negative about Touch Designer in the next few paragraphs: our use case was rather special, so take this with a grain of salt!

We added a timeline control, borrowing the UI design a little from Maya, so this became the main preview window. That's when we hit some problems though. The software has no concept of window focus, so it would constantly suffer from hanging keys or respond to keys while typing in the text editor.

The last issue, the one that really killed it: everything has to be in 1 binary file. There is no native way to reference external text files for the shader code, or to merge node graphs. There is a really weird utility that expands the binary to ASCII, but then literally every single node is a text file, so it is just unmergeable.

Step 2: Make a tool from scratch

So then this happened:
Screenshot of the first proof-of-concept version of SqrMelon

Over a week's time in the evenings, and then 1 long Saturday, I whipped this up using PyQt and PyOpenGL. This is the first screenshot I made; the curve editor isn't actually an editor yet and there is no concept of camera shots (we use these to get hard cuts).

It has all the same concepts however: separate text files for the shader code, with an XML file determining which render passes use which files, which buffer they render into and which buffers they reference in turn. With the added advantage of perfect granularity, all stored in ASCII files.

Some files are template-level, some are scene-level, so creating a new scene actually only copies the scene-level files, which can then be adjusted in a text editor, with a file watcher updating the picture. The CurveEditor feeds right back into the uniforms of the shader (by name) and the time slider at the bottom is the same idea as Maya's / what you saw before.
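
As an aside, the file-watching part is only a few lines with Qt. A minimal sketch, assuming PyQt5 and a running application event loop; recompile_scene is a hypothetical callback standing in for whatever rebuilds the shader and repaints the viewport:

from PyQt5.QtCore import QFileSystemWatcher

def recompile_scene(path):
    # Hypothetical: reload the changed GLSL file, rebuild the shader program
    # and trigger a repaint of the viewport.
    print('changed:', path)

watcher = QFileSystemWatcher()
watcher.addPaths(['header.glsl', 'scene.glsl', 'pass.glsl'])
watcher.fileChanged.connect(recompile_scene)

One caveat: some editors save by replacing the file, in which case the path may need to be re-added to the watcher after each change.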

Step 3: Make it better

Render pipeline
The concept was to set up a master render pipeline into which scenes would inject snippets of code. On disk this became a bunch of snippets, and an XML-based template definition. This would be the most basic XML file:

<template>
    <pass buffer="0" outputs="1">
        <global path="header.glsl"/>
        <section path="scene.glsl"/>
        <global path="pass.glsl"/>
    </pass>
    <pass input0="0">
        <global path="present.glsl"/>
    </pass>
</template>

This will concatenate 3 files into 1 fragment shader, render into full-screen buffer "0", and then use present.glsl as another fragment shader, which in turn has the previous buffer "0" as input (forwarded to a sampler2D uniform).
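
Purely as an illustration (this is not the tool's actual code), parsing such a template and building the concatenated fragment shader source per pass could look roughly like this; load_passes, scene_dir and template_dir are made-up names:

import os
import xml.etree.ElementTree as ET

def load_passes(template_path, template_dir, scene_dir):
    # Build one concatenated GLSL source string per <pass> in the template.
    passes = []
    for pass_node in ET.parse(template_path).getroot():
        source = []
        for node in pass_node:
            # "global" files live next to the template, "section" files are per-scene.
            base = template_dir if node.tag == 'global' else scene_dir
            with open(os.path.join(base, node.get('path'))) as fh:
                source.append(fh.read())
        passes.append({'attributes': dict(pass_node.attrib), 'source': '\n'.join(source)})
    return passes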

This branched out into static buffers (textures), setting buffer sizes (smaller textures), multiple target buffers (rendering the main and reflection pass at once), setting the buffer size to a portion of the screen (downsampling for bloom) and 3D texture support (volumetric noise textures for clouds).
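
For the smaller buffers, the essence is just attaching a lower-resolution texture to a framebuffer object and rendering the pass into that. A rough PyOpenGL sketch, with error checking and format options omitted; the function name and the quarter-resolution choice are only examples:

from OpenGL.GL import *

def create_render_target(width, height):
    # Color texture to render into.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    # Framebuffer with that texture as its color attachment.
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    return fbo, tex

# e.g. a quarter-resolution buffer for the bloom blur passes:
# bloom_fbo, bloom_tex = create_render_target(screen_width // 4, screen_height // 4)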

Creating a new scene will just copy "scene.glsl" from the template to a new folder; there you can then fill out the necessary function(s) to get a unique scene. Here's an example from our latest Evoke demo: 6 scenes, under which you see the "section" files for each scene.

Screenshot of a tree-view listing scenes, each scene containing a sub-list of editable shader files

Camera control
The second important thing I wanted to tackle was camera control. Basically the demo will control the camera based on some animation data, but it is nice to fly around freely and even use the current camera position as an animation keyframe. So this was just a matter of using Qt's event system to hook up the mouse and keyboard to the viewport.

Screenshot of a camera widget, showing buttons to enable/disable animation, snap to current animation, and 6 inputs for translate and rotate

I also created a little widget that displays where the camera is and has an "animation input or user input" toggle, as well as a "snap to current animation frame" button.
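
To give an idea of what that event hookup looks like, here is a minimal, made-up PyQt5 viewport with simple fly-around controls. The real camera handling is more involved; the key bindings, speed factors and the fact that movement ignores the view direction are all simplifications for the sketch:

from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QOpenGLWidget

class Viewport(QOpenGLWidget):
    def __init__(self, parent=None):
        super(Viewport, self).__init__(parent)
        self.setFocusPolicy(Qt.StrongFocus)  # make sure key events reach this widget
        self.camera_pos = [0.0, 0.0, 0.0]
        self.camera_rot = [0.0, 0.0]  # pitch, yaw
        self._last_mouse = None

    def keyPressEvent(self, event):
        # WASD-style flying; a real implementation would move along the view axes.
        step = 0.1
        if event.key() == Qt.Key_W:
            self.camera_pos[2] -= step
        elif event.key() == Qt.Key_S:
            self.camera_pos[2] += step
        elif event.key() == Qt.Key_A:
            self.camera_pos[0] -= step
        elif event.key() == Qt.Key_D:
            self.camera_pos[0] += step
        self.update()  # schedule a repaint with the new camera

    def mousePressEvent(self, event):
        self._last_mouse = event.pos()

    def mouseMoveEvent(self, event):
        if self._last_mouse is not None:
            delta = event.pos() - self._last_mouse
            self.camera_rot[1] += delta.x() * 0.01  # yaw
            self.camera_rot[0] += delta.y() * 0.01  # pitch
            self._last_mouse = event.pos()
            self.update()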

Animation control
So now to animate the camera, without hard-coding values! Or even typing numbers, preferably. I know a lot of people use a tracker-like tool called Rocket; I never used it, and it looks like an odd way to control animation data to me. I come from a 3D background, so I figured I'd just want a curve editor like e.g. Maya has. In Touch Designer we also had a basic curve editor; conveniently, you can name a channel the same as a uniform, then just have code evaluate the curve at the current time and send the result to that uniform location.

Some trickery was necessary to pack vec3s: I just look for channels that start with the same name and then end in .x, .y, .z and possibly .w.
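
A sketch of that packing idea: group channels by prefix, evaluate each curve at the current time and forward the result to the uniform with the matching name. The channel dictionary and curve.evaluate() are assumptions for the example, not the tool's actual API:

from OpenGL.GL import glGetUniformLocation, glUniform1f, glUniform2f, glUniform3f, glUniform4f

def apply_animation(program, channels, time):
    # channels: e.g. {'uFogColor.x': curve, 'uFogColor.y': curve, 'uExposure': curve}
    grouped = {}
    for name, curve in channels.items():
        if '.' in name and name.rsplit('.', 1)[1] in ('x', 'y', 'z', 'w'):
            base, component = name.rsplit('.', 1)
            grouped.setdefault(base, {})[component] = curve.evaluate(time)
        else:
            grouped[name] = curve.evaluate(time)

    setters = {1: glUniform1f, 2: glUniform2f, 3: glUniform3f, 4: glUniform4f}
    for name, value in grouped.items():
        location = glGetUniformLocation(program, name)
        if location == -1:
            continue  # uniform not used by this shader
        if isinstance(value, dict):
            components = [value[c] for c in 'xyzw' if c in value]
            setters[len(components)](location, *components)
        else:
            glUniform1f(location, value)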

Screenshot of the curve editor, showing various controls in a toolbar, a list of animateable properties on the left, and a big graph showing animation curves in the center

Here's an excerpt from a long camera shot with lots of movement, showing off our cool Hermite splines. At the top right you can see we have several built-in tangent modes; we never got around to building custom tangent editing. In the end this is more than enough however: with flat tangents we can create easing/acceleration, with spline tangents we can get continuous paths and with linear tangents we get continuous speed. Next to that are 2 cool buttons that allow us to feed the camera position to another uniform, so you can literally fly to the place where you want to put an object. It's not as good as actual move/rotate widgets, but for the limited number of times we need to place 3D objects it's great.
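
For reference, evaluating one cubic Hermite segment between two keys is only a handful of lines. A minimal sketch, where t runs from 0 to 1 between the keys; flat tangents are simply tangents of zero, and linear tangents use the difference between the two values:

def hermite(t, p0, p1, m0, m1):
    # Cubic Hermite basis: p0, p1 are the key values, m0, m1 the (scaled) tangents.
    t2 = t * t
    t3 = t2 * t
    return ((2.0 * t3 - 3.0 * t2 + 1.0) * p0 +
            (t3 - 2.0 * t2 + t) * m0 +
            (-2.0 * t3 + 3.0 * t2) * p1 +
            (t3 - t2) * m1)

# Flat tangents (ease in/out):        hermite(t, a, b, 0.0, 0.0)
# Linear tangents (constant speed):   hermite(t, a, b, b - a, b - a)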

Hard cuts
Apart from being impossible to represent in this interface, we don't support 2 keys at identical times. This means that we can't really have the camera "jump" to a new position instantly. With a tiny amount of curve in between the previous and the next shot position, the time cursor can actually render 1 frame of a random camera position. So we had to solve this. I think it is one of the only big features that you won't see in the initial screenshot above, actually.

Screenshot of the shot manager, showing a toolbar with create/duplicate/delete buttons and a table of shot names, scenes and editable start/end times (in beats), with a context menu to show/hide and other utilities

Introducing camera shots. A shot has its own "scene it should display" and its own set of animation data, so selecting a different shot yields different curve editor content. Shots are placed on a shared timeline, so scrolling through time will automatically show the right shot, and setting a keyframe will automatically figure out the "shot-local time" to put the key at, based on the global demo time. The curve editor has its own playhead that is directly linked to the global timeline as well, so we can adjust the time in multiple places.
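
A sketch of that time mapping, assuming each shot stores its start and end on the global timeline; the Shot attributes and insert_key() are illustrative, not the actual data model:

def shot_at_time(shots, global_time):
    # Find the shot that covers the given global time, if any.
    for shot in shots:
        if shot.enabled and shot.start <= global_time < shot.end:
            return shot
    return None

def set_key(shots, global_time, channel_name, value):
    # Setting a key converts global demo time to shot-local time first.
    shot = shot_at_time(shots, global_time)
    if shot is None:
        return
    local_time = global_time - shot.start
    shot.curves[channel_name].insert_key(local_time, value)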

When working with lots of people we had issues with people touching other people's (work in progress) shots. Therefore we introduced "disabling" of shots. This way anyone could just prefix their shots and disable them before submitting, and we could mix and match shots from several people to get a final camera flow we all liked.

Screenshot of the time slider paired with the curve editor, showing shot layout on the timeline, playback & looping controls and synchronization between the curve and time slider playheads

Shots are also rendered on the timeline as colored blocks. The grey block underneath those is our "range slider". It makes the top part apply to only a subsection of the demo, so it is easy to loop a specific time range, or just zoom in far enough to change the time with the mouse at a fine enough granularity.

The devil is in the details

Some things I overlooked in the first implementation, and some useful things I added only recently:

  1. Undo/redo of animation changes. Not unimportant, and luckily not hard to add with Qt (see the sketch after this list).
  2. Ctrl-click the timeline to immediately start animating that shot
  3. Right click a shot to find the scene
  4. Right click a scene to create a shot for that scene in particular
  5. Current time display in minutes:seconds instead of just beats
  6. BPM stored per-project instead of globally
  7. Lots of hotkeys!

These things make the tool just that much faster to use.
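
To illustrate how little work the Qt undo/redo mentioned in the list is: wrap each edit in a QUndoCommand and push it onto a QUndoStack. A hedged sketch, with set_key_value() standing in for whatever the curve editor actually calls:

from PyQt5.QtWidgets import QUndoCommand, QUndoStack

class MoveKeyCommand(QUndoCommand):
    # Hypothetical command: change a key's value on a curve, undoably.
    def __init__(self, curve, key_index, old_value, new_value):
        super(MoveKeyCommand, self).__init__('Move key')
        self._curve = curve
        self._key_index = key_index
        self._old = old_value
        self._new = new_value

    def redo(self):
        self._curve.set_key_value(self._key_index, self._new)

    def undo(self):
        self._curve.set_key_value(self._key_index, self._old)

undo_stack = QUndoStack()
# In the curve editor, instead of editing the curve directly:
# undo_stack.push(MoveKeyCommand(curve, index, previous_value, dragged_value))
# Ctrl+Z / Ctrl+Shift+Z can then simply call undo_stack.undo() / undo_stack.redo().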

Finally, here's our tool today. There's still plenty to be done, but we made 2 demos with it so far and it gets better every time!

Screenshot of current tool state, adding composition overlays and a sneak peek at the new render pipeline of our Yermom demo made in 2018


Music visuals!

Click here for the separate tracks.
