At The Barbarian Group, we have an impressive nine-screen rig that was used for the Samsung CenterStage project. Since it’s been running the same application nonstop for two years, we figured it was time to give some other applications a shot up on the big screen. In addition to running other Cinder apps, I wanted to figure out how to get WebGL running on it.
The rig consists of 8 HD screens and one 85” UHD touchscreen. For an application to run across multiple screens this way, it needs to be able to spin up multiple windows, position them, and render a different area of a texture to each one. Spanning multiple windows is simply not possible in a standard web browser. We knew that to get WebGL displayed across multiple screens this way, we’d need to use Cinder’s ability to create and compose multiple windows and somehow feed a rendered HTML image to them. The major task at hand was to figure out how to render HTML content to a texture that Cinder could then display.
Attempt #1 – Awesomium
Awesomium is a library that lets you embed HTML right into a C++ or .NET app, and it’s often used as a UI layer. It turns out to be relatively easy to use, but it’s also relatively out of date. Using this Awesomium Cinder Block, I was quickly able to get Awesomium running inside Cinder and rendering websites without any issues. By default, WebGL is disabled, so there are a few flags you need to set to even make WebGL rendering a possibility. Here’s what the setup function in my Cinder Awesomium test app looked like:
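Something along these lines — a sketch from memory of the Awesomium 1.7 C++ API rather than the exact original code, so treat the flag and member names as approximate; the WebGL-related preference flags are the important part:

```cpp
#include "Awesomium/WebCore.h"
#include "Awesomium/WebSession.h"
#include "Awesomium/STLHelpers.h"

void WebGLTestApp::setup()
{
    // WebGL is off by default; it has to be enabled in the session
    // preferences before any views are created.
    Awesomium::WebConfig config;

    Awesomium::WebPreferences prefs;
    prefs.enable_web_gl = true;            // the flag that matters here
    prefs.enable_gpu_acceleration = true;  // from memory; may differ by version

    mWebCore = Awesomium::WebCore::Initialize( config );
    Awesomium::WebSession *session =
        mWebCore->CreateWebSession( Awesomium::WSLit( "" ), prefs );

    // Offscreen view sized to the window, loading a hypothetical test page
    mWebView = mWebCore->CreateWebView( getWindowWidth(), getWindowHeight(), session );
    mWebView->LoadURL( Awesomium::WebURL( Awesomium::WSLit( "http://example.com/webgl-demo" ) ) );
}
```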
Unfortunately, WebGL does not display in Awesomium. Awesomium will render the entire page and leave a blank space where the WebGL canvas should be. Since the page being rendered in the Awesomium module needs its own context, I suspect it’s rendering to one that we don’t have access to. Supposedly, there are ways to get the WebGL rendering to work by creating a new window and rendering to that. But even if I could get that working, I would still have inaccurate compositing between the markup and the WebGL canvas. For this to work correctly, we need to render the page exactly as a browser composites it natively.
Attempt #2 – CEF
The next thing I wanted to try, which proved to be a bigger task than I was prepared for, was to use CEF within a Cinder application. The Chromium Embedded Framework (CEF) is the framework for embedding Chromium-based browsers in other applications. It lets you load and render a website exactly as you would see it natively in Chrome and capture the result as a texture, making it perfect for this sort of thing. There has been a bunch of research and a number of attempts to get this working in both Cinder and openFrameworks, with no perfect solution. It’s something I would love to jump back into at some point, but it was too complex a task for the time being.
Attempt #3 – VVVV with Spout
We had recently used Spout on a project to send a texture from a Cinder app to a TouchDesigner app. This seemed like a good opportunity to do a similar thing in reverse. Spout is a Windows-based framework for sharing textures between applications, similar to Syphon on OS X. It’s compatible with many different applications, but the only one with an HTML renderer is VVVV. So the new task was to see whether VVVV could render an HTML page, WebGL intact, to a texture that we could feed into Cinder via Spout. After some quick prototypes, it looked like this would work.
Working with VVVV
VVVV (or V4) is a node-based visual programming language/environment, similar to TouchDesigner and Max/MSP. I had never used a node-based IDE like this before, but these tutorials and YouTube videos quickly got me up and running. I’m not convinced that visual programming is the way to go for everything, since it can make simple things more tedious, but it certainly makes some really hard tasks possible, even somewhat enjoyable. There’s a reason it’s the basis for many applications that VFX artists use, like Nuke and Houdini.
As part of the final V4 project, I created four custom nodes. I’ll walk through each one…
Main App module
This module serves as the main container from which every other module is linked.
The maximum resolution is hard-coded, with some switches to scale the resolution of the WebGL renderer. It turns out that full resolution is just too many pixels for WebGL in Chrome to handle most of the time, so it’s important to have a way to scale the resolution down to find the best performance.
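As a rough sketch of that trade-off — the 7088×3840 full canvas and the 1/4 scale figure come from later in this post, while the helper itself is hypothetical:

```cpp
// Hypothetical helper: scale the rig's full 7088x3840 canvas down by an
// integer divisor to find a WebGL render size Chrome can actually handle.
struct Resolution { int w, h; };

Resolution scaled( Resolution full, int divisor )
{
    return { full.w / divisor, full.h / divisor };
}

// scaled({7088, 3840}, 4) -> {1772, 960}, the 1/4 res settled on in the end
```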
The config module loads a JSON config file that specifies a default URL. After starting up, the app waits a few seconds and then automatically loads the config file. You can also load the config manually by hitting the “reset/reload” button.
The URL is then passed to the Rendering module. It can be swapped out via the OSC module if a URL comes in from Cinder, which then overrides what’s in the config.
The OSC module takes the URL and mouse events as input and passes that to the renderer.
There’s a tty output and logger in case of crashes, mysterious errors, and red nodes.
This module loads and parses a JSON config file for some initial settings.
This module receives UDP signals via OSC from Cinder and converts them into relevant data.
We need to ingest mouse events and positions from the Cinder app, since it is the active application. In the Cinder app, only one screen has a touch interface (the 4K UHD screen), and the V4 app expects normalized values for where the mouse events take place within the active touch area.
It then converts them into coordinates that the HTML renderer can use, which are in the range [-1, 1], since the renderer lives on a quad with those vertex coordinate values. These remapped coordinates are then fed into a virtual mouse that this module outputs.
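The remap itself is simple; here is a sketch of the math, assuming [0, 1] normalized input with y = 0 at the top (typical screen coordinates) and a quad where +1 is the top — the Y flip is an assumption about the renderer’s orientation:

```cpp
// Sketch of the normalized-mouse to quad-coordinate remap.
// Input:  position in [0, 1] within the active touch area, y down.
// Output: position in [-1, 1] on the renderer quad, y up.
struct Vec2 { float x, y; };

Vec2 toQuadCoords( Vec2 n )
{
    return { n.x * 2.0f - 1.0f,    // [0, 1] -> [-1, 1]
             1.0f - n.y * 2.0f };  // same, plus the Y flip
}
```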
Since we’re generating a virtual mouse here, we also add a keyboard event listener and output that too. I originally tried to pass keyboard events from Cinder via OSC, but V4 has issues when it comes to creating a virtual keyboard. As far as I can tell, it’s impossible to fully create a virtual keyboard object that includes all of the events the HTML renderer might be interested in. So instead, V4 listens to the entire application’s key events, and OSC calls from Cinder toggle that listener on and off as the app gains or loses focus.
Via OSC, we also listen for changes to the URL of the page we want to load.
This module is responsible for rendering the page of the requested URL and then outputting the texture via Spout.
The module takes a few inputs – url, screen dimensions, keyboard and mouse objects, and a “bang” for reloading the page.
The texture output here comes directly from the Spout VVVV Sender example. The only thing that has changed is the “spout sender name” field, a project-specific identifier that the receiver looks for.
It was simple enough to get an HTML context rendering using the Renderer (HTML String) node. Once I had something basic working, I pushed the result to Spout. At first, I wasn’t able to receive the texture in the Spout receiver, because I was sending a DX9 texture to a DX11 Spout receiver. It turns out that there are two HTML renderer options available in V4. The default one uses an older version of CEF and a DX9 renderer, which is extremely slow and has resolution limits, so the first time around things just looked awful. You’re limited on the Spout side as well, since you’d need to specify DX9 as the Spout input, which stinks if DX11 or higher is available on your machine, and which may also explain the original performance issue. Once I realized that there are DX11 versions of a bunch of Spout components and an updated HTML renderer, performance and resolution increased significantly. Still, 1/4 of the full 7088×3840 resolution (1772×960) is the best middle ground between resolution and performance: it’s fast enough not to be too noticeable while keeping a decent number of rendered pixels.
The Cinder App
The Cinder app itself has to communicate with the VVVV app. It has to output OSC commands for mouse events, focus events, and URL changes, and it has to ingest a Spout DX11 texture and display it. It then has to cut up that texture and send the relevant area to the correct window. One odd thing we discovered is that depending on which application is started first, the texture output from VVVV may be flipped vertically. I’m not sure if Spout is trying to do something clever to correct for the OpenGL coordinate space or if it’s a graphics card thing. But as long as the Cinder Spout receiver is started first, and the VVVV app afterwards, the texture has the correct orientation. This is fine, since in the end the V4 app would be launched by Cinder anyway.
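The cut-up step is just rectangle math. Here’s a simplified sketch that treats the rig as a uniform grid — the real layout of 8 HD screens plus the UHD touchscreen isn’t uniform, so the grid dimensions and the helper are illustrative only:

```cpp
// Simplified sketch: given the shared texture's size and a window's
// (column, row) position in an assumed uniform grid, compute the
// source rectangle of the texture that window should draw.
struct SrcRect { int x, y, w, h; };

SrcRect sourceAreaForWindow( int texW, int texH,
                             int col, int row,
                             int cols, int rows )
{
    const int w = texW / cols;
    const int h = texH / rows;
    return { col * w, row * h, w, h };
}

// In each window's draw loop, Cinder would then draw only that
// sub-area of the Spout texture, e.g.
// sourceAreaForWindow(1772, 960, 1, 0, 4, 2) -> {443, 0, 443, 480}
// for the second window in the top row of a hypothetical 4x2 grid.
```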
This is a simplification of how Cinder, VVVV, and Spout operate together.
I’m sure there are other ways to integrate WebGL into an OpenGL app, and I would love to find something a bit more straightforward. If anyone knows of a better solution, let’s talk!