Visual Space Music

Technical Description

Visual Space Music is programmed in Max/MSP and Jitter, and also uses Ableton Live and its ‘Analog’ instrument as a synthesis engine.

Installation 1 (Prototype) Hardware

The first installation used 4 projectors connected to two Power Mac G5s (I'm sure one Power Mac would have been sufficient if we had had multiple graphics cards in any of them). The G5s were networked to a main computer (a MacBook Pro) which handled all the audio and controls, and also sent data to and synced with the G5s.

Basically the hardware setup was:

4 Projectors
8 Active Speakers
2 Power Mac G5s
1 Macbook Pro
1 LCD Display (for the radar screen)
1 8-Channel Audio Interface
+ Tons of cables & other much more boring gear

Also worth noting: plain white shower curtains (SAXÅN) from IKEA make very cheap and portable rear-projection screens (with amazing image quality).

Technical Software Details Begin Here (Warning: could be boring... read on only if you really need to know how VSM works inside) ->

Overview

VSM is made up of multiple parts. On the sound side: the mixing engine, the synthesis engine and the MIDI engine; on the visual side: the GL engine. Here we will also explore some of the technical problems I faced during development.

The Mixing Engine

The mixing engine, like the majority of VSM, is built in Max/MSP. It uses an ambisonics system to mix spatial audio in real time; this was achieved with the help of the ICST Ambisonics Externals for Max/MSP.
It receives sound in real time from individual soft-synth tracks in Ableton Live and spatially mixes and delays them across however many speakers the current installation has.
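
For intuition, here is a minimal Python sketch of what encoding one object into first-order ambisonics involves: the standard B-format equations plus a simple distance-based gain and delay. This is an illustration only, not the internals of the ICST externals, and the speed-of-sound and sample-rate constants are assumptions.

import math

SPEED_OF_SOUND = 343.0   # m/s (assumed constant)
SAMPLE_RATE = 44100      # Hz (assumed)

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (W, X, Y, Z)."""
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

def distance_cues(distance_m):
    """Simple distance cues: 1/r gain and a propagation delay in samples."""
    gain = 1.0 / max(distance_m, 1.0)
    delay_samples = int(distance_m / SPEED_OF_SOUND * SAMPLE_RATE)
    return gain, delay_samples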

The use of ambisonics is unconventional here, in the sense that ambisonics was designed to reproduce a real spatial recording (i.e. a number of mics in certain locations pick up multiple tracks, and the ambisonics system intelligently mixes them for playback over a given number of speakers at given locations).
In VSM, however, everything is synthesized, creating a virtual audio space, while the virtual listening position changes constantly as the joystick is moved. (Modern ambisonics tools allow recorded tracks to move, but not the whole listening position, which in a reproduced recording would be static.) Modifications were made so that all the objects are moved and rotated at once, and all of their positions are then fed to the ambisonics engine together.
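
Conceptually, the modification boils down to re-expressing every object's position in the listener's frame before it reaches the ambisonics engine. A rough Python sketch of that idea (not the actual Max patch; 2D only, with assumed axis conventions):

import math

def to_listener_frame(objects, listener_pos, listener_heading):
    """Translate and rotate every object's (x, y) position into the
    listener's frame; the results are what the ambisonics engine sees."""
    lx, ly = listener_pos
    c = math.cos(-listener_heading)
    s = math.sin(-listener_heading)
    relative = []
    for ox, oy in objects:
        dx, dy = ox - lx, oy - ly        # move everything, not the listener
        relative.append((dx * c - dy * s, dx * s + dy * c))
    return relative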

The Synthesis Engine

The synthesis engine is built in Ableton Live 7, using the ‘Analog’ instrument and a series of effects for each object in VSM. MIDI commands are sent from Max (the VSM MIDI engine) to Ableton, where they trigger macros within Live’s environment and allow complex tweaking of parameters with a single knob. There is a separate track and ‘Analog’ soft-synth instance for each object, and each one is sent to the mixing engine for mixing and visual processing.
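
The principle is simple: a single controller value goes out as a MIDI CC message, and Live's MIDI mapping routes it to a rack macro that tweaks several parameters at once. Here is a hedged Python sketch of that message flow using the mido library (the port name and CC number are placeholders; in VSM the messages come from Max, not Python):

import mido

# Placeholder names/numbers; in the real system Max sends these messages,
# and the CC is MIDI-mapped to a rack macro inside Ableton Live.
PORT_NAME = "IAC Driver Bus 1"
MACRO_CC = 20

def send_macro_value(value, channel=0):
    """Send one 0-127 value to the macro controlling an object's sound."""
    with mido.open_output(PORT_NAME) as port:
        port.send(mido.Message('control_change',
                               channel=channel,
                               control=MACRO_CC,
                               value=max(0, min(127, value))))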

The MIDI Engine

Thankfully, Max/MSP is fantastic for complex MIDI trickery. The VSM MIDI engine receives MIDI data (and also joystick data) from the controllers and processes it into commands for the entire VSM system. A custom-made arpeggiator spits out musically meaningful notes using only a knob and a fader for each object. The other controls are turned into Ableton control MIDI and manipulation data for the visual objects.
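
As a rough illustration of the knob-and-fader arpeggiator idea, here is a Python sketch of the logic only (the real arpeggiator is a Max patch, and the pentatonic scale is an assumption): the knob picks the base pitch, the fader decides how many notes of the rising pattern get played.

PENTATONIC = [0, 3, 5, 7, 10]   # assumed minor pentatonic, in semitones

def arpeggio(knob, fader, root=48, max_steps=8):
    """Return the MIDI notes the arpeggiator cycles through for one object."""
    base = root + PENTATONIC[knob * len(PENTATONIC) // 128] + 12 * (knob // 64)
    steps = 1 + fader * (max_steps - 1) // 127
    return [base + PENTATONIC[i % len(PENTATONIC)] + 12 * (i // len(PENTATONIC))
            for i in range(steps)]

# Example: knob half-open, fader fully up -> an eight-note rising pattern.
print(arpeggio(knob=64, fader=127))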

The GL Engine

The GL Engine is based around Jitter's jit.gl.sketch object, which takes standard OpenGL commands within the Max/MSP environment. Coordinate movement from the joystick controls the movement of the entire scene around the virtual cameras (set up according to how many screens are present). MIDI data from controller knobs and sound data from Live modify the appearance of the objects in visually meaningful ways.
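
For intuition, setting up the virtual cameras amounts to pointing one camera per screen at evenly spaced angles around the viewer, e.g. four cameras at 90-degree intervals for a four-projector setup. A small Python sketch of that idea (an illustration only; in VSM this lives inside the Jitter patch):

import math

def camera_angles(num_screens):
    """One virtual camera per screen, spaced evenly around the viewer."""
    return [i * 360.0 / num_screens for i in range(num_screens)]

def camera_lookat(yaw_degrees):
    """Unit direction vector (x, y, z) a camera at this yaw looks along."""
    yaw = math.radians(yaw_degrees)
    return (math.sin(yaw), 0.0, -math.cos(yaw))

# Four projectors -> cameras facing 0, 90, 180 and 270 degrees.
for yaw in camera_angles(4):
    print(yaw, camera_lookat(yaw))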

Problems

Easily the most difficult and unforeseen problem throughout the project was getting the mixing engine and the space navigator to play nice: the audio (ICST Ambisonics) and visual (OpenGL) systems use different coordinate systems.

The problem basically came down to the fact that sound, of course, can emanate in all directions from the point an object occupies. The 3D visual side of an object is much more complicated: it has a direction. Different sides face you depending on where you are in relation to it and which way you are facing. My very first concept version of VSM failed because it was built with the audio system feeding coordinates to the 3D system. The objects would rotate around you correctly when you turned (as the sound did); however, you would always see the same side of each object: perspective wasn’t taken into account!

This problem was solved with the help of Zachary Seldess on the cycling74 forum. I made some heavy modifications to Zachary’s GL Navigators patches in order to move the whole visual world around a fixed camera while simultaneously calculating a synchronized trajectory for the ambisonics mixing engine (moving forward in any direction in the visual space had to become just the right amount along the Y-axis for the ambisonics engine, and so on). Whew! I never thought I would have to dig up that trigonometry again, but it comes in handy, kids.
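
The core of the fix is plain trigonometry: a forward step of length d at heading θ translates the whole visual world by (-d·sin θ, -d·cos θ) around the fixed camera, while for the ambisonics engine the same step is simply d along its forward axis once all the sources have been rotated by -θ (as sketched earlier). A Python sketch of that bookkeeping, with assumed axis conventions:

import math

def step_forward(heading, distance):
    """One 'forward' joystick step at the given heading (radians).

    Returns the (x, z) translation to apply to the whole GL scene, which
    moves opposite to the viewer around the fixed camera, and the amount
    to feed straight down the ambisonics engine's forward axis (the
    sources themselves are rotated by -heading, as sketched earlier).
    """
    world_dx = -distance * math.sin(heading)
    world_dz = -distance * math.cos(heading)
    ambi_forward = distance
    return (world_dx, world_dz), ambi_forward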