Last week, guest writer Momo proposed a set of semantics and abstractions for making audiovisual collaboration more expressive – starting with ideas as fundamental as describing a kick beat. Now, he returns to show us what he actually means in his work, element by element. -Ed.

This is the current version of The Circus Of Lost Souls in its entirety. I’d like to break down each scene and element to give you some insight into how it’s all built and controlled.

In its current form, the visuals for this song basically run themselves when we play live. I take a MIDI cable from the band’s audio setup and run the data through a local copy of Ableton Live to trigger the visuals on my end. I have failsafe options for every animation in this song – buttons and sliders I can use to control the animation if something goes haywire with our MIDI connection, though thankfully that has not happened yet during a show.
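
To make that flow concrete, here is a minimal sketch of such a MIDI-to-messages bridge in Python, using mido and python-osc. This is an illustration, not the actual Max4Live rig – the note numbers, port, and note-to-address table are all assumptions.

```python
# Hypothetical bridge: MIDI notes from the band's Ableton set become the
# messages the visual scenes listen for. All mappings below are invented.
import mido
from pythonosc.udp_client import SimpleUDPClient

visuals = SimpleUDPClient("127.0.0.1", 9000)  # assumed address of the visual app

NOTE_TO_ADDRESS = {
    36: "/kick",        # hypothetical note assignments
    38: "/snare",
    60: "/accent/1",
    62: "/accent/2",
}

with mido.open_input() as midi_in:            # default MIDI input port
    for msg in midi_in:                       # blocks, yielding notes as they arrive
        if msg.type == "note_on" and msg.velocity > 0:
            address = NOTE_TO_ADDRESS.get(msg.note)
            if address is not None:
                visuals.send_message(address, 1)
```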

The Max4Live patches that I used for communication on this project are almost ready for some testing. Join the beta if you want to help!

Breakdown

Minimal Lights

This scene starts the show with some simple yellow orbs. The opening sound hits five times; I made the first hit send /accent/1 and the following four send /accent/2, then mapped those two addresses to functions called ‘trigger’ and ‘triggerMini’ within the scene.
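
In python-osc terms (a stand-in for the real setup – the port and the orb-drawing bodies are assumptions), that mapping is just two handlers:

```python
# Sketch of the Minimal Lights mapping: /accent/1 fires 'trigger',
# /accent/2 fires 'triggerMini'. Drawing code is a stand-in.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def trigger(address, *args):
    print("big yellow orb")        # stand-in for the real animation

def trigger_mini(address, *args):
    print("small yellow orb")

disp = Dispatcher()
disp.map("/accent/1", trigger)
disp.map("/accent/2", trigger_mini)

BlockingOSCUDPServer(("127.0.0.1", 9000), disp).serve_forever()
```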


Sat1

Here we reveal where the orbs are coming from. This scene listens for any message addressed under /accent (so it hears both /accent/1 and /accent/2) and calls the ‘trigger’ function within the scene. If a scene only has one function, I often call it ‘trigger’, renaming it later if the scene grows more complex.
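
A sketch of that catch-all pattern, again with an assumed port and stand-in animation code (this is one way to express it, not Momo’s actual code):

```python
# Sat1 treats every /accent message the same: a default handler with a
# prefix check funnels them all into one 'trigger' function.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def trigger():
    print("orb launches from the satellite")   # stand-in animation

def route(address, *args):
    if address.startswith("/accent"):          # hears /accent/1 and /accent/2 alike
        trigger()

disp = Dispatcher()
disp.set_default_handler(route)
BlockingOSCUDPServer(("127.0.0.1", 9000), disp).serve_forever()
```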


UFO1

This scene maps the /kick and /snare messages to ‘triggerTop’ and ‘triggerBottom’. The scene changes throughout the song are also triggered via MIDI; these could easily be left out to give the visualist more control over which view we see. Since the /accent, /kick and /snare messages are sent throughout the entire song, you can switch to any scene and it will respond appropriately.
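
Here is a sketch of that design: the percussion messages run all song long, and /scene merely swaps which object receives them, so any scene you cut to responds at once. Class and method names are illustrative, not from the actual project:

```python
# Illustrative sketch: one message stream, many scenes. /scene swaps the
# active scene; /kick, /snare and /accent reach whichever scene is live.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

class UFO1:
    def trigger_top(self):    print("top of the UFO reacts")      # on /kick
    def trigger_bottom(self): print("bottom of the UFO reacts")   # on /snare

class Sat1:
    def trigger(self):        print("orb launches")               # on /accent

scenes = {"ufo1": UFO1(), "sat1": Sat1()}
current = scenes["ufo1"]

def on_scene(address, name):
    global current
    current = scenes.get(name, current)        # /scene "sat1" cuts to Sat1

def forward(method):
    def handler(address, *args):
        fn = getattr(current, method, None)
        if fn is not None:                     # scenes ignore unmapped messages
            fn()
    return handler

disp = Dispatcher()
disp.map("/scene", on_scene)
disp.map("/kick", forward("trigger_top"))
disp.map("/snare", forward("trigger_bottom"))
disp.map("/accent/1", forward("trigger"))
disp.map("/accent/2", forward("trigger"))
BlockingOSCUDPServer(("127.0.0.1", 9000), disp).serve_forever()
```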


Sat2

Like Sat1, this scene maps /accent messages to a ‘trigger’ function.


Chorus1

This scene is a bit of a cheat – its start is triggered via a /scene “chorus1” message, and it then runs in time with the music. I’ve got eight backup triggers that let me jump to the different lines in case things go pear-shaped.
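
A sketch of that free-run-with-failsafe structure follows; the per-line timing here is invented, since the real scene is timed to the track:

```python
# Sketch of Chorus1's free-run: once started, it steps through the eight
# lines on a clock, and a backup trigger can snap it to any line.
import threading
import time

LINE_COUNT = 8
SECONDS_PER_LINE = 2.0       # assumed; the real scene matches the song

position = 0

def free_run():
    global position
    while position < LINE_COUNT:
        shown = position
        print("showing chorus line", shown + 1)
        time.sleep(SECONDS_PER_LINE)
        if position == shown:    # advance only if no backup trigger jumped us
            position += 1

def jump(line):
    """One of the eight backup triggers: snap to a line if things drift."""
    global position
    position = line

# in the real rig, the /scene "chorus1" message starts this off
threading.Thread(target=free_run, daemon=True).start()
time.sleep(LINE_COUNT * SECONDS_PER_LINE + 1)   # keep the demo alive
```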


First Dance

This was the first really tricky scene. I wanted to match the great oscillations of the synthesizer, but it was a Massive synth line, and there was no data I could find that matched the final output. My solution was to record and then clean up a MIDI envelope that matched the warbling of the sound, then turn that into a stream of /bass messages mapped to a ‘dance’ function in the scene, which takes a number between 0 and 1 to affect the animation. The envelope looks like this:


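As a rough illustration of how such an envelope could drive the scene (not the actual implementation), this sketch replays a cleaned-up MIDI file in real time and sends each value, scaled to 0–1, as a /bass message. The file name and CC number are assumptions:

```python
# Replay the recorded envelope as a /bass stream of 0..1 values for the
# scene's 'dance' function. File name and controller lane are invented.
import mido
from pythonosc.udp_client import SimpleUDPClient

visuals = SimpleUDPClient("127.0.0.1", 9000)

# MidiFile.play() yields the messages with real-time pauses between them
for msg in mido.MidiFile("bass_envelope.mid").play():
    if msg.type == "control_change" and msg.control == 1:   # assumed CC lane
        visuals.send_message("/bass", msg.value / 127.0)    # normalize to 0..1
```
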
UFO Sing

This scene has two methods: ‘sing’, which takes a string and displays it in a speech bubble, and ‘unsing’, which hides the bubble away. These are triggered by the messages /lyrics “We are Bought and Sold” and /lyrics/clear.
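
Sketched with python-osc (the drawing code is a stand-in), the pair looks like this:

```python
# The sing/unsing pair: /lyrics carries the line as a string argument,
# /lyrics/clear hides the bubble again.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def sing(address, line):
    print("speech bubble:", line)      # e.g. "We are Bought and Sold"

def unsing(address, *args):
    print("speech bubble hidden")

disp = Dispatcher()
disp.map("/lyrics", sing)
disp.map("/lyrics/clear", unsing)
BlockingOSCUDPServer(("127.0.0.1", 9000), disp).serve_forever()
```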


Earth Sing

This scene works the same way as UFO Sing, with a legible font instead of a symbolic one.


Duet

This one sneaks in, since its default state looks identical to that of Earth Sing. When it detects that it’s live, however, it flies the UFO in, and then both entities respond to /lyrics and /lyrics/clear messages. Additionally, a trigger is mapped to a ‘beginTransfer’ function which is timed to the breakdown.
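
Structurally, that sneak-in can be sketched as a scene lifecycle hook – the class and method names here are invented:

```python
# Duet rests in a state identical to Earth Sing; only when it becomes the
# live scene does the UFO fly in. 'beginTransfer' fires at the breakdown.
class Scene:
    def enter(self):
        """Called when this scene becomes the live scene."""

class Duet(Scene):
    def enter(self):
        print("UFO flies in to join the Earth singer")

    def begin_transfer(self):
        print("transfer begins, timed to the breakdown")

scene = Duet()
scene.enter()            # fired by the scene switch in the real rig
scene.begin_transfer()   # fired by its mapped trigger at the breakdown
```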


Earth Dance

A scene built from UFO Dance, with a different sprite in the foreground (the Earth). Everything else is the same – cheers for reusable assets!



Marquee

No interaction here, just some scene-setting.


Theater

Reusing assets again, this scene has Duet embedded in a frame of people in a movie theater.
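
That reuse can be sketched as plain composition – again with invented names:

```python
# Theater draws its own framing, then asks the embedded Duet scene to
# draw inside it, so the Duet code is reused untouched.
class Duet:
    def draw(self):
        print("duet plays out on the screen")

class Theater:
    def __init__(self):
        self.inner = Duet()     # the embedded scene, reused as-is

    def draw(self):
        print("rows of moviegoers in silhouette")   # the framing
        self.inner.draw()                           # Duet renders in the frame

Theater().draw()
```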


Awards

This is a two-part scene. It is not reactive until the ‘zoom’ function is triggered, at which point an envelope controls the ‘dance’ function of the eyes, using code evolved from the earlier dance segments.


Last Chorus

No interaction – though I have a ‘fadeOut’ function I trigger for narrative purposes.

Comments
  • http://noisepages.com/members/brianpark999/ Brian Park

    Great~!!

  • http://www.broadstreetstudios.com Valis

    Interesting, exactly the use for M4Live I've been working on for the past year myself. Using Resolume/VDMX here and a combination of video material & Quartz compositions. Love to see more features like this!

  • http://snd.sc/gil3jz trudeau11

    Love your design, lyrics, music. Masterful control.

  • http://kodama.angrypixel.org Scott

    I think this is really great work, but just want to play devil's advocate for a moment…

    With this level of pre-rendered/planned visuals and enmeshment with what is being performed by the music artist, is there any performative element left for the visualist? It seems the musician is also becoming the visual performer (which may not always be such a bad thing).

    I'd love to see another layer of communication going on here, where the visuals/visualist responds to the system. Maybe they become the second party of the duet? I think there's more interest when you can't always tell when the system is responding to the performer and the performer is responding to the system.

    I do realise that what you're trying to accomplish through this is a language of communication and I think that's a really powerful idea. Just thought I'd throw another thought into the ring.

  • http://mmmlabs.com Momo The Monster

    @Scott – always appreciate a devil's advocate. You are exactly right – for the way the visuals are set up for this song right now, I do nothing to perform it. In this case, the creativity is in the planning, and the execution is automatic (though I do stand poised to jump in if something goes awry).

    This is not the end-state for this song, however. Now that the narrative is planned out and auto-running, I can add improvisational elements on top and play those. Or I could trigger every animation myself instead of allowing the OSC to do it, which would give me something to do as a performer, but otherwise doesn't create anything new for the audience.

    I dig the idea of 'responding to the system'. In this case, where the system automatically transitions between scenes, that idea could be explored by having some unmapped elements in each scene that the human performer can control to add spontaneity.

  • http://kodama.angrypixel.org Scott

    @Momo – It's definitely a very creative idea. And a strong platform to build further inspiration upon.

    I do believe that any idea about communication should also be about conversation. Whether that dialogue is between the performer and system, or just between each of the performers I guess is a question unique to each project. Speaking a common language, such as the one you're putting forward would certainly help.

    Perhaps the musicians shouldn't be enjoying all the spotlight – they could respond to visual cues (or OSC messages) too? ;)

  • http://vjdioboss.com VJ DIOBOSS

Momo is definitely onto something BIG with his recent projects creating his audiovisual language with Max4Live. For me, the appeal of moving in this direction is more about the artistic design potential than the artistic performance aspect.

What he's working on now could free the video artist from actually having to go on tour and run all the video live at each performance. The video product could be systemized and run by the musicians or outsourced to the lighting tech/sound guy. (Not to say that actually performing the video show live isn't ideal – I know this is a cornerstone issue for VJs.)

    Basically, we are talking about the possibility of the video artist being free to work with more than one musician or tour at a time. It means new opportunities for video artists to extend their reach and the potential to make money by creating high quality visual products for musicians. This has great value and is very exciting!

    Thanks Surya and keep up the good work!