Live performers want a more intimate relationship between music and visuals onstage. Whether they’re working solo or with a live visualist, that means getting useful signals flowing between musical tools, visual tools, and performance elements.
Livegrabber has got to be about the easiest way to do this I’ve seen yet. It’s actually a suite of plug-ins for Ableton Live’s Max for Live environment that spits out OSC (OpenSoundControl) messages to any visual tool that can respond.
And as you can see in the video, the results are both effortless and profound.
It’s best seen in this example with VDMX. (Any OSC tool will work – I’m rather keen to code around this with Processing, for instance, just for kicks. But speaking of VDMX, that superb tool is on a fire sale for a hundred bucks this month; act fast!)
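If you do feel like rolling your own receiver, here’s a minimal sketch of what that looks like in Processing with the oscP5 library. The port and address pattern are placeholder assumptions – Livegrabber lets you set your own:

```
// Minimal Processing sketch: receive OSC (e.g., from Livegrabber)
// and map an incoming float (0..1) to the brightness of the canvas.
// Requires the oscP5 library (Sketch > Import Library > oscP5).
import oscP5.*;

OscP5 osc;
float level = 0; // last value received

void setup() {
  size(400, 400);
  osc = new OscP5(this, 9000); // port 9000 is an assumption; match your Livegrabber settings
}

void oscEvent(OscMessage msg) {
  // "/track/volume" is a placeholder address pattern - name your own in Livegrabber.
  if (msg.checkAddrPattern("/track/volume")) {
    level = msg.get(0).floatValue();
  }
}

void draw() {
  background(level * 255); // fade the canvas with the incoming signal
}
```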
Melbourne-based artist Jayson Haebich has rendered architectural forms in color, literally in thin air. He uses custom software to map lasers through particles, producing an ephemeral sculpture of air. The results are gorgeous – frozen digital motion.
From his description:
These are a series of static light sculptures, created using laser light, smoke, shadows, physical shapes, and custom-built software, forming complex compositions of shadow and light that play with the sense of depth and perception. These pieces challenge the observer’s idea of perspective and ask them to consider which components of the installation are physical objects and which are non-physical, such as light and shadow. These sculptures fill the room with light and colour, creating an almost mesmerizing effect as beams of precisely mapped laser light cut through swarming particles and haze suspended in mid-air to create almost solid-looking planes of illumination.
The pieces were created using custom-built software, written in openFrameworks, that maps out physical features using an RGB laser.
Take your visual app. Take a parameter. Now control it with timelines and automation – anything, anywhere.
The appeal for visualists is clear, particularly since built-in automation options are often limited. Now, sequenced control – clocked to MIDI – can take on powerful new dimensions.
Vezér isn’t the first app to go this direction, by any means. I’ve admired the insanely powerful IanniX in the past, for instance. But then, “insanely powerful” isn’t always what you want – particularly if that depth just drives you insane trying to do something simple. Vezér, just now entering beta, promises to be a bit simpler. And coming from the developer of CoGe, it also arrives from a coder with a track record.
We’ll be watching for this to come out, but in the meantime, its creator gives CDM a first look. Details:
Vezér is under development; no public beta is available yet.
You can have any number of compositions, and each composition can contain any number of tracks.
There are different track types in Vezér, like single MIDI CC message or MIDI CC range.
Vezér supports Undo/Redo across the whole application, and also supports Copy/Paste of keyframes.
The playback speed of a composition is adjustable and can be synchronized with a MIDI Clock.
The resolution – FPS – of a composition can be set.
Vezér supports sending of MIDI CC messages.
Different interpolation can be set for each keyframe (see the sketch after this list).
Vezér supports recording of incoming MIDI signals.
Vezér can be controlled via MIDI or OSC.
Vezér is a 64-bit application for Mac OS X 10.6 or later.
Vezér will be a commercial application.
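To make that per-keyframe interpolation item concrete, here’s a toy Processing-style sketch – our illustration of the idea, not Vezér’s actual code – blending a parameter between two keyframes with either a linear or an eased curve:

```
// Toy illustration of per-keyframe interpolation (not Vezér's code):
// each keyframe stores a time and a value, plus the curve used to
// travel toward the next keyframe.
float interpolate(float t, float t0, float v0, float t1, float v1, boolean eased) {
  float u = constrain((t - t0) / (t1 - t0), 0, 1); // normalized position, 0..1
  if (eased) {
    u = u * u * (3 - 2 * u); // smoothstep: slow in, slow out
  }
  return lerp(v0, v1, u); // blend between the two keyframe values
}

void setup() {
  // Halfway between keyframes (0 s, value 0) and (2 s, value 127), eased:
  println(interpolate(1.0, 0.0, 0, 2.0, 127, true)); // prints 63.5
}
```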
Body mapping and dance/visual fusions are still explored only in fits and starts, compared to the extent of live music and visual performance in other media. So, it’s encouraging to see this latest experiment from dancer Christian Mio Loclair. Working with Microsoft’s Kinect, he sets slowly undulating tendrils of visuals behind himself as counterpoint to headstands and hip-hop dance techniques. Far from running up against latency, the visuals here answer his moves with a slow sigh, creating a kind of living architectural space behind him.
Christian muses to CDM, “I am very convinced that especially the Kinect and the upcoming Kinect 2 will change the way dance will be performed. I hope I can contribute to this development … just some streetdancers, hackers and a Kinect. I wanna see how far we can get by open source Code, own code and Open Dance.”
Blast from the past: this color organ is from 2007. But it’s a beautiful demonstration of light and sound, fused into a single interface, and thus worth mentioning as I pull together notes for a talk at Mapping Festival tomorrow here in Genève. Compare the 60s-vintage Lumigraph of Oskar Fischinger, which I write about today on Create Digital Music.
In gooey pinks and purples, traced with imaginary sparks, watching the game controller-manipulated system is like looking into the heart of a great jellyfish made of plasma.
Nail the finger fireworks of a particularly hard Rachmaninoff, and you may well feel like blasts of light are shooting out of the piano. But to give the audience the same sense, a DIY instrument made of cardboard and homebrewed responsive lighting translates that keyboard virtuosity to an optical show. Reader Aylwin Lo sends us this project out of Canada:
I’m with YAMANTAKA // SONIC TITAN. We’re an art collective based in Toronto and Montreal that is most known for making music and putting on dramatic live shows. People like Pitchfork, Vice, and MTV Iggy have nice things to say about us.
We made a video of our keyboard player pulling off a notoriously difficult Rachmaninoff composition on a special piano we constructed from an electric piano, a cardboard baby-grand shell, and a homebrew, Arduino-controlled LED light rig, and we thought you might like it.
You thought right. Now, if you want to play piano like this, you … uh, have some practicing ahead. But in a novel twist on crowd-funding rewards, they’re also putting this very artist to work to help – even as they work on a video game. (Stay with me here.)
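We don’t have the band’s actual code, but the key-to-light translation is simple enough to sketch. Here’s a hypothetical Processing patch using The MidiBus and the built-in Serial library, forwarding note velocity to an Arduino as a brightness byte; the Arduino end would just analogWrite() whatever arrives:

```
// Hypothetical sketch (not the band's code): turn MIDI note velocity
// into a brightness byte for an Arduino-driven LED rig.
// Requires The MidiBus library; device indices below are assumptions -
// run MidiBus.list() to find yours.
import themidibus.*;
import processing.serial.*;

MidiBus midi;
Serial arduino;

void setup() {
  midi = new MidiBus(this, 0, -1); // MIDI input device 0, no MIDI output
  arduino = new Serial(this, Serial.list()[0], 9600); // first serial port
}

void noteOn(int channel, int pitch, int velocity) {
  arduino.write(velocity * 2); // scale 0..127 velocity to 0..254 PWM brightness
}

void noteOff(int channel, int pitch, int velocity) {
  arduino.write(0); // lights out when the key is released
}

void draw() {
  // nothing to render; draw() just keeps the sketch alive for callbacks
}
```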
Entropy never looked this good. Or, certainly, it looks a lot better than when I broke that beer glass. (I know, I know: there are reasons why we don’t want to live in a universe where doing that would make the glass re-assemble itself from its fragmented shards.) Photo courtesy the designers.
In an elegant, balletic dive, taking an almost impossibly long span of time, a single droplet of water falls and splashes, an animated logo peeking out from the inside. But it’s what isn’t there that may surprise you. There’s no slow-motion camera behind the scenes, meaning the usual way of doing a shot like this is absent.
Instead, what you’re seeing is a stop-motion time lapse – a record of the shifting patterns of entropy in nature, thousands of different droplets that appear connected but in reality are not. It’s a stroboscopic illusion – a trick of animation and high-speed lighting, not high-speed photography.
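To spell out the timing idea with a toy model (our numbers, not Physalia’s): photograph one fresh droplet per frame of the final animation, firing the flash a tiny bit later after each release, and the assembled frames read as a single, impossibly slow fall:

```
// Toy timing model of the stop-motion trick - numbers are assumptions,
// not Physalia's. One new droplet is photographed per output frame,
// with the flash fired slightly later in the fall each time.
int   frames  = 250;    // frames in the finished shot
float fps     = 25.0;   // playback rate of the final animation
float stretch = 100.0;  // how much slower than real time the fall should appear

void setup() {
  for (int frame = 0; frame < frames; frame++) {
    // real fall time this frame should depict, in milliseconds:
    float delayMs = (frame / fps) / stretch * 1000;
    println("frame " + frame + ": fire flash " + delayMs + " ms after drop release");
  }
}
```

At these numbers, each successive frame catches its droplet just 0.4 ms further into the fall – which is why the motion reads as one continuous, dreamlike descent.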
And it’s a fun DIY project, to boot. 3D and robotics? Okay, we’re in.
The Barcelona-based studio that produced it shares not only the animated result (including a logotype for design mag IdN), but also a short film explaining the making-of. Physalia’s Belén Palos writes us:
We got your contact through Alex Trochut, who shares a studio here in Barcelona with us. We are big admirers of Create Digital Motion, so we would love to share with you our new piece Entropy, a joint effort from our 3D division and robotics lab, in which we created a system to capture the fall of a water drop without a slow-mo camera – with replacement animation mapped inside the drop! There’s an Arduino involved, as well as our self-developed motion control.