This shouldn’t work.

A video full of lasers, mountain ranges inexplicably floating through space, and endless shots of cats should seem a cloying, annoying play for eyeballs.

And yet …

Can’t … take my eyes … away … somehow. (And I’m a dog person, darnit.)

Thank Immo König. The solo director / motion graphics house did the visual production to the hypnotically-chilled rhythms of Phon.o, in a matchup for the label 50 Weapons. König’s one-man band approach shows in a creation that’s inventive, but also tightly directed and paced. And in case you hadn’t guessed, he’s been doing this a while: he’s a veteran compositor, effects, and motion artist. (His site lists only “Autodesk Flame, Adobe After Effects, Final Cut Pro, Nuke (Basic).” Okay, then. Guessing we’re getting all After Effects here.)

With dark, warm synths quivering against the lone beat and vocal loop, all the motion silliness seems dreamy, not camp, the afternoon reverie of a child instead of the mad ramblings of the Interwebs.

But a lot of this is the lovely music, including Kit Clayton – you’ll know him as a key figure behind Jitter (of Max/MSP Jitter). Yes, those are even his spectacles on the nose of a cat in the old Jitter image.


From the EP “Cracking Space Pt. 1”
Release Date: March 28, 2014

Production: Visualking
Music: Phon.o

Anyway, if you don’t like it, blame this entire post on some cat mind control tri–

Hey! What just happened there?!


Download the track from XLR8R.

Vuo 0.6 normal+specular mapping demo from Steve Mokris on Vimeo.

Vuo, aka “that visual development environment that isn’t Quartz Composer,” continues to march forward.

Just a quick tidbit, but this looks like a tasty morsel: normal and specular maps on 3D objects mean more photo-realistic lighting. As the demo video puts it:

Vuo 0.6 supports normal+specular maps on 3D objects. Here’s a simple quad, textured with diffuse+normal+specular maps, and a point light moving around in front of the quad.

Nice stuff. And seeing this live opens up some new possibilities.
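For the curious, the shading the demo describes boils down to the standard Blinn-Phong recipe with per-texel normals and specular intensity. Here’s a rough sketch in plain Python for a single surface point – an illustration of the general technique, not Vuo’s actual shader code:

```python
# Diffuse + normal + specular shading for one texel, Blinn-Phong style.
# This is a generic sketch of the technique the demo names, not Vuo internals.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(albedo, normal_sample, spec_sample, light_dir, view_dir, shininess=32):
    """Shade one surface point.

    albedo        -- RGB from the diffuse map, each channel 0..1
    normal_sample -- normal map texel, already decoded to a vector
    spec_sample   -- scalar from the specular map, 0..1
    light_dir     -- vector from the surface point toward the light
    view_dir      -- vector from the surface point toward the camera
    """
    n = normalize(normal_sample)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)                      # Lambertian term
    half = normalize(tuple(a + b for a, b in zip(l, v)))
    specular = spec_sample * max(dot(n, half), 0.0) ** shininess
    return tuple(min(c * diffuse + specular, 1.0) for c in albedo)

# Light and camera straight ahead of a flat quad: full diffuse, bright highlight.
print(shade((0.8, 0.2, 0.2), (0, 0, 1), 0.5, (0, 0, 1), (0, 0, 1)))
```

Per-pixel normals are what make a flat quad catch a moving point light convincingly: the lighting responds to the bumps encoded in the texture rather than the actual geometry.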

Of course, we’ve seen vvvv (“vee four”) do this for some time, but notice you’re now seeing it on a Mac. (Ahem.)

I’ll also use this somewhat minor update to call out – who’s using Vuo? Done any cool projects yet? Do share.

Previously, a more in-depth look at this tool:
Vuo in Beta: A New Hope for Visual Development? [Resources]

Another demo:

Vuo 0.6 lighting demo from Steve Mokris on Vimeo.

It’s about time the maker movement tackled display technology.

Enter OSCAR (Open SCreen AdapteR). It’s the sort of super high-resolution 9.7″ LCD panel you’d expect trapped inside something like an iPad, but you can connect it directly to a computer via Arduino.

Now, the actual “DIY” bit here is pretty simple: it’s just the interface. But even just having the interface is fairly useful. The display tech itself remains mass-market, mass-produced, but by pairing that raw display panel with the interface, you can build your own projects – and there are clearly some installation and other DIY projects just waiting to happen.

And people building installations and other projects get more than just the ability to output to the display. Because the Arduino-based hardware acts as the controller, you also finally get to control rotation, backlight brightness, and other parameters – just the stuff you’re missing when you try to build iPads into installations.

There are various versions here, depending on your needs; all work via DisplayPort/Thunderbolt.

I’m not sure OSCAR would work for every application here – a tablet retains touch, crucially, which this lacks – but it could be a sign of things to come.

And, of course, it’s on Kickstarter. GBP65 gets you the basic hardware; pricing goes up from there. The UK-based project ends in about a week.


The narrative around Kinect and how hackers and artists opened it up has always been a little oversimplified. You may have heard something like this: thanks to a bounty, creative individuals “hacked” Microsoft’s Kinect camera and made it open.

That’s true, but it isn’t the whole story. While there is a “hacked” Kinect toolset, most of the creative applications you’ve seen make use of a richer set of frameworks from OpenNI. “OpenNI” referred to an alliance of individuals and organizations, and was supposed to represent various interests, as well as what the group called on their Website “an open source SDK used for the development of 3D sensing middleware libraries and applications.”

And now the kinda-sorta-open thing is about to be a huge problem, with the sudden demise of OpenNI and no recourse for anyone using its tools. When you hear long-bearded gentlemen complaining about the vagaries of the term “open source” or “open,” this is the very thing that keeps them up at night.

First, OpenNI was always dominated by a single company, PrimeSense. And rightfully so: PrimeSense developed the core technologies in the first Kinect camera, and had the talent who knew how to use it. But that in turn meant that OpenNI was heavily skewed toward that one vendor.

Second, “open” was used to describe the whole project, when the whole project wasn’t open.

And now the entire project is about to shutter. PrimeSense was bought by Apple, and the direct result of that acquisition (as I and many others predicted) is the demise of OpenNI.

In fact, with Apple splashing just this kind of creative technology all over their Website on the anniversary of the Mac, it’s deeply disappointing that Apple leadership isn’t intervening here. The closing of OpenNI is unceremonious and absent any kind of useful information. Visitors only get this:

“Software downloads will continue to be available until April 23rd, 2014 at which time the OpenNI website will be closed.”

Assuming both OpenNI and its site are dead, the question becomes how to redistribute and license the code. The issue is, there are two components. There’s the OpenNI SDK, which is under an open license and redistributable. But the good bits are part of what’s called “middleware” – components under additional, proprietary licenses. And that’s where the real magic of what Kinect does lies: the “skeleton tracking, hand point tracking, and gesture recognition.”

All of that is about to go away. And because NiTE – the PrimeSense middleware that provides those features – is strictly proprietary, even the free (as in beer) downloads formerly used by artists are now off-limits.

This is likely to be an ongoing challenge with clever new depth-sensing camera technologies, because a lot of the “special sauce” is remaining proprietary. So far, the knowledge of how to make it work has stayed locked up enough in the proprietary sector that there aren’t really open source alternatives.

That said, in a bizarre twist of fate, you can actually look to Microsoft (no joke) for understanding open source technology and working with the community, while Apple does quite the opposite.


If your computer can visualize massive three-dimensional worlds internally, why can’t it be just as savvy in multiple dimensions on output?

That’s the question asked by MWM. Like other mapping software, MWM aims to get beyond just a single rectangle as output, supporting multiple outputs, 3D surfaces, and warping.

What’s unique about MWM is that it sets out to be more fluid with surfaces and outputs, all running “purely” on your computer’s GPU. And the workflow becomes completely different.

Rather than crudely mapping some video textures to cubes, MWM lets you bring in entire 3D models and meshes, with complete support for lighting and texture mapping, and some decent MIDI and media support all its own.

The highlights:

  • Real-time rendering and editing
  • Lighting (area/ambient, direction, point, spot)
  • Real-time camera collaboration
  • Warping, plus horizontal and vertical edge blending
  • Runs on Syphon (like MadMapper) on OS X; also supports Windows
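For a sense of what the edge-blending item above involves: where two projectors overlap, each is faded out across the overlap with a gamma-corrected ramp, so the doubled region doesn’t read as a bright seam. A minimal sketch in Python of that standard recipe – not MWM’s actual code:

```python
# Gamma-corrected edge-blend ramp for one projector, 1D (horizontal) case.
# The standard technique named in the feature list above; hypothetical code,
# not MWM's implementation.

def blend_weight(x, overlap_start, overlap_end, gamma=2.2, rising=True):
    """Brightness multiplier for a projector at normalized position x (0..1).

    Inside the overlap region a linear ramp t is raised to 1/gamma so that,
    after the display's gamma is applied, the two projectors' light output
    (t and 1 - t) sums to a uniform 1.0 across the seam.
    """
    if x <= overlap_start:
        t = 0.0
    elif x >= overlap_end:
        t = 1.0
    else:
        t = (x - overlap_start) / (overlap_end - overlap_start)
    if not rising:               # the other projector fades the opposite way
        t = 1.0 - t
    return t ** (1.0 / gamma)

# At the center of the overlap each projector runs at ~73% drive,
# which is 50% light output after a 2.2 gamma.
print(blend_weight(0.5, 0.4, 0.6))
```

The same ramp applied vertically gives the “horizontal and vertical edge blending” of the feature list; the gamma exponent is the knob you’d tune per projector.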

Full specs on the site, but a number are especially eye-catching.


From deep in Seoul’s underground, Maxqueen (Kloe) and Chang Park have forged an intimate audiovisual collaboration, in a rich dance of minimal, generated geometries in live music and image. Spanning Korea, Baltimore, Maryland, and now Berlin, they’ve played some of the biggest tickets in live techno (for Maxqueen) and digital media art (for Chang).

We’re thrilled to have them playing together – in their individual projects and then a duo – in Berlin on Friday night.

And we wanted to talk to them a bit about the craft of collaboration, and generating live performance through careful control of parameters in homemade patches in Max for Live and Max/MSP/Jitter.

Looking into their sets, what you see is a kind of joint modular synthesizer for music and visuals, a rigorous and disciplined focus on even the simplest parameters in order to make the fusion of optical and aural clear. We discuss their process, and – in a topic I’m sure is familiar to our readers in many corners of the world – how a scene is beginning to grow from the first seeds in Seoul.


First, let’s watch a bit of their work (hoping for even better documentation after Friday).

From Vitruvius.

Installation work by Arístides Garcia, working on an ongoing thread of body and projection.

Get ready for the high Renaissance of the digital.

The individual ingredients remain the same. But the threads of transmedia work, spanning everything from traditional costume design and choreography to the latest generative projections, draw ever closer.

A new generation of artists treats projection in the theater as something elemental, something that demands exploration.

Vitruvian, premiering its latest iteration in Berlin Friday as part of our event unrender with LEHRTER SIEBZEHN, has gradually evolved in the ways it interweaves dance with opera, projected visualization and interactive sound with dance theater. An “interactive opera” in one act, it unapologetically adds digital imagery as an additional performer. But the human body insistently remains, in movement and voice, bathed in electronic light but not upstaged.

Vitruvian(old) Platoon Kunsthalle Berlin 2013 from Tatsuru Arai on Vimeo.
