The narrative around Kinect and the hackers and artists who embraced it has always been a little oversimplified. You may have heard something like this: thanks to a bounty, creative individuals “hacked” Microsoft’s Kinect camera and made it open.

That’s true, but it isn’t the whole story. While there is a “hacked” Kinect toolset, most of the creative applications you’ve seen make use of a richer set of frameworks from OpenNI. “OpenNI” referred to an alliance of individuals and organizations, and was supposed to represent various interests, as well as what the group called on their Website “an open source SDK used for the development of 3D sensing middleware libraries and applications.”

And now the kinda-sorta-open thing is about to be a huge problem, with the sudden demise of OpenNI and no recourse for anyone using its tools. When you hear long-bearded gentlemen complaining about the vagaries of the term “open source” or “open,” this is the very thing that keeps them up at night.

First, OpenNI was always dominated by a single company, PrimeSense. And rightfully so: PrimeSense developed the core technologies in the first Kinect camera, and had the talent who knew how to use it. But that in turn meant that OpenNI was heavily skewed toward that one vendor.

Second, “open” was used to describe the whole project, when the whole project wasn’t open.

And now the entire project is about to shutter. PrimeSense was bought by Apple, and the direct result of that acquisition (as I and many others predicted) is the demise of OpenNI.

In fact, with Apple splashing just this kind of creative technology all over their Website on the anniversary of the Mac, it’s deeply disappointing that Apple leadership isn’t intervening here. The closing of OpenNI is unceremonious and absent any kind of useful information. Visitors only get this:

“Software downloads will continue to be available until April 23rd, 2014 at which time the OpenNI website will be closed.”

Assuming both OpenNI and its site are dead, the question becomes how to redistribute and license the code. The issue is, there are two components. There’s the OpenNI SDK, which is under an open license and redistributable. But the good bits are part of what’s called “middleware” – PrimeSense’s NiTE, under an additional, proprietary license. And that’s where the real magic of what Kinect does lies: the “skeleton tracking, hand point tracking, and gesture recognition.”

All of that is about to go away. And because NiTE is strictly proprietary, even the free (as in beer) downloads formerly used by artists are now off-limits.

This is likely to be an ongoing challenge with clever new depth-sensing camera technologies, because a lot of the “special sauce” remains proprietary. So far, the knowledge of how to make it work has stayed confined enough to the proprietary sector that there aren’t really open source alternatives.

That said, in a bizarre twist of fate, you can actually look to Microsoft (no joke) for understanding open source technology and working with the community, while Apple does quite the opposite.

If your computer can visualize massive three-dimensional worlds internally, why can’t it be just as savvy in multiple dimensions on output?

That’s the question asked by MWM. Like other mapping software, the idea is to get beyond just a single rectangle as output, supporting multiple outputs, 3D surfaces, and warping.

What’s unique about MWM is that it sets out to be more fluid with surfaces and outputs, all running “purely” on your computer’s GPU. And the workflow becomes completely different.

Rather than crudely mapping some video textures to cubes, MWM lets you bring in entire 3D models and meshes, with complete support for lighting and texture mapping, and some decent MIDI and media support all its own.

The highlights:

  • Real-time rendering and editing
  • Lighting (area/ambient, direction, point, spot)
  • Real-time camera collaboration
  • Warping, plus horizontal and vertical edge blending
  • Runs on Syphon (like MadMapper) on OS X; also supports Windows

Full specs are on the site, but a number of them are especially eye-catching.

From deep in Seoul’s underground, Maxqueen (Kloe) and Chang Park have forged an intimate audiovisual collaboration, in a rich dance of minimal, generated geometries in live music and image. Spanning Korea, Baltimore, Maryland, and now Berlin, they’ve played some of the biggest tickets in live techno (for Maxqueen) and digital media art (for Chang).

We’re thrilled to have them playing together – in their individual projects and then a duo – in Berlin on Friday night.

And we wanted to talk to them a bit about the craft of collaboration, and generating live performance through careful control of parameters in homemade patches in Max for Live and Max/MSP/Jitter.

Looking into their sets, what you see is a kind of joint modular synthesizer for music and visuals, a rigorous and disciplined focus on even the simplest parameters in order to make the fusion of optical and aural clear. We discuss their process, and – in a topic I’m sure is familiar to our readers in many corners of the world – how a scene is beginning to grow from the first seeds in Seoul.

Monokord

First, let’s watch a bit of their work (hoping for even better documentation after Friday):

From Vitruvius.

Installation work by Arístides Garcia, working on an ongoing thread of body and projection.

Get ready for the high Renaissance of the digital.

The individual ingredients remain the same. But the threads of transmedia work, spanning everything from traditional costume design and choreography to the latest generative projections, draw ever closer.

A new generation of artists treats projection in the theater as something elemental, something that demands exploration.

Vitruvian, premiering its latest iteration in Berlin Friday as part of our event unrender with LEHRTER SIEBZEHN, has gradually evolved in the ways it interweaves dance with opera, projected visualization and interactive sound with dance theater. An “interactive opera” in one act, it unapologetically adds digital imagery as an additional performer. But the human body insistently remains, in movement and voice, bathed in electronic light but not upstaged.

Vitruvian(old) Platoon Kunsthalle Berlin 2013 from Tatsuru Arai on Vimeo.


From VITRUVIAN, a one-act interactive opera, premiering a new version in Berlin on Friday. Photo courtesy the artist.

There’s not a word yet for visuals as event.

We know it when we see it. And we know it in other media. With music, there’s no question when something becomes performative, when the human element is something you can’t subtract. But in electronic visuals, in light and image, the awareness of what is emerging in the medium seems latent.

The narrow view of VJing and club visuals is dated. And disconnecting those media from generative and interactive work misses an explosive and dynamic new craft. Whether it’s clever work with optical analog and overhead projectors, or a delicately-constructed piece of interactive software, the question is simple: when can you take something in visual media and tell people “you had to be there”?

It’s long past time for CDM to renew its commitment to this question. It was the root of starting something called createdigitalmotion, and part of why music and motion are intertwined — finding that inflection point when a visual creation can speak like a musical instrument. And in the first chapter of that, I’m pleased to announce a curatorial partnership with Berlin’s LEHRTER SIEBZEHN, and curators Stefanie Greimel and Johanna Wallenborn. This will bring an event in Berlin fusing music with visual experience in installation and performance, and taking that showcase online for our readers internationally.

Here’s Johanna on what we set out to do:

UNRENDER is about showcasing a digital community physically in a space, bringing together remarkable visual artists whose work is usually not accessible to the general public. UNRENDER is really about the immediateness of art being experienced, having some of the most captivating artists performing live and enabling them to show their work outside of the traditional contexts. In that way, UNRENDER offers an unprecedented leeway for the visual scene.

Edition one image. Made in Processing.


We see audiovisuals routinely in darkened, air-conditioned rooms. The Ark instead invites viewers to wander a botanical setting, plant life bathed in hard-contrast light and shadow, architectural forms animated through the vegetation. Set in Mexico’s Ethnobotanical Garden of Oaxaca, multiple works from visual label ANTIVJ create an open-ended environment of image and sound.

Concept & Visual design by Romain Tardy
Music composed by Squeaky Lobster (we’ve heard his fine music before in Mapping Festival trailers, among other places)
Project Management by Nicolas Boritch

THE ARK from ANTIVJ is a visual label on Vimeo.

The work was presented in April, and ANTIVJ has just released their documentation. Laurent Delforge, aka Squeaky Lobster, composed an 8-channel musical soundscape, and has this to say:

There’s nothing like the comfort of your own home. The comfy feel of underwear instead of annoying … clothes. The absence of … other people in the same room.

Yes, if you’ve ever dreamt of learning the lovely semi-modular, powerful VJ tool VDMX, now you don’t need to wait until experts in the tool come to your part of the world. You don’t even need clothes. (Though, maybe you’re chilly. You can totally wear clothes, too.)

Vidvox’s own David Lublin will be presenting a workshop for video artists ready to get deep into VDMX. You can hear from this core developer, and then ask questions, too.

It’s not only a nice opportunity to hone your visual performance tools for AU$25; it’s a nice idea for training in general. With the visual community as scattered as it is around the globe, if this works well, I hope it gets repeated. We can’t all fly to be with one another, unfortunately. This is surely the next best thing. Scheduled for noon US Eastern (NYC) time on March 1, it works out reasonably well for much of the planet.

Find out more:
March 1: VDMX Masterclass @ ACMI with David Lublin, Vidvox, NY [skynoise]

Registration and info at the Australian Centre for the Moving Image

Let us know if you go for it.

And on your own time, there’s always this:

http://vdmx.vidvox.net/tutorials/