In a teaser video just released by Spain’s Things Happen, a silhouetted performer uses arm position to sweep through RGB colors and trigger sound cues. It’s the latest effort to integrate the immersive media environment with a performer’s body, part spectacle, part interface.
The ingredients, apart from Microsoft’s ubiquitous Kinect depth camera:
Motion capture + image = light + sound
MadMapper [using the MadLight feature to trigger lighting]
Music: Sun Glitters x Isan – Snowfall
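Things Happen haven’t published their patch, but the core mapping is simple enough to sketch. Here’s a minimal, hypothetical Python version, assuming normalized hand coordinates from a Kinect skeleton tracker – the function names and thresholds are my own, not theirs:

```python
import colorsys

def arm_to_rgb(hand_y, hand_x):
    """Map a tracked hand position (normalized 0..1, e.g. from a Kinect
    skeleton tracker) to an RGB color: arm height sweeps the hue,
    horizontal reach sets the brightness."""
    hue = max(0.0, min(1.0, hand_y))              # raise the arm, shift the hue
    value = 0.5 + 0.5 * max(0.0, min(1.0, hand_x))
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)

def sound_cue(hand_y, threshold=0.8):
    """Fire a sound trigger when the arm crosses a height threshold."""
    return hand_y > threshold

# Example: arm raised three-quarters of the way up, extended halfway out
print(arm_to_rgb(0.75, 0.5))   # -> (95, 0, 191), a violet
print(sound_cue(0.75))         # -> False (below the 0.8 trigger point)
```

In the actual setup, the color would feed MadMapper’s lighting output and the threshold crossing would fire the sound cue.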
The nice thing about the interlinked, comment-enabled Web is that we get to see more.
Inverting the relationship of color to output, Things Happen uses a color turntable to trigger lights and sound. The colors themselves become a score for audiovisuals, a return to the days of optical discs and optical film tracks.
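The artists haven’t detailed their mapping, but “color as score” can be as simple as sampling the hue passing under a fixed read head and quantizing it to a note. A sketch of that idea in Python – the pentatonic scale and MIDI base note are my assumptions, not theirs:

```python
def hue_to_note(hue, scale=(0, 2, 4, 7, 9), base=60):
    """Quantize a hue (0..1), sampled from a fixed point over the
    spinning color disc, to a pentatonic MIDI note number."""
    octave, degree = divmod(int(hue * len(scale) * 2), len(scale))
    return base + 12 * octave + scale[degree]

# As the turntable rotates through red -> green -> blue...
for hue in (0.0, 0.33, 0.66, 0.99):
    print(hue_to_note(hue))   # prints 60, 67, 74, 81
```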
Enveloped by a sculptural projection of light, the band performed amidst a shimmering visual spectacle. Photo courtesy the artists; check out their Instagram.
Amidst a forest of illuminated tubing, the band How to Destroy Angels (with Trent Reznor, Atticus Ross, Mariqueen Maandig, and visualist/art director Rob Sheridan) takes the stage inside an abstract digital structure. The stage image is transcendent, a rectangular prism of shifting digital visuals and projected effects. Perhaps it’s best to run through the specs:
16 ft. tall (nearly 5m) curtain of surgical tubing
Data visualization is moving from the macroeconomic and large-scale – census numbers and such – to the personal. And digital work is getting more physical. So, it’s telling to look at this latest interaction design project from Copenhagen-based creator Andrew Spitz. The sound designer-turned-interaction designer built an app in Max/MSP that pulls travel information – entered manually or from TripIt – and outputs graceful arcs in a 3D-printed sculpture that acts as a tangible travelogue. (I’d actually love to see it go further, perhaps showing elevation with flight tracking or something, but the simple gesture here is nice.)
Max to me is a surprising choice, but it shows how capable that environment can be for those willing to push it.
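Andrew hasn’t shared the patch itself, but the geometry behind those arcs is straightforward: interpolate along the great circle between two airports and lift the path off the sphere. A rough Python equivalent of what the Max patch might compute – the parameter names and the shape of the lift are my guesses:

```python
import math

def latlon_to_xyz(lat, lon):
    """Convert degrees latitude/longitude to a unit vector on the globe."""
    lat, lon = math.radians(lat), math.radians(lon)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def travel_arc(origin, dest, steps=32, lift=0.15):
    """Great-circle arc between two (lat, lon) pairs, arched above the
    sphere at its midpoint -- the kind of path you could sweep into a
    printable tube. Assumes origin and dest are distinct points."""
    a, b = latlon_to_xyz(*origin), latlon_to_xyz(*dest)
    omega = math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b)))))
    points = []
    for i in range(steps + 1):
        t = i / steps
        # spherical linear interpolation between the endpoints
        w1 = math.sin((1 - t) * omega) / math.sin(omega)
        w2 = math.sin(t * omega) / math.sin(omega)
        r = 1.0 + lift * math.sin(t * math.pi)   # arch above the surface
        points.append(tuple(r * (w1 * x + w2 * y) for x, y in zip(a, b)))
    return points

# Copenhagen to New York, as one arc of the sculpture
arc = travel_arc((55.68, 12.57), (40.71, -74.01))
```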
In another meeting place between digital and tangible, Andrew is using hand-stamped elements to make the UI for Flying, an iPhone app that will link to loci with tracking information. I love this sort of technique; it’s the kind of broad perspective that reminds you that working in digital media doesn’t mean you can’t mix in non-digital media. Watch:
New gestural input hardware is already beginning to make Kinect look crude, offering three-dimensional sensing that combines low latency with precise gesture tracking. That offers tremendous potential for new interfaces, and in particular could finally help solve the problem of how to work intuitively with three-dimensional interface concepts that don’t lend themselves to traditional touch and physical input.
Leap Motion is already speeding toward its mid-May launch, complete with an app store ecosystem for developers wanting to push out new ideas to users. But Leap is a closed box. Apart from any philosophical objection, that means you can’t take it apart and understand how it works, or modify its physical form to work in new performance instruments, installations, and the like. (And that’s of prime interest to many readers here, of course.)
That’s where DUO comes in. It’s an open-source alternative, now in its final week of crowd funding. My inbox is inundated with crowd funding pitches from sites like Kickstarter, but this one is special. The prototype and current software are already drool-worthy. And the team behind the project has worked on software we’ve already covered on CDM: projects like CL Eye (the free Windows driver for Sony’s PS3 Eye camera), CL Kinect, CL Studio, and, crucially, the popular open source touch libraries CCV and Touchlib.
Update: I should note that AlexP did rub some folks the wrong way with how he funded past projects, including some (fair, I think) criticism of whether his licenses were indeed open. He was responsive to that criticism. At this point, I’d like to know what license they plan to offer. I also think there are reasonable concerns about open source projects getting crowd funding before they’ve released code – though, with hardware, it’s also fair for makers to wait until they’re shipping before they release their work, for a variety of reasons. (I actually think waiting in this case is the smart thing to do; that’s perhaps a subject for another article.)
Performance is also a variable. 240 FPS video, anyone? With higher framerates comes finer-grained performance control, which suggests more powerful musical applications (where Kinect in particular has lagged badly, literally).
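The arithmetic makes the point: frame period puts a hard floor under sensing latency.

```python
# A gesture can wait up to one full frame before the camera even sees it,
# before any processing begins.
for name, fps in (("Kinect", 30), ("240 FPS stereo camera", 240)):
    print(f"{name}: {1000 / fps:.1f} ms per frame")
# Kinect: 33.3 ms per frame
# 240 FPS stereo camera: 4.2 ms per frame
```

At 30 FPS you’re a full 33 ms behind before tracking even starts – a delay musicians can feel. At 240 FPS, that floor drops to about 4 ms.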
VJon (Jon Bonk), curator of the competition, in action. Courtesy the artist.
Along with New York, Boston was really my introduction to what was happening in live visuals and VJing. So, I already took interest when a couple of friends sent word of a VJ competition in Boston. Then, of course, things in that city turned very strange indeed. The festival “Together,” scheduled in May, couldn’t have a better title – we’re thinking of you.
So, Northeast USA, I’m absolutely pleased to alert you to this competition; I hope we get some CDM readers in it. And for the rest of us around the world, here’s a look at what a couple of our friends are doing with VJing in Beantown. (This is by no means a comprehensive list of the folks I know there, just a couple of the artists working on this event!)
NightRide Visuals, a collective led by Jay NightRide, got a visit from Akai Pro, who featured how they work with the APC40 controller and their approach to events in general:
LEDs are poised to revolutionize displays, but that future isn’t “evenly distributed” yet. And that’s why some emerging projects are so intriguing. Think instant displays, addressable from any computer – including with Syphon. Just plug Ethernet into one end, strings of these into the other:
That’s the idea behind PixelPusher, a project currently on Kickstarter. Development appears complete; they’re using crowd funding to support a run of a thousand units, made in California. The notion is networked lighting arrays you can address from a computer, using Syphon or free apps they’ve built with Processing. (For anyone who uses Processing, this is also very, very exciting.)
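Their Processing library handles device discovery and transmission; the part you’d write yourself is just resampling a rendered frame onto strip coordinates. A language-neutral sketch of that step in Python – the get_pixel callback and the dimensions are placeholders, and the actual network send is left to their library:

```python
def frame_to_strips(get_pixel, width, height, num_strips, pixels_per_strip):
    """Downsample a rendered frame (any canvas exposing get_pixel(x, y)
    returning an (r, g, b) tuple) onto horizontal rows of LED strips."""
    frames = []
    for s in range(num_strips):
        y = int((s + 0.5) * height / num_strips)     # row center for this strip
        row = []
        for p in range(pixels_per_strip):
            x = int((p + 0.5) * width / pixels_per_strip)
            row.append(get_pixel(x, y))
        frames.append(row)
    return frames

# e.g. mapping a 640x480 Syphon frame onto 8 strips of 100 pixels each:
# strips = frame_to_strips(canvas.get, 640, 480, 8, 100)
# ...then hand each row to the PixelPusher library to transmit over Ethernet.
```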
Lighting effects, LED-based displays and matrices, even volumetric displays then become powerful. And since it’s all networked, interfacing is quite flexible.
The hardware looks cool, as well. Inside is a powerful processor (a 32-bit ARM Cortex-M3 based LPC1758, if you must know).
US$350 will buy you the hardware and 100 pixels, and nicely enough, they say they’ll ship by June. $220 gets one strip and the hardware, to just toy around with this. (Cheaper premiums with just LEDs are available.)
Update: this isn’t fully open source hardware, but includes a lot of open source components. The library and “API” are open, but it appears the hardware itself is closed, at least initially. Still, having the API is a step, and I would happily invest in a closed solution if it works. I expect the bigger obstacle for practicality is that these LED strips tend to be expensive once you start combining a lot of them, but on a smaller scale, this could be absolutely spectacular.
G8 LABS’ CL:OC is a lighting installation shown recently in Köln, Germany. It’s a kind of moving sculpture, choreographing tubes of light as they form and dissolve the digits of the titular clock. It’s a beautifully elegant, minimal piece, the sort that demonstrates how much can be done with simple lighting fixtures by treating the fixtures themselves as part of the image.
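G8 LABS haven’t documented their control logic, but the classic seven-segment encoding shows how few fixtures a legible digit needs. A sketch of that mapping in Python – my own encoding, not necessarily how CL:OC arranges its tubes:

```python
# Tube indices laid out as in a classic seven-segment display:
#   -0-
#  5   1
#   -6-
#  4   2
#   -3-
SEGMENTS = {
    0: (0, 1, 2, 3, 4, 5), 1: (1, 2),           2: (0, 1, 6, 4, 3),
    3: (0, 1, 6, 2, 3),    4: (5, 6, 1, 2),     5: (0, 5, 6, 2, 3),
    6: (0, 5, 6, 4, 3, 2), 7: (0, 1, 2),        8: tuple(range(7)),
    9: (0, 1, 2, 3, 5, 6),
}

def tubes_for_digit(digit):
    """Which of a digit's seven tubes to light; the rest fade out."""
    lit = SEGMENTS[digit]
    return [i in lit for i in range(7)]

print(tubes_for_digit(4))  # [False, True, True, False, False, True, True]
```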