Kyle McDonald’s Kinect art – among the experiments that made the idea of putting depth cameras into phones suddenly appealing. Photo (CC-BY) the man, the legend, the Kyle.
It wasn’t so long ago that point-and-shoot cameras were big, dedicated affairs. Now, camera sensors are everywhere.
What’s next? Expect depth-sensing cameras like the Kinect’s to become as ubiquitous as camera sensors are in phones. And don’t listen to the analysts: if Apple is buying PrimeSense, they’re thinking iPhone, not only their Apple TV “hobby.”
The news for the open source art hacking community using this stuff? Bad. And good. But… more on that in a bit.
With touch staked out as an input method, vision and, more broadly, “perceptual computing” seem poised to reshape the way we interact with devices. Touch broke the boundary between machine and person imposed by keyboard and pointer – as Jobs famously said, using your finger as the pointing device. But, nearly a century after Leon Theremin first built musical instruments that sensed presence, the next interfaces may not require touch at all.
This week, Microsoft should have the headlines with Xbox One. But, having already seen the Xbox sensor, just as much interest may revolve around Apple’s reported acquisition of PrimeSense.
The new Kinect camera on Xbox One, superior to the original Kinect in every measurable way, isn’t based directly on PrimeSense technology. Microsoft was the key customer for PrimeSense’s unique expertise, so Microsoft’s move away from that technology set the stage for an acquisition. But PrimeSense did have a strategy. The best read I’ve seen on the company comes from Engadget, which charted its early history, how those first meetings with Microsoft went, all the way up to its future plans:
Life after Kinect: PrimeSense’s plans for a post-Microsoft future [Engadget, June 2013]
Now, it seems, Apple (pending final approval) has acquired PrimeSense – and the tech Microsoft orphaned. (Microsoft may regret leaving them out in the open like that.) Excellent coverage by Charles Arthur at The Guardian: Continue reading »
Digital fashion is beginning to spread.
The latest evidence is the dazzling light-up dress for Little Boots, a “Cyber Cinderella” garment that transforms into a blaze of colored LEDs during the encore of her current tour. The Creators Project (VICE) has a short documentary film on the process.
Little Boots, an early adopter of the Yamaha/Toshio Iwai Tenori-On grid instrument, here demonstrates that the costume can be an extension of that matrix of lights. (Your next challenge: a wearable monome.)
What’s significant about the designer in this case, New York-based Michelle Wu, is partly that she isn’t one of the usual suspects among artists championing wearable tech. Instead, Wu admits she turned to YouTube and online resources for tips. That’s good evidence of the ability of these techniques to go viral. And the value of that isn’t just copycats: Wu’s dress looks different from other entries, and I especially appreciate the design as a garment. (Wu may still be teaching herself which LEDs to use, but the Art Institute of Chicago grad has a deep resume of apparel design and development, including Moschino and Anna Sui internships and work with Heineken at Milan Design Week.)
The garment here doesn’t look like one designed by an engineer. It looks like a dress.
See Michelle Wu’s site for more; I really appreciate her designs. It’s not just the form of the dress that shows her skill: a keen eye for pattern and textile engineering, honed in engineering sweaters, comes through in her treatment of the LEDs. Continue reading »
Since its release, Syphon has demonstrated how visual materials can be more fluid on computers. The open source technology on OS X has changed the way people work with visual apps, become the key enabling tech that helped popularize streaming visuals to dedicated mapping applications, and, probably, convinced more than a few people to splurge on that MacBook Pro instead of a PC.
So, it’s always worth revisiting some of what it can do.
Earlier this year, Syphon co-creator vade demonstrated mapping with Syphon at New York’s now-legendary Eyebeam art research center, below:
(Via Le Collagiste [French])
Introduction to Syphon and Projection mapping workshop @ Eyebeam from vade on Vimeo.
In one concise video, vade (Anton Marini) illustrates the power of Syphon: Continue reading »
Can shiny, rotating 3D eye candy have a message?
It can if it’s in the masterful hands of motion graphics / visualist wizard Beeple, aka Mike Winkelmann.
His latest short is a retina-singeing, brain-stimulating imagining of transparency and data privacy, in the age of Snowden and the NSA, targeted Facebook ads, Foursquare check-ins, and Google Glass. And it works not simply because it’s impressive animation, but because that impressive animation reminds us that the science fiction universe we imagined is around us now – a Ghost in the Machine, Serial Experiments Lain, cyberpunk universe that can be utopian and nightmarish all at once. And it’s not something imposed on us: it’s something we created.
It’s an epic; it’s an indictment.
Transparent Machines™ from beeple on Vimeo.
Of course, this being Beeple, it’s also paired with some of our favorite music (hello, Hecq) and fluid, sharp sound design (Kyle Vande Slunt). And, this being Beeple, the visuals you see are Creative Commons-licensed, ready for your use.
I will repeat my earlier plea: tough as it is to build on something that looks so finished, I still want to see someone exploit that CC license and remix things into something uniquely their own. (Heck, maybe I have to take on that challenge myself, too.) If you do, let us know.
You can get the clips, and you can also get the Cinema4D files:
In the meantime, full specs on the latest Beeple etude: Continue reading »
So, without a line of code, you want to make something new, visually. You’ve got Max, you’ve got Pd, you’ve got vvvv. But for quickly cooking up generative visuals, dynamic interaction, live animation, and more from a clean slate, the other option had been Apple’s Quartz Composer, a tool that has lost a lot of steam (and acquired quite a few bugs) lately.
Unsurprisingly, many people want some fresh blood on this scene. And that’s where Vuo comes in. From the creators of the Kineme plug-ins, it’s a chance to start anew.
We’ve been eyeing Vuo with interest for a while. The nodal environment promises faster creation and new magical visual powers, a chance to see a from-the-ground-up modern tool amidst a scene mostly dominated by architectures created years and years ago. It could be the visual programming option that feels most modern.
And now, Vuo has finally reached a proper beta. Early reactions are overwhelmingly positive – this is something people really do anticipate.
That’s all the more impressive because the Vuo beta doesn’t yet do a lot of things you’d expect. A big deal-killer for many: there’s no video in/out. No audio, no OSC, no Syphon, no math expressions. Did I mention no video?
These things are coming; you can check the roadmap. But in the meantime, what Vuo can already do is enough to whet some appetites – and early adopters are even pulling out their wallets to fund the effort. It’s something this community really hasn’t seen before.
Years in the making, the Vuo beta – available with a paid subscription – includes enough goodness that you can likely get some value right away. And, as covered today, there’s even Leap Motion integration for easy gestural input. The developers say Vuo means “flow” in Finnish. So, what will get into your flow? Continue reading »
Making interactive Leap Motion compositions with Vuo from Vuo on Vimeo.
Leap Motion has been touting the possibilities of making things happen with the wave of a hand. But that gesture only becomes meaningful when something happens. Imagine if that “something” could be anything you wanted. For that, you need an open-ended interactive environment.
Enter Vuo, the live interactive multimedia composition tool that last month hit beta. Among the functionality Vuo has been adding is native support for Leap Motion. See the video at top for what that means.
Vuo isn’t the first environment to support Leap. But seeing Leap support, by default, in a visual programming environment is really lovely. It’s the ability to use your gestures as an input, as easily as MIDI or OSC – a sort of “you-to-digital” interface. And it seems that should be a standard for this stuff.
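To make the “gestures as an input, as easily as MIDI or OSC” idea concrete, here’s a minimal sketch of the kind of mapping an environment like Vuo does under the hood: raw hand-tracking coordinates normalized into 0.0–1.0 parameters and packaged as an OSC-style message. The coordinate ranges and the `/leap/hand/position` address are assumptions for illustration, not part of any official Leap Motion or Vuo API; an actual patch would send the result with whatever OSC library (or node) is at hand.

```python
# Sketch: normalizing hand-tracking coordinates into OSC-style messages.
# Ranges below approximate a tracking volume in millimeters above the
# sensor; they are illustrative assumptions, not a documented API.

def normalize(value, lo, hi):
    """Clamp value into [lo, hi], then scale it to 0.0-1.0."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo)

def hand_to_osc(x_mm, y_mm, z_mm):
    """Map a palm position (millimeters) to an OSC-style
    (address, arguments) pair of 0.0-1.0 floats."""
    args = [
        normalize(x_mm, -200.0, 200.0),  # left/right of the sensor
        normalize(y_mm, 50.0, 450.0),    # height above the sensor
        normalize(z_mm, -200.0, 200.0),  # toward/away from the screen
    ]
    return ("/leap/hand/position", args)

if __name__ == "__main__":
    # A palm centered over the sensor at mid height:
    address, args = hand_to_osc(0.0, 250.0, 0.0)
    print(address, args)  # -> /leap/hand/position [0.5, 0.5, 0.5]
```

The point is how little glue sits between a wave of the hand and a synth parameter or shader uniform: once the gesture is a normalized float, it’s interchangeable with any MIDI CC or OSC argument.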
Continue reading »
FLIGHT. from a TWiN thing. on Vimeo.
What makes this motion effect work is what you experience when you close your eyes.
In a clever short film, the directing team TWiN (Jonathan & Josh Baker) imagine shoes that take on a life of their own, as though channeling forces from Tron. Directing that virtual, digital world into ours, they propel the actor – and perhaps suggest a solution to LA’s transit problems. (Eat your heart out, Segway.)
But how do you make such an imaginary vision work? The post-production effects are pretty, but it really takes the sound to deliver their impact – enough so that I think it’s actually worth noting this story under the Motion heading rather than Music. The sound is what conveys the sense of motion.
Joseph Fraioli of jafboxsound writes CDM about the project, observing:
it’s sound design only, which is a rare thing – no music or dialog. There’s no production sound at all, either. All the foley, backgrounds, and, of course, sound design were done in post.
Full details: Continue reading »