The work by Theo Watson and team has been one of those magical technological revelations that make people say “oh – that’s what that’s for.”

Say “computer vision” or “tracking,” or show the typical demo of what these can do with interaction, and eyes glaze over. But make them work as puppetry, and somewhere deep inside the mechanisms by which we human beings interact with our world, something lights up.

With iteration, that first proof of concept just gets better. Theo writes to share that he and collaborator Emily Gobeille made a second version of the project. In “Puppet Parade,” the Interactive Puppet Prototype 2.0, the barrier between digital realm and human gesture gets a bit thinner.

But don’t just watch the edited demo – see what it looks like in action below, along with a brief visual look at how the system works. (Bonus: Theo wrote the tools on which the whole system was based – and shared them with a well-connected global community of hackers via his open source library.)

Description and credits:

Puppet Parade is an interactive installation that allows children to use their arms to puppeteer larger-than-life creatures projected on the wall in front of them. Children can also step into the environment and interact with the puppets directly, petting them or making food for them to eat. This dual interactive setup allows children to perform alongside the puppets, blurring the line between the ‘audience’ and the puppeteers and creating an endlessly playful dialogue between the children in the space and the children puppeteering the creatures.

Puppet Parade premiered at the 2011 Cinekid festival in Amsterdam. Puppet Parade is made with openFrameworks and the ofxKinect addon.
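If you’re curious what that combination looks like in code, here’s a minimal sketch – not Design I/O’s actual code, just the bare ofxKinect boilerplate a project like this would start from – that opens the camera and draws the raw depth image:

    // ofApp.h – a bare-bones ofxKinect sketch (illustrative only)
    #pragma once
    #include "ofMain.h"
    #include "ofxKinect.h"

    class ofApp : public ofBaseApp {
    public:
        void setup();
        void update();
        void draw();
        ofxKinect kinect;
    };

    // ofApp.cpp
    void ofApp::setup() {
        kinect.init();   // set up the device
        kinect.open();   // open the first connected Kinect
    }

    void ofApp::update() {
        kinect.update(); // grab new depth + RGB frames when available
    }

    void ofApp::draw() {
        // draw the raw depth image; brighter pixels are closer to the camera
        kinect.drawDepth(0, 0, 640, 480);
    }

Everything expressive in the installation happens on top of that depth data.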

Project page: design-io.com/?p=15

Credits:
Project by Design I/O – design-io.com
Exhibited at Cinekid Media Lab 2011 – cinekid.nl
Sound Design by: m-ost.nl
Video by Thiago Glomer / Go Glo – thiagobrazil.blogspot.com

Here’s the live version, unedited, for a better feel of what this project is like in person.

Note the interaction on two planes. (Kyle McDonald, another superstar of the Kinect community, points to this element in comments on Vimeo – thanks, Kyle.)
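If you want to play with that kind of layered interaction yourself, a crude way to approximate it with a single Kinect is to slice the depth image into bands. Here’s a sketch of the idea – the member names and threshold distances are arbitrary, and real values would depend on the room:

    // Assumed members: ofxKinect kinect; ofPixels nearMask, farMask;
    void ofApp::setup() {
        kinect.init();
        kinect.open();
        nearMask.allocate(640, 480, 1);   // single-channel masks
        farMask.allocate(640, 480, 1);
    }

    void ofApp::update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        nearMask.set(0);
        farMask.set(0);

        for (int y = 0; y < 480; y++) {
            for (int x = 0; x < 640; x++) {
                float mm = kinect.getDistanceAt(x, y);   // distance in millimetres (0 = unknown)
                if (mm > 500 && mm < 1500) {
                    nearMask[y * 640 + x] = 255;         // plane 1: people close to the wall
                } else if (mm >= 1500 && mm < 3000) {
                    farMask[y * 640 + x] = 255;          // plane 2: puppeteers further back
                }
            }
        }
    }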

And for a peek behind the curtain, you can also see the tracking that drives the interaction:

Filip at Creative Applications goes into greater technical detail on the libraries used and other specifics. Two tips in particular: motor control allows the system to adjust to participants of different heights, and using two cameras extends the usable depth of field.
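The motor-control part, at least, is exposed directly by ofxKinect. Something along these lines would let an installation re-aim the camera for shorter or taller participants – the key bindings and step size here are made up for illustration:

    // The Kinect's tilt motor accepts angles of roughly -30 to 30 degrees.
    // Assumed member: float tiltAngle = 0;
    void ofApp::keyPressed(int key) {
        if (key == OF_KEY_UP) {
            tiltAngle = ofClamp(tiltAngle + 2, -30, 30);
            kinect.setCameraTiltAngle(tiltAngle);   // physically re-aims the camera
        } else if (key == OF_KEY_DOWN) {
            tiltAngle = ofClamp(tiltAngle - 2, -30, 30);
            kinect.setCameraTiltAngle(tiltAngle);
        }
    }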

  • Steve Elbows

    Good stuff. It’s been a funny old 13+ months for the Kinect; in some ways the art/installation/experimentation stuff has progressed more than the gaming side for which the device was originally intended. As good as the skeletal tracking is, I think there are some weaknesses and limitations which are posing a challenge for developers. The Kinect has yet to prove that it can be used for more than party games, really – a fact further demonstrated by the delay to the Star Wars game.

    Meanwhile the enthusiasm of the Kinect hacking community has led to an amusing situation in which we are now almost spoilt for choice on both the hardware & software fronts. Microsoft have announced Kinect for Windows hardware and pledged that their SDK will be free for commercial use, albeit with some caveats that will annoy people, such as the higher price for the Windows version. I can’t say I’m overwhelmed by the Microsoft SDK, but I will judge again when the new version comes out on Feb 1. PrimeSense have released lots of updates for OpenNI & NITE, and a calibration pose is no longer required to get skeletal tracking working. And then there is the world of other open stuff which I am sadly a bit out of date on, as my focus has tended to involve skeletal tracking rather than doing stuff with the depth map myself.

    Although I have to say that the majority of the interesting stuff I’ve seen so far has tended not to use the skeletal tracking of either Microsoft’s SDK or the OpenNI & NITE stuff, with a few notable exceptions. I can see why: the depth data is a lot more expressive, rich and organic, and more natural for a number of different sorts of digital artists to work with. The skeletal tracking stuff can quickly take us into very time-consuming and niche areas of realtime 3D that far fewer people have tended to inhabit – in, for example, the VJing community – when compared to those working with 2D material. And then there is the quality of the tracking, which although impressive often becomes defined by its limitations and failings. Especially as once you start to use your body as a natural form of communication & control, it seems natural to do all sorts of things that, sadly, Kinect software can’t monitor effectively right now. I want it to know whether I’m turning my head, what my fingers are doing, some of what my face is doing, but it’s too much to ask to get all of this at the same time as reliable skeletal tracking, so the human spirit ends up feeling somewhat thwarted.

    So, as should have been expected right from the start, it’s going to take some more time for people to play around and see what can be achieved & feels right. I’ve no doubt that some of the limitations can be turned into strengths when the right context is found for the data the Kinect has no problem delivering. I did allow the limitations to dampen my early enthusiasm a bit too much, but as I have some time on my hands in 2012 I shall be having another go.

    A couple of other random thoughts on the Kinect subject:

    Last time I checked OSCeleton hadn’t received too much recent love from the dev, so perhaps something like Synapse for Kinect is a better choice now: http://synapsekinect.tumblr.com/
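    For anyone wanting to try Synapse from openFrameworks, it just streams joint data as OSC, so a quick ofxOsc dump is enough to see what it sends – I believe 12345 is its default output port, but check the Synapse docs rather than trusting my memory:

        #include "ofMain.h"
        #include "ofxOsc.h"

        // Assumed: 12345 is Synapse's default output port – verify this.
        ofxOscReceiver receiver;

        void ofApp::setup() {
            receiver.setup(12345);
        }

        void ofApp::update() {
            while (receiver.hasWaitingMessages()) {
                ofxOscMessage m;
                receiver.getNextMessage(&m);              // older OF versions take a pointer
                std::cout << m.getAddress() << std::endl; // e.g. per-joint position messages
            }
        }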

    This is probably the most impressive demo I have seen of the Kinect being used as a full-body remote performance controller, although as I’m well out of date with my experience of the lag between moving and getting final joint data from the Kinect, I can’t judge whether there has been any correction to the timing of this video to make the response seem tighter than it actually was:
    http://vimeo.com/groups/kinect/videos/28838606