It’s about time the maker movement tackled display technology.
Enter OSCAR (Open SCreen AdapteR). It’s the sort of super high-resolution 9.7″ LCD panel you’d expect to find trapped inside something like an iPad, but with an Arduino-based adapter, you can connect it directly to a computer.
Now, the actual “DIY” bit here is pretty simple: it’s just the interface. But even just having the interface is fairly useful. The display tech itself remains mass-market and mass-produced, but by mating that raw display panel to the interface, you can build your own projects – and there are clearly installation and other DIY projects just waiting to happen.
And people building installations and other projects get more than just the ability to output to the display. Because the Arduino-based hardware acts as the controller, you also finally get to control rotation, backlight brightness, and other parameters – just the stuff you’re missing when you try to mount iPads into installations.
There are various versions here, depending on your needs; all work via DisplayPort/Thunderbolt.
I’m not sure OSCAR would work for every application here – a tablet retains touch, crucially, which this lacks – but it could be a sign of things to come.
And, of course, it’s on Kickstarter. £65 gets you the basic hardware; pricing goes up from there. The UK-based project ends in about a week.
The narrative around Kinect and how hackers and artists opened it up has always been a little oversimplified. You may have heard something like this: thanks to a bounty, creative individuals “hacked” Microsoft’s Kinect camera and made it open.
That’s true, but it isn’t the whole story. While there is a “hacked” Kinect toolset, most of the creative applications you’ve seen make use of a richer set of frameworks from OpenNI. “OpenNI” referred to an alliance of individuals and organizations, meant to represent various interests, as well as what the group called on its website “an open source SDK used for the development of 3D sensing middleware libraries and applications.”
And now the kinda-sorta-open thing is about to be a huge problem, with the sudden demise of OpenNI and no recourse for anyone using its tools. When you hear long-bearded gentlemen complaining about the vagaries of the term “open source” or “open,” this is the very thing that keeps them up at night.
First, OpenNI was always dominated by a single company, PrimeSense. And rightfully so: PrimeSense developed the core technologies in the first Kinect camera, and had the talent who knew how to use it. But that in turn meant that OpenNI was heavily skewed toward that one vendor.
Second, “open” was used to describe the whole project, when the whole project wasn’t open.
And now the entire project is about to shutter. PrimeSense was bought by Apple, and the direct result of that acquisition (as I and many others predicted) was the demise of OpenNI.
In fact, with Apple splashing just this kind of creative technology all over their Website on the anniversary of the Mac, it’s deeply disappointing that Apple leadership isn’t intervening here. The closing of OpenNI is unceremonious and absent any kind of useful information. Visitors only get this:
“Software downloads will continue to be available until April 23rd, 2014 at which time the OpenNI website will be closed.”
Assuming both OpenNI and its site are dead, the question becomes how to redistribute and license the code. The issue is, there are two components. There’s the OpenNI SDK, which is under an open license and redistributable. But the good bits are part of what’s called “middleware” – in particular NiTE, under an additional, proprietary license. And that’s where the real magic of what Kinect does lies: the “skeleton tracking, hand point tracking, and gesture recognition.”
All of that is about to go away. And because NiTE is strictly proprietary, even the free (as in beer) downloads formerly used by artists are now off-limits.
This is likely to be an ongoing challenge with clever new depth-sensing camera technologies, because a lot of the “special sauce” remains proprietary. So far, the knowledge of how to make it work has been confined enough to the proprietary sector that there aren’t really open source alternatives.
That said, in a bizarre twist of fate, you can actually look to Microsoft (no joke) for understanding open source technology and working with the community, while Apple does quite the opposite. Continue reading »
If your computer can visualize massive three-dimensional worlds internally, why can’t it be just as savvy in multiple dimensions on output?
That’s the question asked by MWM. Like other mapping software, the idea is to get beyond just a single rectangle as output, supporting multiple outputs and 3D surfaces and warping.
What’s unique about MWM is that it sets out to be more fluid with surfaces and outputs, all running “purely” on your computer’s GPU. And the workflow becomes completely different.
Rather than crudely mapping some video textures to cubes, MWM lets you bring in entire 3D models and meshes, with complete support for lighting and texture mapping, and some decent MIDI and media support all its own.
- Real-time rendering and editing
- Lighting (area/ambient, direction, point, spot)
- Real-time camera collaboration
- Warping, plus horizontal and vertical edge blending
- Runs on Syphon (like MadMapper) on OS X; also supports Windows
Full specs are on the site, but those features are especially eye-catching. Continue reading »
From deep in Seoul’s underground, Maxqueen (Kloe) and Chang Park have forged an intimate audiovisual collaboration, in a rich dance of minimal, generated geometries in live music and image. Spanning Korea, Baltimore, Maryland, and now Berlin, they’ve played some of the biggest tickets in live techno (for Maxqueen) and digital media art (for Chang).
We’re thrilled to have them playing together – in their individual projects and then a duo – in Berlin on Friday night.
And we wanted to talk to them a bit about the craft of collaboration, and generating live performance through careful control of parameters in homemade patches in Max for Live and Max/MSP/Jitter.
Looking into their sets, what you see is a kind of joint modular synthesizer for music and visuals, a rigorous and disciplined focus on even the simplest parameters in order to make the fusion of optical and aural clear. We discuss their process, and – in a topic I’m sure is familiar to our readers in many corners of the world – how a scene is beginning to grow from the first seeds in Seoul.
First, let’s watch a bit of their work (hoping for even better documentation after Friday): Continue reading »
Installation work by Arístides Garcia, working on an ongoing thread of body and projection.
Get ready for the high Renaissance of the digital.
The individual ingredients remain the same. But the threads of transmedia work, spanning everything from traditional costume design and choreography to the latest generative projections, draw ever closer.
A new generation of artists treats projection in the theater as something elemental, something that demands exploration.
Vitruvian, premiering its latest iteration in Berlin Friday as part of our event unrender with LEHRTER SIEBZEHN, has gradually evolved in the way it interweaves opera, projected visualization, and interactive sound with dance theater. An “interactive opera” in one act, it unapologetically adds digital imagery as an additional performer. But the human body insistently remains, in movement and voice, bathed in electronic light but not upstaged.
Vitruvian(old) Platoon Kunsthalle Berlin 2013 from Tatsuru Arai on Vimeo.
Continue reading »
From VITRUVIAN, a one-act interactive opera, premiering a new version in Berlin on Friday. Photo courtesy the artist.
There’s not a word yet for visuals as event.
We know it when we see it. And we know it in other media. With music, there’s no question when something becomes performative, when the human element is something you can’t subtract. But in electronic visuals, in light and image, the awareness of what is emerging in the medium seems latent.
The narrow view of VJing and club visuals is dated. And disconnecting those media from generative and interactive work misses an explosive and dynamic new craft. Whether it’s clever work with optical analog and overhead projectors, or a delicately-constructed piece of interactive software, the question is simple: when can you take something in visual media and tell people “you had to be there”?
It’s long past time for CDM to renew its commitment to this question. It was the root of starting something called createdigitalmotion, and part of why music and motion are intertwined — finding that inflection point when a visual creation can speak like a musical instrument. And in the first chapter of that, I’m pleased to announce a curatorial partnership with Berlin’s LEHRTER SIEBZEHN, and curators Stefanie Greimel and Johanna Wallenborn. This will bring an event in Berlin fusing music with visual experience in installation and performance, and taking that showcase online for our readers internationally.
Here’s Johanna on what we set out to do:
UNRENDER is about showcasing a digital community physically in a space, bringing together remarkable visual artists whose work is usually not accessible to the general public. UNRENDER is really about the immediacy of experiencing art, having some of the most captivating artists performing live and enabling them to show their work outside of traditional contexts. In that way, UNRENDER offers unprecedented leeway for the visual scene.
Continue reading »
Edition one image. Made in Processing.