Beyond the tired repetition of look-alike work, collaborations between areas like architecture and visual design really can light up our cities in new ways.
I spent some time today touring Fab Lab Barcelona, a multi-disciplinary fabrication studio peopled by experts in architecture and design. What’s special about this space to me is not only the well-stocked array of tools, but a commitment to applying technology to new contexts. Just yesterday, the crowd-funding campaign wrapped up for Smart Citizen, a sensor board that will provide intelligent environmental data from around the world. And of course, this kind of work is the promise of a growing network of fab labs worldwide. People are not only making things: they’re making things with each other in new ways.
In the audiovisual piece “Scenarios,” Fab Lab Barcelona laser-cut a surface for a liquid interface of flowing data. What’s significant isn’t so much that it uses laser cutting and projection mapping – those two buzzwords – as the way the two in combination create a whole that would otherwise not have existed.
Here, the work is largely abstract, but it’s not hard to see this as a preview of reactive architecture to come, in which surfaces become mutable and pliable and can even pulse with information.
The elements here:
1. An iPhone, the source of the flowing data.
2. A horizontal terrain for projection mapping, produced with laser-cut foam.
3. A traditional projection surface in the rear, with wood over foam.
The beauty of open-ended, modular environments is the ability to create exactly what you need from the ground up, each time.
The problem with open-ended, modular environments is the need to create exactly what you need from the ground up, each time.
And that has left some readers of this site utterly baffled when opening TouchDesigner, the graphical environment we mention so often.
Media artist Mary Franck got tired of creating new modules and mappings for each performance, and wanted a framework that would let her build custom shows without reinventing the wheel. What she has produced is potentially a boon to the live performance community. It includes modules that cover the essentials – effects and generative visualizations, plus projection mapping via Kantan Mapper. But it’s also free and hackable, so you can modify it for your own needs. Mary also teaches, and says she’s aimed her work at students as well as experienced users.
The upshot is that there’s now something with the power of TouchDesigner that you can use out of the box. That opens up some massive possibilities for VJs and visualists who want to experiment but need a starting point. Mary describes her work to CDM:
Rouge is a live visuals platform and TouchDesigner programming framework that works out of the box for video performance and makes it simple to build and share your own TouchDesigner FX and realtime 3d compositions.
For those of you who already use TouchDesigner or have been meaning to try it out, here is a release of my own tools as a comprehensive platform, with a section specifically for generative realtime video.
Chiaroscuro is an installation work by Sougwen Chung, the Canadian-born, New York-based illustrator/media artist. Veering far from the mechanical minimalism of so much projection mapping, with its hard edges and rectangular conformity, Sougwen instead uses light and animation to draw outside the lines. Shimmering as though refracted through a digital ocean, the animation lights up the outlines of hand-drawn forms in one moment, then spills out onto the walls and floor in the next.
Set to Praveen Sharma’s exotic and evocative score of rushing pads and alien percussion, the effect is irresistible. It was for me (and many others) a real highlight of this year’s Mapping Festival, which commissioned the project for their exhibition at the Musée d’Art Moderne et Contemporain in Geneva, Switzerland.
Playing on the notion of light and shadow, Sougwen makes her drawings pop out of the flat white-box gallery like some kind of coral reef. The drawings themselves are black and white, a suspended bas-relief construction in paper, with lighting tucked in behind and projection augmenting the drawings. And her visual style translates beautifully to the work, organic, baroque flourishes blooming in florid detail.
Yep, your editor uses Processing. I had been working with textures using a library called GLGraphics and advocating the same to everyone I met – and now that functionality is in Processing proper. Here I am messing with textures live in my own work. Photo: Brigitte Faessler.
It’s been a long time coming, but this week Processing has reached 2.0, a major landmark release. For graphics, video, data, and usability, it’s a big leap forward.
The big story here is that the creative coding tool is both faster and simpler. An OpenGL engine, built by GLGraphics developer Andres Colubri, now powers everything. That makes extraordinary feats of graphic wonder possible (with shaders and the like), but it also irons out frustrating inconsistencies between renderers. The narrative is similar for video. Andres Colubri’s GSVideo library now handles video, with high-performance HD playback (and integration with shaders) for those who want it, but at last the simple “just play video” functions work as expected.
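To give a sense of how little code that now takes, here’s a minimal playback sketch of my own against the bundled video library – the file name clip.mov is purely a placeholder for whatever movie you drop into the sketch’s data folder:

// Minimal Processing 2.0 video playback, using the bundled (GSVideo-derived)
// video library. "clip.mov" is a placeholder file in the sketch's data folder.
import processing.video.*;

Movie clip;

void setup() {
  size(640, 360, P2D);                 // OpenGL-backed renderer in 2.0
  clip = new Movie(this, "clip.mov");
  clip.loop();                         // start looping playback
}

void movieEvent(Movie m) {
  m.read();                            // grab each new frame as it arrives
}

void draw() {
  image(clip, 0, 0, width, height);    // draw the current frame to the canvas
}

The same sketch runs whether you stay with the default renderer or switch to P2D/P3D, which is exactly the kind of consistency that used to require juggling libraries.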
The full changelog goes into greater detail – even this 2.0 release includes a bevy of bug fixes. Revisions are listed at GitHub.
You might have missed some of these links, as they now appear after a call for donations on the downloads page. That request is optional, though, and once you click past it, downloads on the site work the same way they always have.
Andres Colubri also has a couple of detailed stories on how shaders work, written during the alpha/beta process but still very relevant now.
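For a taste of what the new shader support looks like on the sketch side, here is a minimal example of my own (not from those articles) – it assumes a Processing-compatible fragment shader, hypothetically named blur.glsl, sitting in the sketch’s data folder, and applies it as a full-frame filter:

// Minimal Processing 2.0 shader sketch. "blur.glsl" is a placeholder name;
// any fragment shader written for Processing's filter() pipeline will do.
PShader blur;

void setup() {
  size(640, 360, P2D);              // shaders require one of the OpenGL renderers
  blur = loadShader("blur.glsl");   // fragment-only shader from the data folder
}

void draw() {
  background(0);
  fill(255);
  ellipse(mouseX, mouseY, 150, 150);
  filter(blur);                     // run the shader over the finished frame
}

For the GLSL side of the story – uniforms, texture coordinates, and vertex shaders – Andres’ articles are the place to go.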
oscillating continuum is an audiovisual haiku, an object creating a sonic architectural object. At first, it appears stunningly minimal, but up close there’s a terrific sense of detail to the glitching soundscapes and accompanying digital waveform visualization. Intricate particles swirl and then suddenly blink into explosions, to be replaced by sharp lines and gentle hums.
The piece then takes on a sense of resonating stillness, electrified equilibrium.
Ryoichi Kurokawa also has some exquisite live AV performances, tightly synchronizing visuals and sound into synesthetic wholes – crackling good (and crackling) stuff, full of violent bursts of sound and rhythm amidst a balanced overall world.
An observer takes in the installation in its sculptural form – though much of the sculpture lies in the screen and sound. Image courtesy the artist.
Will the new Kinect be hackable for artists and developers?
Some of the best speculation I’ve seen yet actually shows up here in CDM comments. So, it’s worth elevating this to a news story, just in case you missed it. In short, the answer appears to be yes. I’m hopeful in particular for Microsoft’s own official developer tools when Kinect hits Windows next year, as I think it’s really the PC world that will be the most expressive and open. From an anonymous CDM reader calling him or herself “n4cer”:
Though the port for the Kinect on Xbox One is proprietary, this is no different from the original Kinect. The port on the 360 slim supplied both power (Kinect’s power requirements are greater than what a standard USB port can supply) and a USB 2.0 connection. For Kinect 2.0, it’s power + USB 3.0.
There are Microsoft and third-party adapters available for splitting the power and data into an AC adapter and standard USB connector. In addition to PCs, the non-slim Xbox 360s required such adapters. The situation could be a bit different with Kinect 2.0 since the only ones sold separately from the Xbox One will be the PC version, which will ship next year. Though, I wouldn’t be surprised if adapters appeared prior to the launch of the PC version.
The hardware for Kinect 2.0 was developed internally by Microsoft.
“The highlight of the story is the CMOS sensor, which we developed internally,” Spillinger says. “This design was done completely, 100 percent on this site. This is brand-new technology. There is discontinuity between this technology and the first Kinect; from the technology perspective that we are using for depth, for 3D measurement. So this was done here. On this one, this was a complete Microsoft custom design, where our engagement is directly with the manufacturer. It’s not with any third party. We did the work. We do the qualification of the parts. We do the validation of the parts. We have done everything on this one.”