
One beauty of Processing is that it’s so portable, thanks to its intelligent, lightweight engineering, its open source nature, and the fact that it’s built on Java. Its elegant text-based language for describing interactive visuals can therefore become part of a workflow in other places.

That, in turn, has led people to look for ways of integrating Processing with Ableton’s new Max for Live (or more generally Max). You can certainly get Processing working with Live using MIDI or (via the freely-accessible Live API) OSC. Max for Live simply adds the convenience of Live-style devices and controls, as well as the chance to mesh work in Max with work in Processing. That could be a big help in the context of Live, because the device controls work the way Live controls do, and devices and their presets are saved with your Live sets. There are already two formal efforts.

l2p

Live 2 Processing, above, is a Max for Live plug-in that provides simple communication between Live and Processing. It seems most useful if you have Max patches for Max for Live that you want talking to Processing. Details:

http://www.unsound.com/M4L/

Meanwhile, Adam Murray is working on a new Max object (pictured, top) that promises greater integration. Adam, who already has a bunch of externals including one for Ruby support, wants to merge the Processing canvas with Max, allowing Processing’s sketches to be rendered directly to Jitter. For Max and Max for Live users, this would mean the ability to put sketches directly into Jitter patches or Live devices. Jesse Kriss had a similar effort with MaxLink, but Adam’s work is built on Max 5 and appears to go a bit further.

There’s nothing available for the Processing object just yet, but if you’re in the San Francisco area, you can catch a big event by the organization Overlap on December 15. For the rest of us, stay tuned.

And here’s some of Adam’s Processing work, if you’re curious:

The Max approach is not without some disadvantages. Once you start adding Max patches, you lose some of Processing’s portability, and you’re no longer working in a free tool, meaning you’re reliant on Cycling ‘74’s updates and support. Also, if you want to work in Max for Live, you need both a copy of Live 8.1 or later (LE, Starter, Lite, etc. don’t work), plus a copy of Max for Live, which can cost as much as US$300 if you don’t already own Max. On the other hand, for Max patchers and Live users, this could be a godsend, and it demonstrates some of the flexibility of Processing. If you give this stuff a try, do let us know about it.

  • http://www.max4live.info Michael Chenetz

    I am really interested in the integration of Processing and other languages within M4L. This gives people that have M4L other options that could be cheaper and/or better alternatives than just Jitter. I wonder if anybody knows about DIPS (http://dips.dacreation.com/). It seems like this works in M4L and might be a good solution too. I know nothing about it except for the fact that it was mentioned to me before.

  • http://createdigitalmusic.com Peter Kirn

    Well, two things to note:

    One, to use the fully-integrated solution, you still need Jitter.

    Two, if you're *not* really integrating them, you might just want to use OSC without M4L. Depends on what you're doing…

    Looks like DIPS may have fallen behind recent versions of QC and Max.

  • http://www.max4live.info Michael Chenetz

    I have been going the OSC route for most things. It definitely leaves things more modular. I have been thinking about an OscRoute engine that you send commands to, so that it becomes the "middleware" or director to other apps. This way you can create a standard methodology for communicating with the OscRoute engine, and then have something like an XML schema to define the mappings to external apps.

    hmmm
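A minimal sketch of that director idea, with a plain dict standing in for the XML-defined mappings (every name, address, and port here is hypothetical):

```python
import socket

# Hypothetical routing table; in the idea above this mapping would be
# loaded from an XML schema rather than hard-coded.
ROUTES = {
    "/processing": ("127.0.0.1", 12000),  # a Processing sketch
    "/vvvv": ("127.0.0.1", 4444),         # a vvvv patch
}

def route(packet: bytes):
    # An OSC message begins with its null-terminated address string;
    # match the address prefix against the table to pick a destination.
    address = packet.split(b"\x00", 1)[0].decode()
    for prefix, dest in ROUTES.items():
        if address.startswith(prefix):
            return dest
    return None

def serve(listen_port: int = 9000) -> None:
    # Receive OSC over UDP and forward each packet unchanged.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", listen_port))
    while True:
        packet, _ = sock.recvfrom(4096)
        dest = route(packet)
        if dest:
            sock.sendto(packet, dest)
```

Because the packets are forwarded unchanged, any OSC-speaking app (Processing, vvvv, Pd) can sit on the far side without knowing the router exists.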

  • http://thecovertoperators.org Andreas Wetterberg

    I really love the simplicity of getting OSC to stream from M4L to the world outside it.
    I'm not too keen on Processing, but I love using vvvv, and marrying M4L with vvvv is proving to be even easier than using MaxMSP *and* Ableton Live – as you'd expect.

  • http://compusition.com Adam Murray

    I just want to clarify that this will be a built-in object that ships with Max, so as a Max user you won't need to do anything special to get it. I can't say how soon it will be available, so let's just say "soon, hopefully".

    We're jumping the gun a little bit here because it's not even quite ready for beta yet. But please stop by the public preview that Peter mentioned if you are in San Francisco on the 15th.

    There's certainly nothing wrong with OSC-based solutions for Processing integration. There are, however, some nice benefits to embedding directly in Max, including the ability to convert a sketch frame to a Jitter matrix, easy interaction with multiple inlets/outlets without building up any OSC communication infrastructure, and the ability to use Cycling 74's mxj Java API. Also, I find it's a good environment if you want to run multiple sketches side by side, since the Max object provides some convenient ways to manage windows, which is otherwise somewhat painful inside pure Processing. Basically, it's a more streamlined way to work with Processing and Max together. If you prefer OSC, nothing wrong with that!

  • http://ericparren.net Eric

    So we'll be able to convert Processing frames into jit.matrixes?!
    That's awesome news. I've been looking for ways to do that and got into a big mess of extending Jitter classes and whatnot. Stuff is over my head. Hope you get to release some form of beta real quick! Can't wait.

  • http://createdigitalmusic.com Peter Kirn

    Oops, had composed a comment and I think forgot to hit send…

    Adam, this work looks well awesome; I look forward to seeing where it goes and giving it a try!

    So, we have two very different solutions — maybe I hadn't had all my coffee and didn't make that clear.

    1. Transmitting OSC from Live elsewhere with Max for Live. It says "Live2Processing" but could just as easily say "Live2OSC" or "Live2Anything." And the advantage of this approach is having this integrated as a device. Not *as good* as having Ableton do native OSC implementation, but it could be a good starting place for specific applications or seeing how to add OSC transmission to your existing patches.

    2. Integrating Processing into Max itself (for Max for Live or standalone Max patches), which is what I gather Adam is working on. (And Adam, I had no idea this would be for C74!) The advantage here even if you don't want to muck around much with Jitter is the ability to manage multiple Processing sketches, and to turn Processing sketches into M4L devices. If you want to drop a device into Live that's a 2-channel Processing sketch mixer and sync it to beats in Live, I have to say this is probably the way to go. ;)

    The larger application would be, if you're fond of Jitter matrix processing, you can mix Processing and Jitter and have the best of both worlds. There, I'm a little less convinced personally, because I'm pretty happy doing everything in Processing. But that's just the way I work, and I can see why others might work in a different way.

    The point I wanted to make – and this is not a criticism of this effort or Max – is that one strength of Processing is that it can run anywhere. It can go in a browser, in JavaScript, on a Linux netbook, and soon on phones and embedded devices. You can make your work in Processing and turn it into a standalone application that can run anywhere, and edit it on any machine. You can keep your sketches in the cloud, have your laptop catch fire, and then download Processing from the site and go right back to editing without registering or authorizing or anything like that. That may not be the case once you start cobbling it together with Max and the like.

    Obviously, there are still many reasons you'd want to do this in spite of that warning, or people wouldn't be trying it out. I just have to point out that to me that is an advantage of working in Processing. But that's another reason this stuff is healthy – it gives us an extra kick in the pants to figure out, say, ways of making environments in which to run Processing sketches.

    One example of how to handle multiple Processing sketches — although it *could* use more of these multiple windows:
    http://code.google.com/p/processing-mother/

  • http://blog.mattbot.net Mattbot

    @Peter

    I'm alpha testing Adam's Processing object and I think your criticism of Processing running within Max is a bit unfair. On the surface, it seems focused on portability concerns.

    I agree that one of Processing's strengths is its portability, but I also think Processing running within Max represents an expansion of this strength by increasing its potential install base. Max/Jitter has poor native support for vector-based drawing operations, and Processing's strength in this area will be a great boon for Max patchers. I suspect this addition will only swell the ranks of Processing coders in the wild. An alternative solution for improved vector drawing in Max would be for C74 to implement a proprietary language for this task. Assuming one is OK with the idea of using Max at all, the choice of Processing seems pretty ideal.

    Even in the alpha version, getting my existing Processing sketches to run within the Max environment was pretty simple, and coding them to work both in Max and as stand-alones was simple as well. Development is still done in the Processing IDE. The main advantage for me in using Processing within Max is the option to draw directly to a Jitter matrix. Adam tells me that in the beta version, runtime environment considerations will be totally transparent.

    Max has a lot to offer Processing too. Coding a video mixer/switcher in Processing is definitely not one of its strengths. Neither is GPU-accelerated performance. Making a fast video mixer/effects system that runs live Processing code is pretty hot.

    It sounds to me like the subtext of your criticism is with choosing to work in a proprietary development environment at all, which is an indirect criticism of Max and the [processing] object. Why not just come out and say that instead of raising vague portability concerns about a Processing runtime that hasn't even dropped a public beta yet?

  • http://createdigitalmusic.com Peter Kirn

    @Mattbot:
    I don't think it's unfair to talk about the differences — and tradeoffs – inherent in two different approaches.

    I think one question that is up for debate is whether each environment benefits more from being meshed together with something else, or solving those problems directly. So, on the Jitter side, it may be great to have Processing in there. At the same time, if I'm going to choose Max over Processing — a radically different workflow with patches in place of code — shouldn't Max have its own solution to some of the things Processing does?

    Conversely, I can speak directly to the Processing example. I think the things you want to do — GPU-accelerated performance, video mixing — absolutely need to be implemented natively in Java (or Java-wrapped native code) inside Processing, so you don't *have* to rely on another tool. The short answer to these is this library:

    http://users.design.ucla.edu/~acolubri/processing

    As it happens, I've been doing video mixing / switching / GPU-accelerated operations in that library, and it's slated to become a default renderer in a future version.

    Of course, by "debate," I mean "debate." I think it *can* be valuable to have someone working on mashing the two together, even as other folks work on making each as valuable on its own. It makes sense to work on both those approaches.

    Nor am I just pulling out this "portability" issue for arbitrary reasons. I've had laptops suddenly decide to stop working and wound up moving my entire environment – with the ability to edit – somewhere else, regardless of OS. It's real.

    But these are tradeoffs. Processing can be fast and talk to the GPU, but it's true other things can be faster or easier (certainly for specific tasks). See also OpenFrameworks, which offers a similar development environment and syntax but allows you to use native code in place of Java; memo's doing great work with OpenCL most recently.

    And I can't overemphasize this enough — if you want to use Max for Live, or if you're predominantly a Max coder, this looks terrific.

    I personally think mash-ups are fascinating and if I thought this weren't worth anyone's attention, I wouldn't talk about it.

    Processing is licensed under an LGPL license. It's there specifically so that you can have the freedom not only to incorporate it into open source projects, but into proprietary projects, as well. I think far too little attention has been paid to the value of this kind of permissive open source license. Copyleft has its place, but different users have different needs, and different projects have different needs. In this case, there's obvious benefit to a mash-up of Max and Processing for certain users and certain use cases, and so the "Lesser" in the LGPL makes this possible. I think that's good.

  • Yancy

    Nodebox has an OSC library if you're interested in the more vector side of things. It's OS X-only at the moment, but Nodebox 2 is shaping up nicely, is ported to Jython for access to Java and Python code, and will run on Windows too.

  • http://createdigitalmusic.com Peter Kirn

    Also worth noting:
    http://openendedgroup.com/field/wiki/ProcessingIn

    – and The Field is doing things Processing currently can't, like responding to code live. Mac-only, but could be ported.

    They, in turn, have also tried a bridge to Max. But I think technically, some of the strength here comes from the Python and Java platforms and the stuff that's built atop them. In the case of The Field, they're able to support JavaScript, embedded Java, Groovy, Clojure and Scala, all thanks to the Java VM beneath and the open work done atop them.

    Of course, an artist looking at that list — or even Max and Processing — is likely to be overwhelmed with the number of choices. So I think, as with this combination of Max and Processing, the key is that there are this many choices because people have different needs and preferences.

  • http://blog.mattbot.net Mattbot

    @Peter

    The bit I specifically took issue with was this paragraph:

    "The point I wanted to make – and this is not a criticism of this effort or Max – is that one strength of Processing is that it can run anywhere. It can go in a browser, in JavaScript, on a Linux netbook, and soon on phones and embedded devices. You can make your work in Processing and turn it into a standalone application that can run anywhere, and edit it on any machine. You can keep your sketches in the cloud, have your laptop catch fire, and then download Processing from the site and go right back to editing without registering or authorizing or anything like that. That may not be the case once you start cobbling it together with Max and the like."

    You can still keep your sketches online and download them to the Plan B laptop, etc. They will still run independently of Max. In the context of using Live, Max's portability versus Processing's portability is a bit moot. You're still stuck in Win/Mac land with authentication to deal with as you scramble to get Live working. A bit of preplanning and proactive system administration can offset the effects of having to do a system rebuild in the wild, regardless of the platform. (It's happened to me too.)

    So it seems to me that your real point is whether or not to go totally open source, as in creating a Processing-only solution in our video mixer example. It just seems a bit unfair to introduce a criticism laboring this point without establishing the context of the proprietary vs. open source software debate more clearly.

    On the topic of meshing versus a single solution, I'd love a single solution but I haven't seen it yet. Part of the attraction of using Max and Processing is that they are low-hanging fruit. Trying to write a JavaScript or Ruby drawing framework for [jit.lcd] is painful. Processing does it better. Recreating a GUI and video playback engine on par with Max's in Processing isn't what I'm really interested in either, as they already exist in Max and Jitter. If I had to start coding stuff like that from the ground up, I'd bypass the JVM completely and reach for C/C++ and/or Objective-C.

  • Pingback: kodama.pixel » Blog Archive » A Lesson In Projection

  • http://blog.mattbot.net Mattbot

    @Yancy Nodebox looks nice! I like that it's in Python. I'll be keeping an eye on this.

  • http://www.onar3d.com onar3d

    Following this argument, I would like to add some more formal (academic?) reasoning as to why integrating Processing with Max/MSP might be a good idea.

    There is significant published work on the limitations of dataflow languages in general (i.e., Pd, Max, vvvv, etc.). The two main problems this family of languages faces are that a programmer cannot define his/her own data structures, and that there is no efficient way of implementing recursion. Both happen to be crucial when programming procedural computer graphics, and are problems a textual language like Processing does not have.

    The common solution adopted is to allow a programmer to write his/her own 'box' in a textual language, implementing the problematic code. Max/MSP's way around the problem is to allow you to write JavaScript inside a box, but the speed of execution is so low that it is unusable for anything as demanding as real-time procedural graphics.

    Integrating Processing into Max/MSP therefore not only allows more efficient procedural computer graphics in a dataflow language, but goes further, in that it helps address a general limitation inherent to Max/MSP's dataflow heritage.

  • http://blog.mattbot.net Mattbot

    @onar3d

    You're certainly right about Max's language limitations. I had to use C to create a Max object in order to implement Euclid's algorithm, which is recursive. (You can get it here under the MIT license: http://github.com/Mattbot/mattbot.euclid)

    Max includes a Java virtual machine object so it already has access to an external language more suited to things like recursion. Processing and Adam Murray's excellent [ajm.ruby] object are both built on top of Max's included JVM.
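For readers unfamiliar with it, Euclid's algorithm is a one-line recursion — exactly the shape that is awkward to express in pure dataflow patching but trivial in a textual language. A plain-Python sketch (unrelated to the [mattbot.euclid] source):

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: recurse on the remainder until it reaches zero.
    return a if b == 0 else gcd(b, a % b)

print(gcd(48, 18))  # prints 6
```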

  • http://www.onar3d.com onar3d

    @Mattbot

    Impressive that Max includes a JVM object – that had evaded me! I thought it only supported JavaScript, as opposed to full-on Java.

    That alone covers a great deal of ground, with the exception of allowing easy integration of generative visuals into the Jitter workflow – which seems to be coming as well!

    A very good idea that hasn't yet been implemented would be to make it even more extensible by embedding Processing into a FreeFrame plugin instead, which in turn could be embedded in Max/MSP or any other compatible host.

    There was talk of embedding my Mother program into freeframe, but ultimately it turned out to be potentially too time consuming for me to be able to do it as a spare time project, the way I had developed Mother.

    There's a great deal of cool visualist software out there, it's about time more than just control data is shared between systems!

  • http://createdigitalmusic.com Peter Kirn

    Yeah, absolutely – I think this is important news or I wouldn't have posted it.

    I think people will naturally have a different take on this coming from the Max angle than from the Processing angle, as well. For Processing's part, I think a number of these problems are solvable within Processing and Java, so I'll put my money where my mouth is and as I'm doing more with it, see if I can't share the framework I'm building in a way it may be reusable to others.

    The one big deficiency in Processing I see remains audio and signal processing, but there I think the most practical solution will be to continue finding ways to embed open source audio in Processing. That's not any philosophical statement about open source versus proprietary, either – it would mean that you could integrate it with the Processing build environment, which is a practical consideration.

  • bilderbuchi

    having a FFGL plugin for Processing (or other stuff like vvvv or Pure Data) would be great!
    imagine being able to use, e.g., Resolume Avenue with a source plugin playing all your generative goodness along with video files!