Calling it a day now – today’s work has been in fits and starts and has got progressively more technical as the day has gone on, but I’ve learned loads and feel that every step is taking me closer to finalising this patch.
After yesterday’s thoughts, I made a few tweaks to the patch to incorporate more static moments: it now has Big Red Buttons not only to pause all the automation, but to restart it all, or to toggle it when some parts are on and some are off. I think this is really helping the balance of the resulting pieces.
I made my first ‘final’ improv (there’ll probably be about ten of these, unless I end up with something I feel is perfect earlier), and I was very pleased with it – a much better balance between fast and slow, and just better overall. However, while my initial idea was to have the video content entirely silent, as per HearSee, in practice, and for a more extended piece, having no sound at all felt barren and bleak. So I started to tweak the Vizzie patches: I added a volume control to the player, then some automation so that the volume of the wave video was tied to the crossfader that switches between it and the (silent) paper video. I quite like this idea (although I’m actually thinking of inverting the effect, so that the sound is clearest when you can’t see the waves at all), and in a test run (improv 2) it worked quite well. BUT! *gasp* The sound wasn’t being saved along with the video recording – I could hear it while I was improvising, but nothing was saved in the MOV. SO disappointing.
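For the inverted version, the mapping is just one-minus: if the crossfader puts out 0.–1. (with 0. meaning the waves are fully visible), the wave video’s volume should be loudest when the waves have faded out. A rough sketch of the object chain (orientation of the crossfader and exact object boxes are my assumption – treat as a starting point, not gospel):

```
[crossfader value 0.–1.]     <- 0. = waves fully visible (my assumed orientation)
         |
      [!- 1.]                <- reverse subtraction: outputs 1. minus the input
         |
[volume control on the wave-video player]
```

If the crossfader happens to run the other way round, the [!- 1.] box just gets deleted and the value is patched straight through.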
So it turns out that Max is a wee bit snooty about the audio output of videos. After quite a lot of digging, I worked out that I needed to go into the player patch again and give the player objects individual names, so that I could pick up a video’s audio output as an MSP signal rather than as a component of a Jitter object (possibly my terminology is wonky here and people will be cringing, but I know what I’m trying to say!). That signal can then be routed into a DAC for output, recording, etc.
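For anyone trying the same thing: as far as I can tell, the naming mechanism is the movie player’s ‘sound output component’ (soc) attribute – you set it to a name of your choosing, and a matching spigot~ object then receives that movie’s soundtrack as ordinary MSP signals. Something along these lines (sketched from memory, and “waves” is just a name I made up, so check the spigot~ help file):

```
[jit.qt.movie @soc waves]    <- @soc names this player's audio output
         |
   (matrix out to the Vizzie video chain as before)

[spigot~ waves]              <- same name; soundtrack comes out as L/R signals
     |       |
   [dac~ 1 2]                <- now routable/recordable like any MSP audio
```

The nice side effect is that once the audio exists as MSP signals, the volume automation can happen in signal-land too, rather than inside the player.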
So I tested this out, using Jitter Tutorial 27’s sample patch and poking and prodding my way through it, and behold, I was able to get my audio coming through the DAC rather than just from the video playback. Woot!
And this is where I gave up for the day. I seem to have two options. One: record the video and audio separately (using the existing Vizzie Recordr object for the video and standard Max/MSP code for the audio) and then smoosh them together in Final Cut Express. Two: there’s a Jitter object called jit.vcr which records audio as well as video. I’d like to have a go at jit.vcr because it seems cleaner, but it’s not a simple case of swapping it in for the jit.qt.record object inside the Vizzie Recordr (I tried) – the inputs are different – so I think I’ll have to build that part from scratch. Probably a good thing: playing with the Vizzie modules, and now adding things to them, has clarified a lot for me about how Jitter works. But it’s going to be fiddly, and not something to attempt when I’m tired! I might duplicate the Recordr patch and see if I can turn it into a video + audio version that I’ll be able to easily reuse, rather than a cobbled-together thing. (I’ve been using the Snippets function for a few days now, after finding it in a YouTube video tutorial – SO much easier than always having to dig out old patches to reuse your code!)
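For future me, the rough shape of the jit.vcr version (untested, pieced together from the reference pages, so the inlet layout may need fixing – and “myimprov.mov” is just a placeholder filename):

```
[video matrix feeding the Recordr]      [spigot~ waves]   <- audio as MSP signals
                |                            |      |
             [jit.vcr]   <- matrix to left inlet, L/R signals to the signal inlets
                |
  "write myimprov.mov" / "stop"         <- messages to start and stop recording
```

This is the difference from jit.qt.record in a nutshell: jit.vcr expects audio signal inlets alongside the matrix, which is why it can’t just be dropped into the existing Recordr patch.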
So I am optimistic at this point that I might have a final version of this section done before I go to Cambridge on Friday, which will be good. This patch is taking quite a bit more time than the audio ones, but I think it’ll pay off with much faster work – both in concept and development – on the next video section.