I’ve made good progress in understanding what I can do with OpenFrameworks, how I can get it to interact with PureData, and what its limitations are. Learning OpenFrameworks has proved challenging: although there are plenty of ‘getting started’ tutorials that make the basics look easy, there’s a real jump up to the more complicated stuff, which requires a good depth of understanding of the world of C++.
Some of the real sticking points so far have been getting OF to work with a range of external ‘add-ons’. I feel like I’ve wasted too much time dealing with fiddly programming issues, and not enough time prototyping and trying things out.
The setup I’ve got (more or less) working, which lets me try things out without relying too much on my shaky programming knowledge, is as follows:
OpenFrameworks loads an image and performs some analysis and calculations on it. The resulting data is passed over the local network (or via MIDI) to PureData, which creates the musical data. Sounds are generated using a standalone software instrument (NI Kontakt at the moment).
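To give a concrete idea of the network side, here’s a minimal sketch of how the OF end might send an analysis value to PureData, assuming the ofxOsc addon is used for the link. The `/image/brightness` address, the port number and the image filename are all made up for illustration, and the ‘analysis’ is just a crude average-brightness reading.

```cpp
// minimal sketch (ofApp.cpp) -- assumes the ofxOsc addon is in the project
// and that PureData is listening for OSC on port 9000 (both are assumptions)
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofImage img;
    ofxOscSender sender;

    void setup(){
        img.load("test.jpg");              // placeholder image name
        sender.setup("localhost", 9000);   // Pd running on the same machine
    }

    void update(){
        // very crude 'analysis': average brightness, sampled every 10th pixel
        float total = 0;
        int count = 0;
        for(int y = 0; y < img.getHeight(); y += 10){
            for(int x = 0; x < img.getWidth(); x += 10){
                total += img.getColor(x, y).getBrightness(); // 0-255
                ++count;
            }
        }
        float avg = (count > 0) ? total / count / 255.0f : 0;

        ofxOscMessage m;
        m.setAddress("/image/brightness"); // address made up for this example
        m.addFloatArg(avg);
        sender.sendMessage(m);
    }
};
```

On the PureData side something like a [netreceive] / [oscparse] pair (or the mrpeach OSC objects) would pull the value back out of /image/brightness.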
Eventually everything will be contained inside a single OF application, and then ported to iOS…
The next step is to start experimenting with sound generation, working with a set of rules like these:
- Colour content relates to tonal quality: more reds = warm sounds, more blues = cold sounds
- Average brightness relates to key: brighter = major, duller = minor
- Melodic content will be derived from an algorithm. I’m going to try something simple to start with, then something more complicated, like using John Conway’s Game of Life to generate a Markov chain governing the probability of a note being generated (see the sketch after this list).
- Rate of change of a parameter (like hue, or saturation) will govern complexity of melodic content.
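To make the Game of Life idea a bit more concrete, here’s a rough sketch of the kind of thing I have in mind: a small wrapping grid is stepped with the standard Life rules, and the density of live cells in each generation is read as the probability of a note firing on that step. The grid size and the density-to-probability mapping are placeholders, and turning this into a proper Markov chain is still to be worked out.

```cpp
#include <array>

// Tiny Game of Life grid; the fraction of live cells in each generation is
// treated as the probability of a note firing on that step. Grid size and
// mapping are placeholders -- this is just a sketch of the idea.
const int W = 16, H = 16;
using Grid = std::array<std::array<bool, W>, H>;

int liveNeighbours(const Grid & g, int x, int y){
    int n = 0;
    for(int dy = -1; dy <= 1; ++dy){
        for(int dx = -1; dx <= 1; ++dx){
            if(dx == 0 && dy == 0) continue;
            int nx = (x + dx + W) % W;   // wrap around the edges
            int ny = (y + dy + H) % H;
            if(g[ny][nx]) ++n;
        }
    }
    return n;
}

Grid step(const Grid & g){
    Grid next{};
    for(int y = 0; y < H; ++y){
        for(int x = 0; x < W; ++x){
            int n = liveNeighbours(g, x, y);
            // standard Life rules: survive with 2 or 3 neighbours, birth with 3
            next[y][x] = g[y][x] ? (n == 2 || n == 3) : (n == 3);
        }
    }
    return next;
}

// probability of generating a note on this step = fraction of live cells
float noteProbability(const Grid & g){
    int live = 0;
    for(const auto & row : g)
        for(bool c : row)
            if(c) ++live;
    return live / float(W * H);
}
```

Each tick the grid would be stepped and noteProbability() compared against a random number to decide whether PureData gets a note message.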
I’ll use a standard major / minor key, with weighting to favour the ‘easier’ notes.
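By weighting I mean something along these lines: each degree of the chosen scale gets a share of the probability, with the root, third and fifth favoured. The actual weights (and the scale definitions) are placeholders, just to show the mechanism.

```cpp
#include <vector>
#include <random>

// Pick a MIDI note from a major or minor scale, with the 'easier' degrees
// (root, third, fifth) weighted more heavily. The weights are placeholders.
int pickNote(int rootMidi, bool majorKey, std::mt19937 & rng){
    static const std::vector<int> majorScale = {0, 2, 4, 5, 7, 9, 11};
    static const std::vector<int> minorScale = {0, 2, 3, 5, 7, 8, 10};
    const std::vector<int> & scale = majorKey ? majorScale : minorScale;

    // one weight per scale degree: root / third / fifth favoured
    std::vector<double> weights = {4, 1, 3, 1, 3, 1, 1};

    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    return rootMidi + scale[pick(rng)];
}

// usage: std::mt19937 rng(std::random_device{}());
//        int note = pickNote(60, /*majorKey=*/true, rng);  // key of C major
```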
Hopefully I’ll be able to post some examples soon.