I’m making slow and steady technical progress on the project: I’ve started writing simple applications based on openFrameworks in Xcode. I’m going to do the graphical stuff with this, ‘wrapping’ a Pure Data patch into the application. PD will deal with the sequencing and, ultimately, the sound generation too.
Here are the results of an OF application that analyses an image and graphs the average hue, saturation and brightness of each column of pixels:
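For anyone curious about the analysis step, here’s a rough standalone sketch of the idea, written without the openFrameworks dependency so it stands on its own. The RGB-to-HSB conversion and the row-major pixel layout are my assumptions about the approach, not the actual OF code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hue, saturation and brightness, all normalised to [0, 1].
struct HSB { float h, s, b; };

// Standard RGB -> HSB conversion (inputs in [0, 1]).
HSB rgbToHsb(float r, float g, float b) {
    float mx = std::max({r, g, b});
    float mn = std::min({r, g, b});
    float d  = mx - mn;
    float h  = 0.f;
    if (d > 0.f) {
        if (mx == r)      h = (g - b) / d;
        else if (mx == g) h = (b - r) / d + 2.f;
        else              h = (r - g) / d + 4.f;
        h /= 6.f;
        if (h < 0.f) h += 1.f;
    }
    float s = (mx > 0.f) ? d / mx : 0.f;
    return {h, s, mx};
}

// Average H, S and B over each column of a width x height image,
// with pixels stored row-major as r,g,b triples in [0, 1].
std::vector<HSB> columnAverages(const std::vector<float>& px, int w, int hgt) {
    std::vector<HSB> out(w, {0.f, 0.f, 0.f});
    for (int x = 0; x < w; ++x) {
        for (int y = 0; y < hgt; ++y) {
            int i = 3 * (y * w + x);
            HSB c = rgbToHsb(px[i], px[i + 1], px[i + 2]);
            out[x].h += c.h; out[x].s += c.s; out[x].b += c.b;
        }
        out[x].h /= hgt; out[x].s /= hgt; out[x].b /= hgt;
    }
    return out;
}
```

In an OF app the per-column averages would then just be drawn as three line graphs across the width of the image.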
Here’s a prototype PD patch that generates sequences whose complexity (that is, the number and variation of notes in the sequence) can be controlled probabilistically; the variations in the image data seen above will be ‘mapped’ to this sequence generator.
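The probabilistic idea could be sketched in C++ roughly like this. It’s a guess at the mechanism rather than a transcription of the patch: a single complexity value in [0, 1] controls both how likely each step is to fire and how far a note may wander from the root.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Hypothetical probabilistic step sequencer. Returns MIDI note numbers,
// with 0 meaning a rest. Higher complexity -> denser sequences and
// wider intervals; complexity 0 -> silence, complexity 1 -> every step fires.
std::vector<int> generateSequence(int steps, float complexity,
                                  int rootNote, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> coin(0.f, 1.f);
    std::vector<int> seq(steps, 0);
    // Interval spread grows with complexity, up to an octave (12 semitones).
    int spread = 1 + static_cast<int>(complexity * 11.f);
    std::uniform_int_distribution<int> offset(0, spread);
    for (int i = 0; i < steps; ++i)
        if (coin(rng) < complexity)           // probabilistic note-on
            seq[i] = rootNote + offset(rng);  // probabilistic pitch choice
    return seq;
}
```

In the patch itself this kind of thing would be done with `random` and `moses` objects; the point here is just that one scalar from the image analysis can steer several properties of the sequence at once.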
When this prototype is complete, it should be able to generate musical sequences that are unique to each image presented. If this method of mapping image data to music works, I’ll develop it further, adding more sounds and more sophisticated sequence-generation processes. It should then, in theory, be possible to substitute something else for the image data too.