As part of my Master’s coursework I’ve decided to create a machine which essentially translates colour into sound. Built around the programming environment MAX, this machine will:
- Analyse a still image or video and create a soundscape from the colour information it contains.
- Allow some form of audience interaction – initially, I think, through a live video feed, and perhaps, if I have time to build it, via some kind of tangible interface letting the user select particular colours, images, or other aspects of the process.
- Create sounds that are musically pleasing and reflective of the colour content of the image.
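To make the idea above concrete, here is a minimal sketch of one possible colour-to-pitch mapping – nothing from the actual MAX patch, just an illustration of the principle. The function name, the base frequency, and the choice of hue as the pitch-driving dimension are all my own assumptions; quantising to semitones is one simple way of keeping the output musical rather than a continuous sweep.

```python
import colorsys

def colour_to_frequency(r, g, b, base_hz=220.0):
    """Map an RGB colour (0-255 ints) to a frequency in Hz.

    Hypothetical mapping: hue picks the pitch, quantised to the
    nearest semitone within one octave above base_hz. Saturation
    and brightness could later drive timbre and amplitude.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    semitone = round(h * 12)            # hue 0..1 spans 12 semitones
    return base_hz * 2 ** (semitone / 12)

# Pure red (hue 0) sits at the base pitch; cyan (hue 0.5) lands
# half an octave (a tritone) above it.
print(round(colour_to_frequency(255, 0, 0), 2))   # 220.0
print(round(colour_to_frequency(0, 255, 255), 2)) # 311.13
```

A real version would average or cluster the colours across a whole image (or video frame) rather than sonify a single pixel, and might quantise to a major or minor scale depending on the mood the colours suggest.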
So – why?
Well, I want people to be able to explore the relationship between colour and sound. Imagine if you could look around your house and ‘hear’ the colour palette of your home. Or that you could choose a place, or a familiar image, and listen to music generated from what that place or picture looked like.
The link between colour and emotion – ‘colour psychology’ – is well-trodden ground. We can all appreciate the association of bright, strong colours like red and yellow with happiness and upbeat emotions, or the association between ‘blue’ and ‘sadness’ (singing the blues, for example). There’s also a clear link between music and emotion – major chords sound happy and minor chords sad, and bright, clear tones feel upbeat where dull, muted ones feel sombre. I want to try to link these two things in a way that people can understand and engage with.