toyradio



Category Archives: Colourorgan

‘Colourorgan’ pictures

Posted on June 25, 2014 by irdandan

Posting some pictures from last year’s ‘colourorgan’ project – here a yellow chair is offered up, the software presenting a corresponding yellow ‘matrix’ on screen (that was used to drive an audio sequence) and an image tagged as ‘yellow’ from a Google image search:

[Image: Martin_chair]

Here’s a couple from the IOCT showcase – the software in action again:

[Image: _MG_8426 Dan]

And a rather fetching picture of me explaining something while unwittingly sporting a large image of a skull:

[Image: _MG_7427 Dan]

Posted in Colourorgan

‘Colourorgan’ in action!

Posted on May 4, 2013 by irdandan

Yesterday the finished ‘colourorgan’ piece was tested along with a series of other works as part of the IOCT Master’s Performance Technologies module practical section.  It held up well technically – despite the finished version of the patcher stretching my poor MacBook to its limits – and produced some interesting sounds as well as some good reactions from the audience.

Interaction with the piece was unexpected – although my instruction was simply ‘show it some colours, see what it does’, there was a flurry of activity as everyone searched the lab for different coloured objects and offered them up to the camera in a way that was almost ritualistic, albeit slightly crazed!  It was interesting to see everyone’s reactions and their interest in how the piece responded.  I think I should have choreographed things more carefully though, perhaps asking participants to perform specific tasks with coloured objects – if I perform the piece again I’ll bear this in mind.

Technically the piece mostly worked – as in testing, it had no problem picking up reds and yellows, struggled more with blue and almost totally ignored green.  There’s probably a good reason for this… I shall continue the research!

Posted in Colourorgan

Project progress

Posted on April 24, 2013 by irdandan

My ‘performance technologies’ project is coming along nicely – currently inhabiting a single MAX patch is a program which does the following:

- Search the internet for images given a specific key word

- Analyse the image for colour content and play some corresponding sounds

- Look at a live video feed and find red, yellow, blue and green areas in the video

- Generate a simple sequence from each colour channel found in the video

Each of these stages is relatively complex – if I have time I’ll post details of how to do each one on a separate page under ‘MAX’.  Here I’ll run through a few general problems I came up against and how I solved them!

Searching the web for images

Using MAX there are a number of strategies for doing this, ranging from the simple (using a built-in object for downloading data) to the complicated (for example, using an external that launches a shell script to perform a search).  Using the built-in object is relatively straightforward, and the process results in a big text file that contains the URLs of several hundred images.  Decoding this text file and extracting the right bits of data is fiddly, but not impossible (and requires the use of ‘regular expressions’, which are the hardest bit!).
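Since Max patchers aren’t text, here’s the URL-extraction step sketched in Python instead – the sample text and the exact pattern are illustrative, not the real format of the downloaded file:

```python
import re

# A fragment of the kind of text a search download might return
# (an invented sample, not the actual response format).
sample = """
<item>http://example.com/photos/1234_abc.jpg more text
<item>http://example.com/photos/5678_def.png trailing junk
not a url here
"""

# Match http(s) URLs that end in a common image extension.
image_url = re.compile(r'https?://\S+?\.(?:jpg|jpeg|png|gif)', re.IGNORECASE)

urls = image_url.findall(sample)
print(urls)  # the two image URLs; the junk lines are ignored
```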

I found the biggest challenge was actually getting the data in the first place.  The best way of automatically searching for images seems to be to use Flickr – they have a public API, which means you can make search requests and get results from code rather than through a web interface.   A simpler way is to create an RSS feed with a keyword search term embedded in it.  The resulting feed contains a URL for each image, which can easily be stripped out and used to download and display the image.
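As a rough Python sketch, embedding the keyword into a Flickr public-feed URL looks like this – the endpoint and parameter names reflect Flickr’s long-standing public feeds as I understand them, so treat them as an assumption:

```python
from urllib.parse import urlencode

def flickr_feed_url(keyword):
    """Build a Flickr public-photos RSS feed URL with the keyword
    embedded as a tag search (endpoint and parameters assumed)."""
    base = "https://www.flickr.com/services/feeds/photos_public.gne"
    return base + "?" + urlencode({"tags": keyword, "format": "rss2"})

print(flickr_feed_url("yellow"))
```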

Analysing the image and playing corresponding sounds

I’ve discussed the technical aspects of this in a previous post.  Actually the major challenge here is working out what sound goes with each colour – which I suppose is one of the major challenges of the project.  Some tentative survey results have started to reveal the answer – I think this might need a dedicated page though!

Finding red, blue, green, yellow in a video feed

For this, as with the static image colour analysis, the first thing I did was reduce the resolution of the image to a 32 x 32 grid – this makes it easier to create musical sequences at the end of the process.  The jitter object ‘jit.findbounds’ can be used to extract the specific regions of a given colour – though the source image needs to have its brightness, contrast and saturation increased to make the process easier. The jit.findbounds object takes a threshold level for each colour to look for and outputs the relevant coordinates – which I’ve fed into a second matrix for display and analysis.  This is mapped onto a live.grid object for sequencing and is further mapped into a more musical form using a pre-defined scale stored in a coll object.
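For readers outside Max, the jit.findbounds step boils down to thresholding one colour plane and taking a bounding box; a toy Python equivalent (the grid values and threshold here are made up):

```python
def find_bounds(grid, channel, threshold):
    """Return (min_x, min_y, max_x, max_y) of cells whose given RGB
    channel exceeds the threshold, or None if nothing matches --
    roughly what jit.findbounds does for one colour."""
    hits = [(x, y)
            for y, row in enumerate(grid)
            for x, cell in enumerate(row)
            if cell[channel] > threshold]
    if not hits:
        return None
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return (min(xs), min(ys), max(xs), max(ys))

# 4x4 toy grid of (r, g, b) cells with a red patch in the lower right.
grid = [[(0, 0, 0)] * 4 for _ in range(4)]
grid[2][2] = (255, 10, 10)
grid[3][3] = (240, 20, 20)

print(find_bounds(grid, 0, 200))  # → (2, 2, 3, 3), the red region
```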

Generating a sequence corresponding to each channel

My original idea was to analyse the content of each colour channel all at once – i.e. the whole matrix in one go.  I’d hoped I could create a chord from this, perhaps mapping pitch from the height of detected colour in the matrix, tone to colour intensity and left / right position to pan.   However, I tried various methods for achieving this ‘all in one’ analysis and fell short each time (or created a patch so convoluted it wouldn’t actually run).

In the end driving a sequencer from each colour has worked out well.  I need to work on the mapping between the sequence and the sound, making sure there is a definite difference between the sound of each channel, perhaps with a distinct set of sounds running from low to high on the grid.
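Mapping grid position to pitch through a stored scale can be sketched in Python – the pentatonic scale and base note below are arbitrary stand-ins for whatever the coll object actually holds:

```python
# A pentatonic scale stored as semitone offsets -- the coll-style lookup.
SCALE = [0, 3, 5, 7, 10]

def row_to_pitch(row, base_note=48):
    """Map a grid row (0 = bottom) to a MIDI note by folding the row
    index onto the stored scale: low rows give low notes."""
    octave, degree = divmod(row, len(SCALE))
    return base_note + 12 * octave + SCALE[degree]

print([row_to_pitch(r) for r in range(6)])  # → [48, 51, 53, 55, 58, 60]
```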

Posted in Colourorgan

Analysing Pictures

Posted on March 24, 2013 by irdandan

I’ve been looking at several ways of picking apart a still image and converting it to sounds or music.  Although programs like Metasynth already do this, the literal interpretation of image width / height as time / frequency, and of pixel intensity as amplitude, can quite often result in some harsh sounds.  Using the image as a filter instead is nicer, and could be easy to implement in MAX too.

To cross-reference with the emotive connections that are often made with sounds (blue, red, bright, dark etc.) I want to try and derive literal colour names from the image, and then apply these to a range of sounds.  So how do I do this?

Stage 1 is to get the image into MAX.  I import an image directly into a jitter matrix for display, then into a second matrix with a fixed resolution for analysis.  Using a 32 x 32 matrix results in a pretty, pixelated image like this:

[Image: Screen Shot 2013-03-24 at 21.44.54]
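The reduction to a 32 x 32 grid is just block-averaging; here’s a Python sketch on a tiny greyscale example (the real patch of course works on full RGB images):

```python
def downsample(pixels, factor):
    """Block-average a square greyscale image (list of rows) by `factor` --
    the same idea as dropping a picture into a low-resolution jitter matrix."""
    size = len(pixels) // factor
    out = []
    for by in range(size):
        row = []
        for bx in range(size):
            block = [pixels[by * factor + y][bx * factor + x]
                     for y in range(factor) for x in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# 4x4 image reduced to 2x2.
img = [[0, 0, 100, 100],
       [0, 0, 100, 100],
       [200, 200, 50, 50],
       [200, 200, 50, 50]]
print(downsample(img, 2))  # → [[0, 100], [200, 50]]
```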

Then it’s easy enough to get an RGB value for any cell using the ‘getcell’ message.  So, a bit of blue sky would have a higher ‘blue’ value than ‘red’ or ‘green’.  So, how to associate that with a word?  I could store away RGB values for each colour I want to identify, then compare each sampled colour to see if it matches?

Well, stage 2 is to convert the image from an RGB colourspace to an HSL one.  HSL stands for Hue, Saturation and Lightness – meaning a single value (hue) is used to describe the colour.  That can be mapped to a standard MAX ‘swatch’ object, which looks like this:

[Image: Screen Shot 2013-03-24 at 21.52.45]

Moving across the X axis you can see how each colour in the visible spectrum is represented.  The Y axis represents Luminance.
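In Python terms, the same RGB-to-HSL conversion is available in the standard colorsys module (note it returns hue, lightness, saturation in that order):

```python
import colorsys

def hue_of(r, g, b):
    """Hue (0.0 - 1.0) of an 8-bit RGB colour, matching the
    left-to-right sweep of the swatch object's X axis."""
    h, lightness, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return h

print(hue_of(255, 0, 0))            # → 0.0 (red sits at the far left)
print(round(hue_of(0, 0, 255), 3))  # → 0.667 (blue, two thirds along)
```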

Anyway, from left to right we get a variety of colours, starting at red, going through green and blue and back to red.  I’ve identified 12 colours across the spectrum which I’m going to use as identifiers for sounds:

RED

ORANGE

YELLOW

YELLOW-GREEN

GREEN

BLUE-GREEN

SKY-BLUE

CYAN

BLUE

PURPLE

PINK

MAGENTA

So as each cell in the matrix is examined, it is cross-referenced with the X axis on the swatch object, its position there being mapped to the list of colours.

Stage 3.  The H value is scaled and sent to a ‘nodes’ object, which calculates a list of weighted values in 2D space… we create a node on the X axis for each colour, mapping this to a table which describes the colour!
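A simplified Python sketch of the whole hue-to-name mapping – this snaps to the nearest node rather than producing the weighted list the ‘nodes’ object gives, and it assumes the 12 colours are evenly spaced around the hue circle:

```python
COLOURS = ["RED", "ORANGE", "YELLOW", "YELLOW-GREEN", "GREEN", "BLUE-GREEN",
           "SKY-BLUE", "CYAN", "BLUE", "PURPLE", "PINK", "MAGENTA"]

def name_for_hue(h):
    """Snap a hue (0.0 - 1.0) to the nearest of the 12 named colours,
    wrapping so hues near 1.0 land back on RED."""
    return COLOURS[round(h * len(COLOURS)) % len(COLOURS)]

print(name_for_hue(0.0))   # → RED
print(name_for_hue(0.34))  # → GREEN
print(name_for_hue(0.67))  # → BLUE
```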

[Image: Screen Shot 2013-03-24 at 22.07.34]

There’s probably a much easier way of doing all this.  If you’re reading this and you can see it – feel free to tell me!

Posted in Colourorgan

Turning a picture into sounds

Posted on March 24, 2013 by irdandan

This is the core technical challenge I’m facing as I create my image-to-sound machine.  Although sounds often make quite pretty pictures, like this spectrogram…

[Image: spectrogram]

…the reverse isn’t necessarily true.  There’s a great track by Aphex Twin called ‘Equation’ that exemplifies this – he’s encoded images of his face into the track using a kind of reverse spectrogram.  The parts of the song where his face appears are quite unpleasant-sounding – only he could get away with including them!

My plan is to associate specific colour values with specific sounds, scanning through an image at a particular rate.  For example, if a bright red is encountered in the image, a bright, angry sound is played – or rather, is more likely to be played.  This will involve splitting the image up into easy-to-manage chunks rather than dealing with it pixel by pixel – analysing each chunk for colour content before passing that information to a sound-generating process.
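That ‘more likely to be played’ idea amounts to a weighted random choice; a Python sketch with entirely made-up sound names and weightings:

```python
import random

# Hypothetical colour-to-sound weightings: a colour makes certain
# sounds more likely rather than always triggering the same one.
WEIGHTS = {
    "RED":  {"stab": 0.7, "pad": 0.1, "bell": 0.2},
    "BLUE": {"stab": 0.1, "pad": 0.7, "bell": 0.2},
}

def pick_sound(colour):
    """Choose a sound for a detected colour, weighted by likelihood."""
    table = WEIGHTS[colour]
    sounds = list(table)
    return random.choices(sounds, weights=[table[s] for s in sounds])[0]

print(pick_sound("RED"))  # usually "stab", occasionally "pad" or "bell"
```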

This throws up a couple of questions:

  1. How do I decide what colours equate to what sounds?
  2. How will I generate the sounds / music?

I’m going to address the second point first.  I’ve already started to put together a library of samples, each with a particular tonal and rhythmic quality.  I want to get a good idea of the colour that’s associated with each sound before I reverse the process, and to do this my plan is to set up a SoundCloud page and invite users to comment on each sound sample only in the form of colours.  Hopefully I can build up a kind of tag list for each sound and identify the most common colours associated with each one.

I also want to investigate the basic components of musical sounds and how these relate to colour.  To do this my plan is to create an online survey asking participants to associate a colour with a series of simple tones, varying only in pitch, envelope shape (i.e. sustained or percussive), tone (i.e. bright or dark) and chord shape (i.e. major, minor, sustained, diminished). These sounds will be free from any other qualities that might affect their perceived ‘colour’, such as varying timbre (for example, orchestral vs. synthesised sounds) or varying reverb amounts.
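The envelope shapes for those survey tones could be specified very simply; a toy Python sketch (the step counts and decay rates are arbitrary):

```python
def envelope(kind, steps=8):
    """Crude amplitude envelopes for the survey tones: 'percussive'
    starts loud and decays, 'sustained' holds level then releases."""
    if kind == "percussive":
        return [0.5 ** n for n in range(steps)]
    if kind == "sustained":
        return [1.0] * (steps - 2) + [0.5, 0.0]
    raise ValueError(kind)

print(envelope("percussive"))
print(envelope("sustained"))
```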

Once I’ve got a good idea of how people generally perceive sounds as colours, I’ll work on a way of playing sounds based on the colour information in an image.  This will involve an increased likelihood of a particular sound playing given a specific input colour, drawing on a fairly broad database of sampled and generated sounds.  I’ll use the MAX object ‘groove~’ to play back samples, and some freely available physically-modelled instrument objects together with some cunning sequencing to generate sounds from scratch.

Posted in Colourorgan

Music, Sound and Colour…

Posted on March 24, 2013 by irdandan

As part of my Master’s coursework I’ve decided to create a machine which essentially translates colour into sound.  Based around the programming environment MAX, this machine will:

  • Analyse a still image, or a video, and create a soundscape according to the colour information contained within.
  • Allow some audience interaction – I think initially via a live video feed, and perhaps, if I have time to build it, via some kind of tangible interface enabling the user to select particular colours, images or other aspects of the process.
  • Create sounds that are musically pleasing and reflective of the colour content of the image.

So – why?

Well, I want people to be able to explore the relationship between colour and sound.  Imagine if you could look around your house and ‘hear’ its colour palette.  Or that you could choose a place, or a familiar image, and listen to music generated from what that place or picture looks like.

The link between colour and emotion – ‘colour psychology’ – is well-trodden ground.  We can all appreciate the association of bright, strong colours like red and yellow with happiness and upbeat emotions, or the association between ‘blue’ and ‘sadness’ (singing the blues, for example).  There’s also a clear link between music and emotion – happy major chords versus sad minor chords, or bright, clear, happy tones versus dull, muted, sad ones.  I want to try and link these two things in a way that people can understand and engage with.

Posted in Colourorgan


© ToyRadio