This project enables the audience to listen to New York City's skyline as a piece of polyphonic music. Panning across the panorama, the audience can not only enjoy a spectacular view of the city's skyscrapers, but also feel the rhythm and texture of these buildings by ear—a somewhat exotic, but truly panoramic experience.
In preparation for music generation, a panorama photo of New York City was cleaned up and reduced to 8 levels of grayscale. The processed image was then scanned by a Processing sketch. For every vertical line of pixels, the height of the highest non-white pixel defines the line's base frequency, and the line's overall darkness defines the amplitude of that base frequency. To enrich the sound, the 6 lowest overtones of the base frequency have their respective amplitudes defined by the proportion of each grayscale level in the line, from the darkest level to the lightest—this is how the texture of the buildings is represented in the music. Via the Open Sound Control (OSC) protocol, all of this calculated data is sent from the Processing sketch to a Max patch, where the music is generated accordingly.
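The per-column analysis described above can be sketched in plain Java (the same language Processing sketches compile to). Everything not stated in the original is an assumption and is marked as such: the 0–7 gray-level encoding, the direction and range of the height-to-frequency mapping, and the exact level-to-overtone assignment are all illustrative choices, not the project's actual parameters.

```java
// Hedged sketch of the skyline-to-sound analysis for ONE vertical pixel line.
// Assumptions (not specified in the original project description):
//   - gray levels are encoded 0..7, where 0 = darkest and 7 = white (sky);
//   - a taller building (roofline nearer the top) maps to a higher pitch,
//     linearly into an arbitrary 100..1000 Hz range;
//   - overtone k takes its amplitude from the share of pixels at gray
//     level k-1, darkest level first.
public class SkylineColumn {
    static final int LEVELS = 8;          // 8-level grayscale image
    static final int WHITE = LEVELS - 1;  // level 7 treated as empty sky

    // Index (from the top) of the highest non-white pixel; -1 if the
    // column is entirely sky.
    static int rooflineIndex(int[] column) {
        for (int y = 0; y < column.length; y++) {
            if (column[y] != WHITE) return y;
        }
        return -1;
    }

    // Base frequency from building height (hypothetical linear mapping).
    static double baseFrequency(int[] column) {
        int roof = rooflineIndex(column);
        if (roof < 0) return 0.0; // empty column: silence
        double height = (column.length - roof) / (double) column.length;
        return 100.0 + 900.0 * height;
    }

    // Amplitude of the base frequency: the column's overall darkness,
    // normalized to 0..1 (white contributes 0, pure black contributes 1).
    static double baseAmplitude(int[] column) {
        double sum = 0;
        for (int v : column) sum += (WHITE - v) / (double) WHITE;
        return sum / column.length;
    }

    // Amplitudes of the 6 lowest overtones: the fraction of pixels at
    // each of the 6 darkest gray levels (level 0 -> first overtone, etc.).
    static double[] overtoneAmplitudes(int[] column) {
        double[] amps = new double[6];
        for (int v : column) {
            if (v < 6) amps[v] += 1.0 / column.length;
        }
        return amps;
    }

    public static void main(String[] args) {
        // A toy 10-pixel column: three sky pixels above a dark building.
        int[] col = {7, 7, 7, 0, 0, 1, 1, 2, 3, 3};
        System.out.printf("base freq: %.1f Hz%n", baseFrequency(col));
        System.out.printf("base amp:  %.2f%n", baseAmplitude(col));
        double[] o = overtoneAmplitudes(col);
        for (int k = 0; k < o.length; k++) {
            System.out.printf("overtone %d amp: %.2f%n", k + 1, o[k]);
        }
    }
}
```

In the real sketch, the resulting frequency and amplitude values would be packed into an OSC message (e.g. with a library such as oscP5) and sent to the Max patch, which synthesizes the base tone plus its six overtones.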