DT Emotions


https://vimeo.com/127869492

Have you ever taken a close look at your friends' everyday facial expressions? It can be great fun! In this video, the emotions of MFA Design and Technology students during the Red Bull Design Jam in September 2014 are magnified by slow-motion footage and accompanied by hilarious music. How interesting it is to watch familiar things happen on a different time scale.

(Cinematography: WANG Luobin. Video Editing and Sound Design: SHI Weili. Thanks to all the DT friends who showed their emotions in the video!)

Greetings, Kung-fu Masters!


This 30-second piece depicts a friendly exchange of martial arts techniques between two kung-fu masters, Master Lee and Master Chen, entirely in sound. They start by greeting each other, then each demonstrates his recently learned techniques. Eventually they get into a real fight, which ends with a strong final hit and mutual compliments.

Studio recordings of the masters' voices and shouts were mixed with sound effects (air friction, hits, footsteps, and an explosion) and background music. Panning automation brings about a sense of movement across space. The slower pace at the beginning highlights the fast, dense climax. The overall rhythm is kept lighthearted, in accord with the friendly mood of this kung-fu exchange. Recording and editing were done in Pro Tools.

(Special Thanks to Ralph MOREAU for contributing the voice of Master Lee. Music credit: YU Lingling, Ambush from Ten Sides.)

The Humanistic Movement: Proposal


I’m passionate about body movement and its interaction with space. For the final project of my Major Studio 1, I propose The Humanistic Movement (THM), an 8-week exploration of generative art made from bodily data.

A Humanistic View on Generative Art

Perhaps the broadest definition of generative art reads like this:

Generative art is art created with the use of an autonomous system which generates data or interprets data input, and maps it to human-perceivable scales of visual, audio and/or other sensory data as output.

Such a definition is as open-ended as it is meaningless. Perceivable is not the same as affective; merely mapping data to the human scale doesn't guarantee good art.

Instead, artists should look for a humanistic scale of output in order to infuse their generative work with rhythm and verve, and thereby provoke emotional and reflective reactions from the audience.

The Model: Generating Art from Body Movements

The humanistic scale itself, however, is a mystery. Not only new media artists but also traditional artists in every artistic domain have long struggled to touch the human heart. Fortunately, in making generative art we have a shortcut: wiring in human beings themselves as the data source, generating art from humans, for humans.

On the other hand, if the generative system merely translates its input mechanically, making a one-to-one mapping from some sensory data stream to a perceivable data set, the result will be by no means exciting from the viewpoint of generative art; the system has no genius for art. The best possible outcome is a new kind of "instrument," one that places high demands on its "player" before good artwork can be created.

THM will be of higher intelligence. Rather than relying on the data source to generate the art, it uses that source only as a reference. The way it works can be imagined as a musician collaborating with a dancer on an improvised piece. Each is autonomous: the dancer moves her body on her own initiative, just as the musician plays her instrument. They don't base every next step exactly on each other's latest action. Rather, they communicate on higher-level artistic elements such as tempo and mood, exchanging inspiration throughout the performance and seeking harmony in their collaborative work.

THM, as a computational generative art system, will play the role of the improvising musician described above. It captures data from body movements and generates music and/or visuals in tune with that input, attempting to reach a level of artistry usually reserved for human masters.
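To make the idea of loose coupling concrete, here is a minimal Processing-style sketch, not the project's actual code: it assumes a single hypothetical input, movementEnergy (a 0–1 summary of how vigorously the dancer is moving), and lets the system's own tempo drift toward that energy instead of translating every sample one-to-one into sound. The update() function, the tempo range, and the drift factor are all placeholders.

    // A conceptual sketch of loose coupling, not THM's actual code.
    // movementEnergy is a hypothetical 0..1 value summarizing how vigorously
    // the dancer is currently moving (e.g. an averaged accelerometer magnitude).
    float tempo = 90;          // the system's own current tempo in BPM
    float drift = 0.02;        // how strongly it lets itself be influenced (assumed value)

    void update(float movementEnergy) {
      // A calm dancer suggests a slow tempo, a vigorous one a fast tempo.
      float targetTempo = map(movementEnergy, 0, 1, 60, 160);
      // The system keeps its autonomy: it only drifts toward the dancer's energy,
      // so its next step is never dictated exactly by her latest action.
      tempo += (targetTempo - tempo) * drift;
    }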

If fruitful, THM will become my long-term research and making theme, and the outcome will be a systematic approach to humanistic generative music and visuals. For this Major Studio project, after several exploratory prototypes, the final outcome will be one art piece demonstrating the concept.

Data-gathering Technologies

In order to capture data from body movements, sensing technologies need to be considered and evaluated. THM does not mean to hinder its human collaborator. Rather, it would like her to move as freely as possible; it would like her to dance.

Several data-gathering technologies have been considered:

1. Myo (Preferred)

Myo is a gesture-control armband developed by Thalmic Labs. With a 9-axis inertial measurement unit (IMU), it calculates spatial data about the orientation and movement of the user's arm. In addition, by measuring the electrical activity generated by the arm muscles, it recognizes hand gestures. It is wireless and lightweight, so it is not a big hindrance to body movements. Two armbands can be paired with the same computer, enabling movement capture of both arms.

The biggest issue with Myo is that the data it captures is inaccurate. Since the skin electricity measured by Myo is only a side effect of muscle movement, the measurement can be interfered with by external factors such as muscle-binding clothes (even on the upper arm). Furthermore, when the arm reaches extreme positions, the measured data tends to jump to the opposite extreme. As a loosely coupled generative system, THM does not have strict requirements on data accuracy. Nevertheless, the captured data needs to be pre-processed to reduce unreasonable behavior of the system.
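The sketch below illustrates the kind of pre-processing meant here, without assuming anything about the actual Myo SDK: feedPitch() is a hypothetical hook where a raw orientation value from the armband would arrive, and the smoothing factor and jump threshold are placeholder values.

    // Exponential smoothing plus simple glitch rejection, Processing-style Java.
    // feedPitch() is a hypothetical hook where a raw orientation value (in degrees)
    // from the armband would arrive; the Myo SDK itself is not shown.
    float smoothedPitch = 0;
    float alpha = 0.15;        // smoothing factor (assumed); lower means smoother
    float maxJump = 30;        // reject single-frame jumps larger than this (degrees)
    float lastRaw = 0;
    boolean hasData = false;

    void feedPitch(float raw) {
      if (hasData && abs(raw - lastRaw) > maxJump) {
        // Probably one of the "jump to the opposite extreme" glitches: drop the sample.
        return;
      }
      lastRaw = raw;
      hasData = true;
      // An exponential moving average damps sensor jitter while staying responsive.
      smoothedPitch = alpha * raw + (1 - alpha) * smoothedPitch;
    }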

2. Kinect (Back-up)

Developed and sold by Microsoft for years, Kinect is a sophisticated body-tracking solution that captures the position of the whole human body through a set of cameras. No sensor needs to be attached to the body. There are few reasons not to try it out if time permits. My only concern for now is that it requires the person to stay in front of the cameras in order to be captured, which limits her movement to some extent.

3. Brainwave Gathering Technologies (Alternative)

Besides gathering bodily data, an alternative data source is the mind. Brain-computer interfaces have been researched for decades, and various ready-to-use data-gathering technologies, such as EPOC, Muse, and MindWave, are being shipped to the market. A large part of the data gathered by these devices (EEG, attention/meditation levels, etc.) is on the subconscious level. For our goal of gathering humanistic data input, the brainwave approach is also very promising. I might try this alternative in future steps.

Supportive Projects

To achieve a better outcome for this Major Studio 1 project, I plan to merge the final projects of some other courses into it. This will give me more time to work on it, as well as resources from these supportive courses:

  1. Theoretical research on generative music/visuals in Independent Study: Symphony + Technology, providing the theoretical basis for this project.
  2. Final project with Max/MSP in Dynamic Sound and Performance, serving as the sound-generating engine for THM.
  3. (Speculative) Final project with openFrameworks in Creativity and Computation: Lab, if required by the course and/or time permits, serving as the visual-generating engine for THM.

Schedule

After this proposal, there will be three prototypes and a final presentation of the project. The dates and expected outcomes of each phase are listed below:

  • Oct. 27/29 Prototype 1: getting data from body movement
  • Nov. 03/05 Prototype 2: generating sound/visuals
  • Nov. 17/19 Prototype 3: art directions; finalizing design
  • Dec. 08/10 Final Presentation

New York City Panorama Symphony


https://vimeo.com/122164868

This project enables the audience to listen to New York City's skyline as a piece of polyphonic music. Panning across the panorama, the audience can not only enjoy a spectacular view of the city's skyscrapers but also feel the rhythm and texture of the buildings by ear: a somewhat exotic, yet truly panoramic, experience.

In preparation for music generation, a panorama photo of New York City was cleaned up and reduced to 8 levels of grayscale. The processed image is scanned by a Processing sketch. For every vertical line of pixels, the height of the highest non-white pixel defines its base frequency, and the overall darkness of the line defines the amplitude of that base frequency. To enrich the sound, the 6 lowest overtones of the base frequency have their respective amplitudes defined by the amount of each grayscale level in the line, from the darkest to the lightest; this is how the texture of the buildings is represented in the music. Via the Open Sound Control (OSC) protocol, all of these calculated values are sent from the Processing sketch to a Max patch, where the music is generated accordingly.
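For illustration, here is a simplified Processing sketch of the column-scanning logic described above, assuming the oscP5 library; the file name, OSC address and port, and the exact pixel-to-frequency mapping are placeholders rather than the values used in the actual project.

    // Simplified column-scanning sketch, assuming the oscP5 library.
    // File name, OSC address/port and the frequency range are placeholders.
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    NetAddress maxPatch;
    PImage pano;
    int x = 0;                  // current vertical line being "played"
    int levels = 8;             // the image was reduced to 8 grayscale levels

    void setup() {
      size(800, 200);
      pano = loadImage("panorama_8levels.png");     // hypothetical file name
      pano.loadPixels();
      osc = new OscP5(this, 12000);
      maxPatch = new NetAddress("127.0.0.1", 8000); // where the Max patch listens
      frameRate(30);            // scan 30 columns per second (assumed pace)
    }

    void draw() {
      if (x >= pano.width) { noLoop(); return; }

      int[] levelCount = new int[levels];   // pixels per grayscale level in this line
      int top = pano.height;                // row of the highest non-white pixel
      float darkness = 0;                   // overall darkness of the line

      for (int y = 0; y < pano.height; y++) {
        float b = brightness(pano.get(x, y));              // 0 = black, 255 = white
        int lvl = int(map(b, 0, 255, 0, levels - 1) + 0.5);
        if (lvl < levels - 1) {                            // a non-white pixel
          if (y < top) top = y;
          levelCount[lvl]++;
          darkness += (255 - b) / 255.0;
        }
      }

      // The height of the highest non-white pixel sets the base frequency;
      // the line's overall darkness sets its amplitude.
      float buildingHeight = pano.height - top;
      float baseFreq = map(buildingHeight, 0, pano.height, 110, 880);  // assumed range
      float baseAmp  = darkness / pano.height;

      OscMessage m = new OscMessage("/column");            // assumed OSC address
      m.add(baseFreq);
      m.add(baseAmp);
      // The 6 lowest overtones take their amplitudes from the share of each
      // grayscale level in the line, from the darkest to the lightest.
      for (int i = 0; i < 6; i++) {
        m.add(levelCount[i] / float(pano.height));
      }
      osc.send(m, maxPatch);
      x++;
    }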

(Photo credits: photographed by Jnn13, stitched by LiveChocolate)

You & You


https://vimeo.com/123268029

You & You is an interactive music program which performs a whole song based on 3 seconds of the user's voice input. A one-man chorus is built through repetition and tonal modification.

This project was implemented in Max/MSP. For Mac users, an OS X build can be downloaded and experienced.
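Since the piece itself is a Max patch, the snippet below is only a rough Processing-style illustration of "tonal modification": transposing a recorded buffer by resampling, so the same 3-second phrase could be layered at different pitches to build the one-man chorus. The function name and the linear-interpolation approach are mine, not the patch's actual method, and this naive approach also changes the phrase's duration, unlike a proper pitch shifter.

    // A rough illustration of transposing a recorded phrase by resampling.
    // voice holds the recorded samples; semitones is the desired transposition.
    float[] transpose(float[] voice, float semitones) {
      float ratio = pow(2, semitones / 12.0);   // pitch ratio for the interval
      int outLen = int(voice.length / ratio);
      float[] out = new float[outLen];
      for (int i = 0; i < outLen; i++) {
        // Reading the source buffer faster (or slower) raises (or lowers) the pitch.
        float pos = i * ratio;
        int j = int(pos);
        float frac = pos - j;
        float a = voice[min(j, voice.length - 1)];
        float b = voice[min(j + 1, voice.length - 1)];
        out[i] = lerp(a, b, frac);              // linear interpolation between samples
      }
      return out;
    }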