I’m passionate about body movement and its interaction with space. For the final project of my Major Studio 1, I propose The Humanistic Movement (THM) as an 8-week exploration of generative art from bodily data.
A Humanistic View on Generative Art
Perhaps the broadest definition of generative art reads:
Generative art is art created with the use of an autonomous system which generates data or interprets data input, and maps it to human-perceivable scales of visual, audio and/or other sensory data as output.
This kind of definition is as open-ended as it is close to meaningless. Perceivable does not equal affective: merely mapping data to the human scale doesn’t guarantee good art.
Instead, artists should look for a humanistic scale of output in order to infuse their generative work with rhythm and verve, and thereby provoke emotional and reflective reactions from the audience.
The Model: Generating Art from Body Movements
The humanistic scale itself, however, is a mystery. Not only new media artists, but traditional artists in every artistic domain, have struggled to touch the human heart. In the making of generative art, hopefully, we have a shortcut: wire in human beings themselves as the data source, generating art from humans, for humans.
On the other hand, if the generative system merely translates its input mechanically, making a one-to-one mapping from some sensory data stream to a perceivable data set, the result will be by no means exciting from the viewpoint of generative art: the system has no artistic genius of its own. The best possible outcome is a new kind of “instrument”, which places high demands on its “player” before good artwork can be created.
THM will be of higher intelligence. Rather than relying on the data source to generate art, it only uses it as a reference. The way it works can be imagined as a musician collaborating with a dancer in an improvisatory work. Each one is autonomous: the dancer moves her body on her own initiative, just as the musician plays her instrument on hers. They don’t base every next step exactly on each other’s latest action. Rather, they communicate on higher levels of artistic elements such as tempo and mood, exchanging inspiration throughout the performance and seeking harmony in their collaborative artwork.
THM, as a computational generative art system, will play the role of the improvisatory musician above. It captures data from body movements and generates music and/or visuals in tune with the input, attempting a level of artistry that usually belongs to human masters.
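To make this higher-level communication concrete, here is a minimal sketch of one possible approach. All function names, thresholds, and ranges are my own illustrative assumptions, not part of the proposal or any SDK: a coarse feature such as movement "energy" is extracted from recent IMU acceleration samples and mapped to a suggested tempo, which the generative engine is then free to interpret loosely rather than obey sample-by-sample.

```python
import math

def movement_energy(accel_window):
    """Root-mean-square magnitude of recent (x, y, z) acceleration samples.

    A crude stand-in for 'how vigorously the dancer is moving'.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in accel_window]
    return math.sqrt(sum(m * m for m in mags) / len(mags))

def suggest_tempo(energy, low=60.0, high=160.0, e_min=0.0, e_max=3.0):
    """Map energy onto a BPM range (illustrative bounds).

    The result is a suggestion the music engine drifts toward,
    not a command it must follow, preserving the 'two autonomous
    performers' relationship described above.
    """
    t = max(0.0, min(1.0, (energy - e_min) / (e_max - e_min)))
    return low + t * (high - low)
```

Feeding the suggested tempo into the sound engine as a target to approach, rather than a value to copy, is what distinguishes this design from a one-to-one mapping "instrument".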
If fruitful, THM will become my long-term research and making theme, and the outcome will be a systematic approach to humanistic generative music/visuals. For this Major Studio project, after several exploratory prototypes, the final outcome will be one art piece demonstrating the concept.
In order to capture data from body movements, sensing technologies need to be considered and evaluated. THM does not mean to hinder its human collaborator. Rather, it would like her to move as freely as possible; it would like her to dance.
Several data-gathering technologies have been considered:
1. Myo (Preferred)
Myo is a gesture control armband developed by ThalmicLabs.
With a 9-axis inertial measurement unit (IMU), it calculates spatial data about the orientation and movement of the user's arm. In addition, by measuring the electrical activity generated by arm muscles, it recognizes hand gestures. It is wireless and lightweight, not a big hindrance to body movements. Two armbands can be paired to the same computer, enabling movement capture of both arms.
The biggest issue with Myo is that the data it captures is inaccurate. Since the skin electrical activity measured by Myo is only a side effect of muscle movement, the measurement can be disturbed by external factors such as muscle-binding clothes (even on the upper arm). Furthermore, when the arm moves to an extreme position, the measured data tends to jump to the opposite extreme. As a loosely coupled generative system, THM does not have strict requirements on data accuracy. Nevertheless, the captured data needs to be pre-processed to reduce unreasonable behaviors of the system.
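One way such pre-processing could look, sketched here under assumptions of my own (the class name, smoothing factor, and jump threshold are illustrative and would need tuning per channel; this is not Myo SDK code), is an exponential moving average with simple outlier rejection, which smooths jitter and discards the sudden extreme-position jumps described above:

```python
class SmoothedChannel:
    """Exponential moving average with outlier rejection for one sensor channel.

    alpha controls smoothing strength; max_jump is the largest per-sample
    change accepted as a real movement (both are illustrative values).
    """

    def __init__(self, alpha=0.2, max_jump=0.5):
        self.alpha = alpha
        self.max_jump = max_jump
        self.value = None  # no sample seen yet

    def update(self, sample):
        if self.value is None:
            # First sample initializes the filter.
            self.value = sample
        elif abs(sample - self.value) > self.max_jump:
            # Reject implausible jumps, e.g. the flip-to-opposite-extreme
            # artifact, and keep the previous smoothed value.
            pass
        else:
            # Standard exponential moving average step.
            self.value += self.alpha * (sample - self.value)
        return self.value
```

In practice one filter instance would run per orientation axis, and the generative layer would only ever see the smoothed values.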
2. Kinect (Back-up)
Having been developed and sold by Microsoft for years, Kinect is a sophisticated body-tracking solution which captures the position of the whole human body through a set of cameras. No sensor needs to be attached to the body. There are few reasons not to try it out if time is sufficient. My only concern for now is that it requires the person to stay in front of the cameras in order to be captured, which limits her movement to some extent.
3. Brainwave Gathering Technologies (Alternative)
Besides gathering bodily data, an alternative data source is the mind. Brain-computer interfaces have been researched for decades, and various ready-to-use data-gathering technologies, such as EPOC, Muse and MindWave, have been shipped to the market. A large part of the data gathered by these devices (EEG, attention/meditation level, etc.) is at the subconscious level. For our goal of gathering humanistic data input, the brainwave approach is also very promising. I might try this alternative in future steps.
For a better outcome of this Major Studio 1 project, I plan to merge the final projects of some other courses into it. This will give me more time to work on the project, as well as resources from these supporting courses.
- Theoretical Research on Generative Music/Visuals in Independent Study: Symphony + Technology, to provide theoretical basis for this project.
- Final project with Max/MSP in Dynamic Sound and Performance, as the sound-generating engine for THM.
- (Speculative) final project with openFrameworks in Creativity and Computation: Lab, if required by the course and/or time permits, as the visual-generating engine for THM.
After this proposal, there will be three prototypes and a final presentation of the project. The dates and expected outcomes of each phase are listed below:
- Oct. 27/29 Prototype 1: getting data from body movement
- Nov. 03/05 Prototype 2: generating sound/visuals
- Nov. 17/19 Prototype 3: art directions; finalizing design
- Dec. 08/10 Final Presentation