Little Universe · 小宇宙

Read More

https://vimeo.com/120657370

腾讯视频链接 (video link for visitors in China): http://v.qq.com/page/y/9/n/y0304z44l9n.html

There they are, poetic objects in the Little Universe, revolving around themselves, whispering to each other, regardless of whether or not they are visited by you—their exotic guest.

Colorful as the objects are, they are not mindless clouds of stardust. Each object has its own temperament—a tendency to get close to one companion and to run away from another. In this way, the Little Universe reveals its rich dynamics.

Concept

This interactive installation is designed to bring about a serene and meditative atmosphere. It has an intimate physical form, inviting the audience to interact with it, to touch and move it with their hands. Without interfering with the dialogue between the objects, the audience can still enjoy the poetic scene by watching, listening and contemplating their behaviors and relationships.

While the audience is not explicitly informed of the logic behind the behaviors, the five-element theory (五行) of Chinese philosophy inspires the distinct characteristics of, and relationships between, the objects. The colors of the five objects represent the five elements—metal (金), wood (木), water (水), fire (火), and earth (土). There are two orderings within the five-element system: in the cycle of mutual generation (相生), one element gives birth to another, while in the cycle of mutual overcoming (相克), one element suppresses another. Represented by the attracting and repelling forces between the particle systems surrounding the objects, the relationships between the poetic objects enrich the contemplative intricacy of the Little Universe.

Technology

The Little Universe installation is built to integrate four components:

  1. poetic objects with which the users interact,
  2. projection system,
  3. sound generation system, and
  4. position detection/tracking system.

The poetic objects are made of foam balls implanted with microcontrollers and upward-facing infrared LEDs, placed atop a flat surface that serves as the cosmic background. The infrared light from the objects is captured by a Microsoft Kinect motion sensor and analyzed by an openFrameworks application in order to locate the objects. A particle system is projected onto the objects, and based on the positions of the objects and their relationships, the behaviors of the particle systems are updated and rendered by the openFrameworks application. The image is then processed by MadMapper and sent to the projector, which has been calibrated so that the projection is aligned with the physical objects. Control messages are sent from the openFrameworks application to Ableton Live via the Open Sound Control (OSC) protocol, generating the ambient music and sound effects.

The position detection system precisely locates the objects on the table and tracks them throughout the installation. The projection on each object behaves with distinct characteristics depending on the viewer’s interaction, which requires the system to be able to distinguish the objects from each other. Thus, the application has to be aware of the exact location of each individual object at all times.

The tracking system combines camera vision with infrared LEDs. Using blob detection on the Kinect’s infrared stream, the application precisely locates the objects by their implanted infrared LEDs. A signal-processing procedure then enables the system to distinguish the objects from one another within 300 ms of detection.

The Kinect’s infrared stream runs at up to 30 FPS, which translates to a 30 Hz sampling rate. Theoretically, then, we can reconstruct any input signal with a bandwidth below 15 Hz. The infrared LEDs on the objects blink constantly, generating patterns that the camera vision system perceives as square-wave signals. All of the LEDs generate periodic square waves with a period of one third of a second, but each LED uses a distinct duty cycle. The application tells the objects apart by measuring the duty cycles of the LEDs across the Kinect’s data frames. After detection, the application starts tracking the objects, and in the event of tracking loss or confusion between objects, the same duty-cycle analysis is performed again to re-identify them.
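
For illustration, here is a minimal C++ sketch of that duty-cycle identification step. It is not the installation’s actual code: it assumes a blob tracker that reports per-frame visibility for each candidate position, and the duty-cycle values and function names are my own.

```cpp
// A minimal sketch of the duty-cycle identification described above (not the
// original implementation). It assumes the blob tracker reports, for each
// candidate object position, whether an infrared blob was visible in each
// Kinect frame; the duty-cycle values themselves are illustrative.
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical duty cycles assigned to the five objects: the fraction of the
// 1/3-second blink period during which each LED stays on.
const std::vector<float> kDutyCycles = {0.2f, 0.35f, 0.5f, 0.65f, 0.8f};

// Estimate the duty cycle from a window of on/off observations. At 30 FPS one
// blink period spans 10 frames, so a window of one period (~333 ms) already
// matches the ~300 ms identification time mentioned above.
float estimateDutyCycle(const std::vector<bool>& blobVisible) {
    if (blobVisible.empty()) return 0.0f;
    std::size_t onFrames = 0;
    for (bool on : blobVisible) {
        if (on) ++onFrames;
    }
    return static_cast<float>(onFrames) / blobVisible.size();
}

// Identify the object whose nominal duty cycle is closest to the measured one.
int identifyObject(float measuredDutyCycle) {
    int best = 0;
    float bestDistance = 1.0f;
    for (std::size_t i = 0; i < kDutyCycles.size(); ++i) {
        float distance = std::fabs(kDutyCycles[i] - measuredDutyCycle);
        if (distance < bestDistance) {
            bestDistance = distance;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```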

(Little Universe is a collaboration among SHI Weili, Miri PARK, and Saman TEHRANI, with the guidance of Marko TANDEFELT.)

inSight: The Prehistory of Homo Interscient

Read More

https://vimeo.com/119103265

While the daily routine of archaeologists is dealing with antiquities, we are nonetheless amazed from time to time by the genius of our forefathers, and shocked by the social impact of their inventions. After all, what we treat as artifacts were once the most cutting-edge technologies. Since the dust they raised has long settled, we may learn from what happened in order to predict how technology will affect human beings in the future.

Today we are super excited to introduce our newest discovery, which will fill a long-standing gap in prehistory. As we all know, continuous recorded history begins after the Total War, and before the war there existed two kinds of human beings—homo interscient, which is us, and homo sapiens, our ancestor. It may not be easy for us to imagine that homo sapiens could not really know each other's minds. For us, thinking together with other people equals exchanging ideas with them. For homo sapiens, however, thinking was one thing, and informing other people was another. To exchange ideas, they mainly relied on very primitive methods, such as modulating their voices to carry information at data rates as low as a few bits per second. Not only did they have inefficient ways to communicate within their species, they also lacked solidarity, which may have helped us defeat them in the war.

Nonetheless, it was the homo sapiens who invented us, the homo interscient. We already know that mind communication was treated by them as a scientific and technological breakthrough. We also know that once there was a first community of people like us, they immediately saw themselves as a different kind of human from the old species. So did the homo sapiens. The accumulated enmity between the two species eventually led to the outbreak of the Total War, and apparently we won it, though at a heavy cost. The destruction was so severe that most historical records from before our current era were lost, so that people have no way to know the details of prehistory and can only rely on archaeological discoveries.

The video clip we recently recovered is a significant breakthrough in understanding the early days of the homo interscient. It suggests an answer to an eye-catching question—how was the interscient technology adopted in its early days? As the video (a commercial advertisement) shows, the technology was first promoted as a consumer product named inSight. The manufacturer carefully chose a minimal set of interscient applications (or perhaps that was all they were able to offer at that stage), and advertised the technology as an augmentation of people's lifestyle. This seeming harmlessness may have facilitated early adoption, and once there was a solid community of adopters, the trend could never be stopped.

—Weili, Tyler, and Lama

(inSight: The Prehistory of Homo Interscient is a collaboration among SHI Weili, Tyler HENRY, and Lama SHEHADEH.)

Using Myo Armband in Performances

Read More

In recent months, I have used the Myo armband in various performance-related projects. Although for the moment it is not as magical (powerful and reliable) as it seems in the official promotional video, the armband is nevertheless a very promising sensing technology for body gestures and movement. In this essay, I would like to share my experience of using Myo in technology-enhanced performances1.

(The version of the Myo SDK I am referring to in the main part of this essay is Beta 7. The newest version as I write is 0.8.0, which introduces the ability to acquire raw EMG data.)

System Overview

The Myo SDK exposes its functionality through a C++ API2. C++ is also the native language of major creative coding libraries such as openFrameworks and Cinder, so it is easy to employ the armband in creative coding applications.

More than one armband can be used simultaneously, so the application can make use of data from multiple people or from both arms of one person. However, the SDK does not provide a unique identifier for the program to recognize the armbands across different runs. Therefore, if more than one armband is used in the same application, the role of each armband is assigned on the fly, and the user has the hassle of figuring out which armband plays which role.

By overriding the callback functions of the API, the programmer decides how the application reacts to each type of captured data. Two major types of data—spatial data and gesture data—are gathered, which will be described in detail later. The data is refreshed at a rate high enough to capture body movements in most situations3. The armband communicates with the computer via Bluetooth. However, a dedicated Bluetooth receiver must be plugged into the computer so that the daemon program (Myo Connect) can find the armband. For the armband to fully sense all types of data, it must be worn on the bare skin of the forearm, and a pairing gesture must be performed and recognized every time it is put on.
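
As a rough illustration, a minimal listener might look like the sketch below. The class and callback names follow the Myo C++ API as I recall it from the beta SDK's sample code, so the exact signatures should be treated as an assumption and checked against the SDK headers.

```cpp
// A minimal sketch of a Myo listener in the C++ API, as I recall it from the
// beta SDK. It simply stores the latest orientation and pose for the rest of
// the application to read after each event-loop iteration.
#include <myo/myo.hpp>

class PerformanceListener : public myo::DeviceListener {
public:
    void onOrientationData(myo::Myo* myo, uint64_t timestamp,
                           const myo::Quaternion<float>& rotation) {
        orientation = rotation;          // latest arm orientation
    }

    void onPose(myo::Myo* myo, uint64_t timestamp, myo::Pose pose) {
        currentPose = pose;              // e.g. fist, waveIn, fingersSpread
    }

    myo::Quaternion<float> orientation;
    myo::Pose currentPose;
};

int main() {
    myo::Hub hub("com.example.performance");   // application identifier
    myo::Myo* myo = hub.waitForMyo(10000);     // wait up to 10 s for an armband
    if (!myo) return 1;

    PerformanceListener listener;
    hub.addListener(&listener);

    while (true) {
        hub.run(1000 / 20);   // pump events for 50 ms, then use the latest data
    }
}
```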

The overall stability of the armband and the SDK is good. With careful programming around the connection procedure, the armband can be used as a reliable data source on stage. However, when worn on a thin arm, the armband tends to slide during arm movements, and after every slide the SDK requires the pairing gesture to be performed again before it provides gesture data, which is unacceptable in performances. Moreover, the battery of the armband depletes within days even when it is idle, and the user has no exact indicator of the battery level. Careful preparation must be done before a performance to make sure these known issues cause no problems.

Data Gathering

The Myo armband contains a three-axis accelerometer and a three-axis gyroscope for capturing spatial data, and eight EMG sensors for gesture recognition. For spatial data, the SDK provides:

  • orientation data in the form of a quaternion and Euler angles
  • raw accelerometer readings in the form of a three-dimensional vector
  • raw gyroscope readings in the form of a three-dimensional vector

The quaternion and the Euler angles are different representations of the same arm orientation, of which the latter is easier for humans to interpret. The three components of the Euler angles correspond to the arm’s

  • pitch (vertical angle)
  • yaw (horizontal angle)
  • roll (rotational angle)

The pitch data is very reliable. It always refers to the horizontal plane as its origin and is independent of the horizontal direction of the arm. Therefore, the performer is free to turn around during the performance without worrying about the pitch data going out of range. The data ranges from -π/2 (arm toward the ground) to π/2 (arm toward the sky). In my experience, this is probably the most expressive data the armband provides. People do not raise their arms for no reason, so the vertical direction of the arms is a very good indicator of the performer’s emotional state. By making use of the absolute pitch, or the relative pitch over time, simple but effective mechanisms can be devised to respond to the performer’s emotional state.

The yaw and roll data refer to their initial state as origin. That means that once the armband is initialized, the yaw data represents the arm’s horizontal direction relative to this fixed origin rather than to the current frontal direction of the performer’s body. Consequently, when the performer turns her body, the reading shifts. Since we have no way to capture the performer’s body direction, the yaw data is useless in most cases, unless the performer never turns her body during the whole performance. One possible use of the yaw data is to capture data from both arms and compute their difference to estimate the openness of the arms. Another issue with the yaw and roll data is that the reference coordinate system tends to drift over time, which makes the data even less reliable. These values range from -π to π (representing a whole circle).

The raw data from the accelerometer and gyroscope can also be accessed. In fact, these are the sources from which the SDK calculates the orientation data. Beyond this usage, the data has its own significance—it measures the linear acceleration and the angular velocity of the armband, in units of g (standard gravity) and °/s (degrees per second), respectively. Viewed separately, each component of this data might not be of great use in most performance scenarios. However, if we calculate the SRSS (square root of the sum of the squares) of all the components of either the accelerometer data or the gyroscope data, we get the magnitude of the linear acceleration or of the angular velocity of the arm. These magnitudes are very effective indicators of the intensity of the arm movement, which in turn carries emotional and rhythmical information about the performance.
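
As a concrete sketch of that SRSS calculation, assuming the reading has already been unpacked into three float components:

```cpp
// A minimal sketch of the SRSS (square root of the sum of the squares)
// computation described above, applied to an accelerometer reading in g.
// The same formula works for the gyroscope reading in °/s.
#include <cmath>

float movementIntensity(float x, float y, float z) {
    // Magnitude of the 3-D vector: near 1 g when the arm is still (gravity
    // only), rising well above that during energetic movement.
    return std::sqrt(x * x + y * y + z * z);
}
```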

After proper pairing, the armband also provides gesture data for the hand, indicating which of the following gestures the hand is making:

  • rest
  • fist
  • waving in
  • waving out
  • fingers spread
  • thumb to pinky

However, this data is not as useful as it may seem at first sight. The hand gesture is inferred from the EMG data measured on the skin of the forearm, which is a side effect of the muscle movement. Therefore, the inferred gesture may not faithfully reflect the actual gesture of the hand. What’s more, when external forces are applied to the muscles, the accuracy of the measurement can be greatly affected. In fact, when the performer wears tight clothes on her upper arm, the hand gesture data tends to be nearly unusable.

While it was widely hoped that the armband could recognize custom hand gestures, this feature is still missing from the Myo SDK, which has disappointed many developers. In spite of that, starting from version 0.8.0 of the SDK, the eight streams of raw EMG readings can now be accessed. This not only means that custom gesture recognition becomes possible (though it may require great effort from the developer), but also opens up further possibilities. I suppose the EMG data might be employed by new media artists in ways similar to how EEG data from the brain is used.

Endnotes

  1. A prior essay with a focus on the application of Myo in my project The Humanistic Movement can be accessed via http://shi-weili.com/the-humanistic-movement-bodily-data-gathering-and-cross-application-interoperability/.

  2. Scripting is also supported by the Myo SDK in Myo Scripts.

  3. According to https://www.thalmic.com/blog/raw-uncut-drops-today/, the refresh rate of the armband data appears to be 200 Hz.

(Title image credit: Thalmic Labs)

The Humanistic Movement

Read More

https://vimeo.com/114502459

腾讯视频链接 (video link for visitors in China): http://v.qq.com/page/o/i/u/o0304j7xjiu.html

The Humanistic Movement (THM) is a generative music system. It collaborates with a human dancer, and improvises music using the dancer’s body movements as its inspiration source.

THM is not a body-instrument. It hates one-to-one mapping from gesture to sound, which awkwardly limits the dancer’s movements, making her, rather than the system, responsible for the composition. THM wants the dancer to dance, with confidence that the system will take good care of the music.

Master’s Spirit in Markov Chains

And the dancer need not worry that without her direct control, the system would generate ugly sounds. In fact, THM’s musical style comes from Wolfgang Amadeus Mozart. It has calculated second-order Markov chains1 of the note progression in the first movement of Mozart’s Piano Sonata No. 11 in A major, so it really has Mozart’s style in mind. For every two-note sequence in this work, THM knows the frequencies of all possible following notes. For example, it knows that after the note sequence E4 B3, the frequencies of the following notes are:

With this knowledge, when generating a new note, THM looks back at the last two notes it has generated and looks them up in the Markov chains. It can then follow the frequency table of the following notes, so that it plays in the style of the great master. Because of the randomness built into this process, the music is new in every performance, yet the system has a consistent style all the time, just like a real musician with her own personality.
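
A minimal sketch of this lookup, with hypothetical type and function names (THM’s actual code differs), might look like the following:

```cpp
// A minimal sketch (not THM's actual code) of the second-order Markov lookup
// described above: the last two generated notes index a frequency table, and
// the next note is drawn from that table with probability proportional to the
// observed frequency.
#include <map>
#include <random>
#include <string>
#include <utility>

using NotePair = std::pair<std::string, std::string>;   // e.g. {"E4", "B3"}
using FrequencyTable = std::map<std::string, int>;       // following note -> count

// Built beforehand by scanning the score of the sonata's first movement.
std::map<NotePair, FrequencyTable> markovChain;

std::string nextNote(const std::string& secondLast, const std::string& last,
                     std::mt19937& rng) {
    // Look up the frequency table for the last two notes (throws if the pair
    // never occurred in the training piece; a real system needs a fallback).
    const FrequencyTable& table = markovChain.at({secondLast, last});

    // Weighted random choice proportional to the observed frequencies.
    int total = 0;
    for (const auto& entry : table) total += entry.second;
    std::uniform_int_distribution<int> pick(1, total);
    int remaining = pick(rng);
    for (const auto& entry : table) {
        remaining -= entry.second;
        if (remaining <= 0) return entry.first;
    }
    return table.begin()->first;   // not reached when total > 0
}
```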

Movement-Influenced Melody and Tempo

While THM has its own musical ideas, the dancer still has influence on the composition, with the direction and acceleration of her arm captured by the Myo armband in real time. THM always bases its work on the current state of the dancer, keeping the music in tune with the dance.

Whenever a new note is due, the system first examines whether the dancer’s arm is pointing higher or lower than its direction at the last note, and accordingly looks for relatively higher or lower notes in its style reference. In this way, the dancer influences the melody with her movement. Meanwhile, she does not have to be overstressed, since the responsibility for deciding the exact notes rests on THM. So the dancer can move freely and feel that the melody flows in accordance with her movements.
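
One way such a bias could be implemented is sketched below, with notes represented as MIDI numbers for easy comparison. The structure and names are illustrative assumptions, not THM’s actual implementation.

```cpp
// A minimal sketch of how the arm direction could bias the note choice, with
// notes represented as MIDI numbers. Illustrative only.
#include <map>
#include <random>

using FrequencyTable = std::map<int, int>;   // MIDI note -> observed count

int pickBiasedNote(const FrequencyTable& candidates, int lastNote,
                   bool armPointsHigher, std::mt19937& rng) {
    if (candidates.empty()) return lastNote;

    // Keep only candidates that move in the same direction as the arm;
    // fall back to the full table if no candidate matches.
    FrequencyTable filtered;
    for (const auto& entry : candidates) {
        bool noteIsHigher = entry.first > lastNote;
        if (noteIsHigher == armPointsHigher) filtered.insert(entry);
    }
    const FrequencyTable& pool = filtered.empty() ? candidates : filtered;

    // Weighted random choice, as in the Markov sketch above.
    int total = 0;
    for (const auto& entry : pool) total += entry.second;
    std::uniform_int_distribution<int> pick(1, total);
    int remaining = pick(rng);
    for (const auto& entry : pool) {
        remaining -= entry.second;
        if (remaining <= 0) return entry.first;
    }
    return pool.begin()->first;
}
```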

The relation between arm direction and note progression is most perceivable when the music has a slow tempo. When the music goes faster, the link becomes harder to perceive. Furthermore, this is still a low-level mapping which cannot represent higher-level states of the dancer, such as emotion. To improve its intelligence, THM introduces tempo alteration in accordance with the intensity of the dancer’s movements. In some parts of the music, the system examines the acceleration of the dancer’s arm and generates fast (e.g. eighth) or slow (e.g. quarter) notes according to the reading. The acceleration indicates the speed and complexity of the dancer’s movements, and is therefore a good representation of her emotion. By mapping it to the intensity of the music, THM receives a higher level of influence from the dancer.
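
In code, such a mapping can be as simple as the sketch below, where the acceleration magnitude (the SRSS value in g) selects between note values; the threshold is an assumed tuning value, not THM’s actual number.

```cpp
// A minimal sketch of the tempo alteration described above: the SRSS
// magnitude of the arm acceleration (in g) picks between slower and faster
// note values. The threshold is an illustrative assumption.
enum class NoteLength { Quarter, Eighth };

NoteLength chooseNoteLength(float accelMagnitude) {
    // Near 1 g the arm is roughly still (gravity only); well above that, the
    // dancer is moving energetically, so the music speeds up.
    const float kEnergeticThreshold = 1.5f;   // assumed tuning value
    return (accelMagnitude > kEnergeticThreshold) ? NoteLength::Eighth
                                                  : NoteLength::Quarter;
}
```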

In Pursuit of Rhythm

The rhythm of a time-based artwork is a complex notion, and THM seeks to achieve some measure of it through the organization of musical structure. Besides the above-mentioned tempo alteration, several other strategies are employed in this work.

Repetition and variation of previously generated music occur at the scale of a bar. Although the work of THM is not melodic enough for the audience to memorize a long segment of it, the recurrence of a bar seconds after its first appearance is readily recognizable. When the audience realize that the music is repeating itself (and that the system has memory of its previous work), they are more willing to enjoy the piece as a planned work rather than as totally random hits on the keyboard.

The musical form of THM was also carefully planned. In order to build up the mood of the work effectively, the music has a predefined gradually developing structure:

The music starts slowly with whole notes whose pitches proceed in tune with the dancer’s movements. It speeds up over time, gradually switching through half, quarter, eighth and sixteenth notes. When different note lengths coexist in one bar, the tempo is altered according to the dancer’s movements. When a bar has fixed-length notes fast enough to form a sub-melody, repetition and variation are employed to enhance the rhythm. Chords in a lower octave are gradually introduced to further enrich the sound. After eight bars of sixteenth notes, the fastest part of the piece, the music slows down and finally ends with a whole note on A4, its key note.

System Architecture

This article mainly covers the composition logic of the THM system. The whole architecture of the system is shown in the graph above. Before the composer can make use of the dancer’s movement data, the data has to be captured by the Myo armband and pre-processed by the corresponding infrastructure, which was discussed in The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability. After the notes are composed, they are sent to the sound engine, implemented in Ableton Live with Max for Live, and to the visual engine, implemented in C++ on top of openFrameworks, to produce the sound and the corresponding visuals.

The concept behind THM was further discussed in The Humanistic Movement: Proposal.

Endnotes

  1. The order of the Markov chains determines how closely the system simulates the master’s note choices. With an order of zero, the system chooses notes based on the master’s overall distribution of note frequencies, with no knowledge of its previous composition. The higher the order, the more previously composed notes the system looks back at. Second-order Markov chains already support a reasonably accurate simulation of the master’s style and are reasonably simple to implement.

The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability

Read More

The first step of the THM project is the preparation of infrastructure—in order to base the generative art on bodily data, we need to get the data; in order to implement the core logic and the sound module on separate platforms, we need to enable communication between the two.

After building this infrastructure through the iteration of the first two prototypes, the architecture of the THM system is fixed as follows:

  • It senses bodily data with two Myo armbands.
  • The data is then gathered and analyzed by the core logic, implemented in C++ on top of openFrameworks.
  • The core logic composes the music and sends directions to the sound engine, implemented in Max/MSP, for audio output.
  • With the foundation of openFrameworks, the core logic can compose and render visuals by itself if needed.

Bodily Data Gathering

The screenshot above demonstrates all the bodily data that can be fetched with the Myo SDK. Multiple armbands can be connected to the system at the same time, which enables data capture from both arms.

Two types of bodily data are gathered. The majority is spatial data, which comprises

  • orientation data,
  • accelerometer output, and
  • gyroscope output.

Indicating the direction of the forearm, the orientation data is calculated by the Myo SDK from the accelerometer and gyroscope data, and is originally provided in the form of a quaternion. The THM core logic then translates it into the equivalent Euler angles, which directly correspond to the

  • pitch (vertical angle),
  • yaw (horizontal angle), and
  • roll (rotational angle)

of the arm. The Euler angles are represented both numerically and graphically in the screenshot.
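
For reference, the quaternion-to-Euler conversion follows the standard formulas sketched below. This mirrors what the Myo sample code does as I recall it, but it is written from memory, so the exact sign and axis conventions should be treated as an assumption.

```cpp
// A minimal sketch of the quaternion-to-Euler conversion used to obtain
// pitch, yaw and roll from the Myo orientation data. The formulas are the
// standard ones; the exact sign/axis conventions should be verified against
// the SDK.
#include <algorithm>
#include <cmath>

struct EulerAngles {
    float pitch;   // vertical angle
    float yaw;     // horizontal angle
    float roll;    // rotational angle
};

EulerAngles quaternionToEuler(float w, float x, float y, float z) {
    EulerAngles e;
    e.roll  = std::atan2(2.0f * (w * x + y * z),
                         1.0f - 2.0f * (x * x + y * y));
    // Clamp the asin argument to avoid NaNs from floating-point error.
    float sinPitch = std::max(-1.0f, std::min(1.0f, 2.0f * (w * y - z * x)));
    e.pitch = std::asin(sinPitch);
    e.yaw   = std::atan2(2.0f * (w * z + x * y),
                         1.0f - 2.0f * (y * y + z * z));
    return e;
}
```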

In my experience, the pitch data is very reliable. Referring to the horizontal plane as its origin, the value ranges between -π/2 (arm toward the ground) and π/2 (arm toward the sky), and is unrelated to the frontal direction of the body or to whether the arm is pointing to the front or to the back. After using the armband in a drama performance, I was surprised by how effectively the pitch data indicates the emotional state of the performer. It will likely be one of the most exploited inputs in THM as well.

The reference coordinate system of the yaw and roll data is determined by the position of the armband at the moment the connection is established. Consequently, when the user turns his or her body, the reading shifts. Another issue with these data is that the reference coordinate system tends to drift over time, which further limits the usefulness of the readings. These values range from -π to π (representing a whole circle).

The raw data from the accelerometer and gyroscope can also be accessed via the Myo SDK. They measure the linear acceleration and the angular velocity of the armband, in g (standard gravity) and °/s (degrees per second) respectively. By calculating the SRSS (square root of the sum of the squares) of its components, we get the magnitude of the linear acceleration. When the armband is held still, this number stays near 1, representing the influence of gravity; when the armband is in free fall, the number approaches 0. The acceleration of the arm clearly contains rich rhythmic information, so it is promising to take advantage of the accelerometer and gyroscope output.

Another type of bodily data that can be acquired via the Myo SDK is gesture data. As indicated in the lower right part of the screenshot, when the armband is worn on the arm and the sync gesture is performed and correctly recognized, the SDK provides additional information, including

  • which arm the armband is on,
  • the direction of the x-axis of the armband, and most importantly,
  • the gesture of the hand, which in its turn includes
    • rest
    • fist
    • wave in
    • wave out
    • fingers spread
    • thumb to pinky

While it sounds promising, the gesture data is actually hard to make use of in this project. One reason is that it requires synchronization every time the armband is put on, which is inconvenient and sometimes frustrating, since it is not always easy to get the sync gesture recognized by the Myo SDK. The essential reason, however, is that the gesture data is inferred from the electrical activity measured on the skin of the arm, which is only a side effect of the muscle movement itself. Therefore, the inferred hand gesture cannot be perfectly accurate, and it is sensitive to external influences such as tight clothes on the upper arm or extreme arm positions. Based on these considerations, I currently have little intention of making use of the gesture data in THM.

Cross-application Interoperability

Because the core logic and the sound engine of THM are implemented as separate applications based on different platforms (openFrameworks and Max/MSP, respectively), it is necessary to build a communication mechanism between them. The Open Sound Control (OSC) protocol nicely serves this purpose, enabling the core logic to send data and commands to the sound engine in the form of UDP datagrams. In the core logic, the openFrameworks addon ofxOsc is used to form and send OSC messages. In the sound engine, the udpreceive object receives the messages, and the route object then categorizes the data and commands, as demonstrated in the screenshot above.
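
On the core-logic side, sending a message with ofxOsc looks roughly like the sketch below. The address pattern and port number are illustrative assumptions, not THM’s actual values.

```cpp
// A minimal sketch of the core-logic side of the OSC link using the ofxOsc
// addon. On the Max/MSP side, a [udpreceive 12345] object feeding [route
// /pitch] would receive and dispatch this message.
#include "ofxOsc.h"

ofxOscSender sender;

void setupOsc() {
    sender.setup("localhost", 12345);   // sound engine runs on the same machine
}

void sendArmPitch(float pitch) {
    ofxOscMessage m;
    m.setAddress("/pitch");
    m.addFloatArg(pitch);
    sender.sendMessage(m);
}
```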

The sound engine represented by the screenshot is only a demonstration of the interoperability between it and the core logic. It simply feeds the bodily data to two groups of oscillators to generate sound, with the frequencies driven by the arm pitches over the past few seconds and the amplitudes by the arm yaws. Together with the core logic, these primitive prototypes demonstrate the architecture of the THM system described at the beginning of this article.

The next step of the THM project is to interpret the bodily data at higher levels (such as tempo, rhythm and emotion), and to generate more artistic output.

Frankenstein's Frankenstein

Read More

https://vimeo.com/122130782

In this production of Frankenstein, the team experimented with several digital technologies in order to bring about an immersive theatrical experience.

Two projectors were used simultaneously. The first projector threw huge ambient images (fire, thunder, etc.) onto the backdrop to set the tone. This background image, together with the actors, was then captured by a webcam and sent to VDMX, where the projections were controlled. Before the captured image was sent to the second projector, its brightness and saturation were mapped, respectively, from the forearm directions of the two actors, captured by Myo armbands. As a result, not only were there multiple overlaid images of the actors and the stage, but the attributes of this visual environment were also responsive to the actors' arm movements, which are a simple but effective indicator of the characters' emotions.

At the end of the play, after killing its creator, Frankenstein's monster began to vomit, indicating that it was about to give birth to its next generation—an echo of the circular wording of the play's title.

(Project team: Christopher DAMEN, Mark PUNCHINSKY, Stephanie BEATTIE, SHI Weili, Michael GLEN, Kieun KIM, LIU Jiaqi)

The Humanistic Movement: Proposal

Read More

I’m passionate about body movement and its interaction with space. For the final project of my Major Studio 1, I propose The Humanistic Movement (THM) as an 8-week exploration of generative art from bodily data.

A Humanistic View on Generative Art

Perhaps the broadest definition of generative art goes like this:

Generative art is art created with the use of an autonomous system which generates data or interprets data input, and maps it to human-perceivable scales of visual, audio and/or other sensory data as output.

This kind of definition is open-minded as well as meaningless. Perceivable does not equal affective. Merely mapping data to the human scale doesn’t guarantee good art.

Instead, artists should look for a humanistic scale of output in order to infuse their generative work with rhythm and verve, and thereby provoke emotional and reflective reactions from the audience.

The Model: Generating Art from Body Movements

The humanistic scale itself, however, is a mystery. Not only new media artists, but traditional artists in all artistic domains, have been struggling to touch the human heart. In the making of generative art, though, we may have a shortcut—to wire in human beings themselves as the data source, generating art from humans for humans.

On the other hand, if the generative system merely translates its input mechanically, making a one-to-one mapping from some sensory data stream to a perceivable data set, the result will be by no means exciting from the viewpoint of generative art—the system doesn’t have a genius for art. The best possible outcome is a new piece of “instrument”, which places high demands on its “player” for the creation of good artwork.

THM will be of higher intelligence. Rather than relying on the data source to generate the art, it uses the source only as a reference. The way it works can be imagined as a musician collaborating with a dancer in an improvisatory work. Each is autonomous: the dancer moves her body on her own initiative, just as the musician plays her instrument on hers. They don’t base their every next step exactly on each other’s latest action. Rather, they communicate through higher-level artistic elements such as tempo and mood, exchanging inspirations in the process of the performance and seeking harmony in their collaborative artwork.

THM, as a computational generative art system, will function as the above-mentioned improvisatory musician. It captures data from body movements and generates music and/or visuals in tune with the input, as an attempt to reach the level of artistry that usually belongs to human masters.

If fruitful, THM is going to be my long-term research and making theme, and the outcome will be a systematic approach to humanistic generative music/visuals. For this Major Studio project, after several explorative prototypes, the final outcome will be one art piece demonstrating the concept.

Data-gathering Technologies

In order to capture data from body movements, sensing technologies need to be considered and evaluated. THM doesn’t mean to hinder its human collaborator. Rather, it would like her to move as freely as possible; it would like her to dance.

Several data-gathering technologies have been considered:

1. Myo (Preferred)

Myo is a gesture control armband developed by Thalmic Labs. With a 9-axis inertial measurement unit (IMU), it calculates spatial data about the orientation and movement of the user's arm. In addition, by measuring the electrical activity generated by the arm muscles, it recognizes hand gestures. It is wireless and lightweight, so it does not greatly hinder body movements. Two armbands can be paired with the same computer, enabling movement capture for both arms.

The biggest issue with Myo is that the data it captures can be inaccurate. Since the skin electricity measured by Myo is only a side effect of muscle movement, the measurement can be interfered with by external factors such as muscle-binding clothes (even on the upper arm). Furthermore, when the arm goes to extreme positions, the measured data tends to jump to the opposite extreme. As a loosely coupled generative system, THM does not have strict requirements on data accuracy. Nevertheless, the captured data needs to be pre-processed to reduce unreasonable behaviors of the system.

2. Kinect (Back-up)

Developed and sold by Microsoft for years, Kinect is a sophisticated body-tracking solution that captures the position of the whole human body through a set of cameras. No sensor needs to be attached to the body. There are few reasons not to try it out if time permits. My only concern for now is that it requires the person to stay in front of the cameras in order to be captured, which limits her movement to some extent.

3. Brainwave Gathering Technologies (Alternative)

Besides bodily data, an alternative data source is the mind. Brain-computer interfaces have been researched for decades, and various ready-to-use data-gathering technologies, such as EPOC, Muse and MindWave, are being shipped to the market. A large part of the data gathered by these devices (EEG, attention/meditation levels, etc.) is at a subconscious level. For our goal of gathering humanistic data input, the brainwave approach is also very promising. I might try this alternative in future steps.

Supportive Projects

For a better outcome of this Major Studio 1 project, I plan to merge the final projects of some other courses into it. This will allow me more time to work on it, as well as resources from these supportive courses.

  1. Theoretical research on generative music/visuals in Independent Study: Symphony + Technology, to provide the theoretical basis for this project.
  2. Final project with Max/MSP in Dynamic Sound and Performance, as the sound-generating engine for THM.
  3. (Speculative) final project with openFrameworks in Creativity and Computation: Lab, if required by the course and/or time permits, as the visual-generating engine for THM.

Schedule

After this proposal, there will be three prototypes and a final presentation of the project. The date and expected outcome of each phase are listed below:

  • Oct. 27/29 Prototype 1: getting data from body movement
  • Nov. 03/05 Prototype 2: generating sound/visuals
  • Nov. 17/19 Prototype 3: art directions; finalizing design
  • Dec. 08/10 Final Presentation

New York City Panorama Symphony

Read More

https://vimeo.com/122164868

This project enables the audience to listen to New York City's skyline as a piece of polyphonic music. Panning across the panorama, the audience can not only enjoy a spectacular view of the city's skyscrapers, but also feel the rhythm and texture of these buildings by ear—a somewhat exotic, but truly panoramic experience.

In preparation for music generation, a panorama photo of New York City was cleaned up and downgraded to 8 levels of grayscale. The processed image was then scanned by a Processing sketch. For every vertical line of pixels, the height of the highest non-white pixel defines its base frequency, and the overall darkness of the line defines the amplitude of that base frequency. To enrich the sound, the 6 lowest overtones of the base frequency have their respective amplitudes defined by the amount of each grayscale level in the line, from the darkest to the lightest one—this is how the texture of the buildings is represented in the music. Via the Open Sound Control (OSC) protocol, all of this calculated data is sent from the Processing sketch to a Max patch, where the music is generated accordingly.
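
The original analysis was a Processing sketch; as a rough illustration, the per-column mapping could look like the C++ sketch below, where the scaling constants and the exact assignment of grayscale levels to overtones are my assumptions.

```cpp
// A minimal C++ sketch (the original was a Processing sketch) of the
// per-column analysis described above. Scaling constants and the mapping of
// grayscale levels to overtones are illustrative assumptions.
#include <array>
#include <vector>

const int kNumLevels = 8;   // the 8 grayscale levels of the processed image

struct ColumnSound {
    float baseFrequency;                       // from the height of the skyline
    float baseAmplitude;                       // from the column's overall darkness
    std::array<float, 6> overtoneAmplitudes;   // from the amount of each level
};

// pixels: one vertical line of grayscale levels (0 = darkest ... 7 = white),
// index 0 at the top of the image.
ColumnSound analyzeColumn(const std::vector<int>& pixels) {
    ColumnSound s{};
    int height = static_cast<int>(pixels.size());

    int top = height;                          // highest non-white pixel
    std::array<int, kNumLevels> counts{};      // histogram of levels
    float darknessSum = 0.0f;
    for (int y = 0; y < height; ++y) {
        int level = pixels[y];
        if (level < kNumLevels - 1 && top == height) top = y;
        counts[level]++;
        darknessSum += (kNumLevels - 1 - level) / float(kNumLevels - 1);
    }

    // Taller skyline -> higher base frequency (assumed linear mapping).
    s.baseFrequency = 100.0f + 10.0f * (height - top);
    // Darker columns -> louder fundamental.
    s.baseAmplitude = darknessSum / height;
    // Six overtone amplitudes from the darkest grayscale levels upward.
    for (int i = 0; i < 6; ++i) {
        s.overtoneAmplitudes[i] = counts[i] / float(height);
    }
    return s;
}
```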

(Photo credits: photographed by Jnn13, stitched by LiveChocolate)

You & You

Read More

https://vimeo.com/123268029

You & You is an interactive music program which performs a whole song based on 3 seconds of the user's voice input. A one-man chorus is built through repetition and tonal modification.

This project was implemented in Max/MSP. For Mac users, an OS X build can be downloaded and experienced.

5 in 5

Read More

In this Major Studio 1 assignment, I was supposed to create five projects consecutively, with each project conceptualized, produced and documented in a single day. There was no restriction on theme or medium, and experimental attempts were highly encouraged.

#1: Meditation for 1H

The first three projects are related to my body. I named this series Half Familiar, Half New, because in each project I started with a topic I'm familiar with, yet ended up with an experience I've never had before.

In this very first project, I used time-lapse video to record myself meditating for one hour. I've been doing Zen meditation in recent years, which helps me get closer to a mental state of mindfulness. I also treat meditation as a practice of spiritual strength. However, I had never meditated for as long as one hour.

In addition to extending the meditation time, I also moved it to an outdoor environment. People usually meditate in a quiet room, but since one's senses tend to be more acute during meditation, it might make sense to do so in an open area.

The outcome of my meditation is a mixture of different feelings. First of all, it is indeed physically demanding to meditate on such a time scale. As time passed, it became gradually harder to concentrate on my breath, and easier to drift into a dreamlike mental state. I could feel the weakness of my body, and I was even unsure I could complete the challenge. Fortunately, I endured the hour without problems. When I opened my eyes, the world turned blue, because I had kept my eyes closed under sunlight for so long.

The good news is that, in long meditation, it is relatively easy to fall into a "deeper" mental state even if you are not at optimal physical strength. The experience reinforced my belief that people's physical and mental states are interrelated. Being still for a long time really helped my mind quiet down, which explains why mindfulness is practiced in this kind of bodily posture.

During my meditation in the community park, I was able to recognize various surrounding sounds, such as the splash of the fountain and children playing around me. Making myself quiet helped me better sense the environment, and even my own mental state. However, without the help of a camera, I could never know how my body looked during the meditation. Although predictable, the time-lapse record still surprised me with how motionless I was throughout the whole process. What's more, when reviewing the record, I found many things I hadn't noticed during the process, such as squirrels playing around me at a very short distance and the changing brightness of the sunlight. These discoveries showed the limits of human senses, even in a highly mindful state.

#2: Fifth Avenue—From The New School to Central Park

In the second project, I used a mobile app called Hyperlapse to record what I saw while jogging along Fifth Avenue. I've been a runner for years, so the distance from The New School to Central Park is not a problem for me. The amazing part of this experience is the video captured during the process.

The Instagram team has done such a wonderful job with Hyperlapse that its stabilization algorithm enables people to shoot time-lapse video with a hand-held phone, even while walking. Previously, shooting time-lapse video required a tripod to keep the image stable, which largely limited its usage. Now even a jog can be recorded and condensed into a two-minute video with ease.

To me, the outcome is stunning. The buzzing night of midtown Manhattan was captured and highlighted by these video clips. They are rough but vivid, just like what people tend to make with their first grasp of a cutting-edge technology. An interesting thing is that the recording gave me so much fun that I almost lost myself in it, experimenting with camera placement and movement, forgetting that I was originally there for a jog.

(Music credit: U2, The Miracle)

#3: A Moment of Butoh

In my third and last project in the Half Familiar, Half New series, I tried to recollect my body memories of Butoh movements. As a genre of contemporary dance, Butoh is a mixture of modern dance and traditional Japanese culture. It tends to build a dark and deep atmosphere using restrained body movements.

I had learned some modern dance and Butoh during the spring, but since I began to travel in the summer, I never practiced dance again. This project is an attempt to awaken the dancer in me and to prepare myself to continue learning dance in New York. I set up the room and used a fixed camera to record myself doing random movements, in the hope of producing a montage of me dancing.

The process was quite disappointing at first. I found that my body had almost forgotten the way it used to dance, and I could not achieve the proper tension needed to make clean movements. Every piece of footage was ugly when I first looked at it, and I didn't know what could be done with it.

Then, after collecting an hour of video clips, I stopped dancing, sat down, and looked at them a second time. I began to notice interesting movements in the footage. It might not be so bad if I only collected and arranged these moments. This idea resulted in the following piece. I named it A Moment of Butoh, hoping that in all my random searching, I had found at least one moment that matches the state of a Butoh dancer.

(Music credit: Nocturnal Emissions, 01 from Music For Butoh)

#4: How to Do Chicken Right, Seriously

When I say "seriously," I really mean it. Eating in America makes me homesick, since cooks here don't give chicken the care it deserves, which results in a bland and coarse taste. The Chinese treat chicken as a precious food; they prepare it carefully for their New Year's Eve dinner—and that's the way I was going to cook my chicken.

The slideshow above demonstrates the process of cooking a whole chicken. The basic idea is to remove the blood and body fluid from the meat, and bring out the flavor using various cooking methods. The whole process took me three hours, and the outcome definitely cured a nostalgic stomach.

#5: Pushing the Beats

At the beginning of this semester, I bought myself these MIDI controllers:

I daydreamed about them, but had no time to play with them during the first three weeks. Facing my very last 5 in 5 project, I decided I just wanted to do something with them. Simple things would suffice.

The video below shows me making a sequence of beats using a MIDI controller named Push, a new type of musical instrument that lets the musician modify the arrangement of a musical piece on the fly. What I made is not a masterpiece, apparently. Nevertheless, I had fun making it.

Final Thoughts

Doing 5 in 5 was my first experience of formally documenting my process while doing projects. The video and photo documentation enabled me to reflect on my process, which is full of interesting findings. The documentation also functions as an effective showcase of what has been done. Documenting is therefore a good design practice.

On the other hand, adding an observer affects the process, even if the observer is a camera. To some extent, keeping the observer in mind makes people act as if they were performing. This shifts their focus from the object to themselves as subjects, which is equivalent to the objectification of the subject. Furthermore, the documentation itself becomes another form of outcome, sometimes more interesting than the direct product of the project, and longer-lasting than the latter, especially if the project is action-taking (e.g., performance art) rather than object-making.

The presentation of the 5-in-5 projects in class was eye-opening, since everyone demonstrated his or her unique area of interest and way of doing things. It becomes even more revealing when people are asked to choose their own topics and approaches across a series of projects. This process was also a great opportunity for me to reflect on myself, which hopefully leads to a better understanding of the relationship between me and the world I'm experiencing and affecting.