Using Myo Armband in Performances

In recent months, I have used the Myo armband in various performance-related projects. Although it is not yet as magical (powerful and reliable) as the official promotional video suggests, the armband is nevertheless a very promising sensing technology for body gestures and movements. In this essay, I would like to share my experience of using Myo in technology-enhanced performances[1].

(The version of the Myo SDK I refer to in the main part of this essay is Beta 7. The newest version as I write this text is 0.8.0, which introduces the ability to acquire raw EMG data.)

System Overview

The Myo SDK exposes its functionality through a C++ API[2], and C++ is the native language of many major creative coding libraries such as openFrameworks and Cinder. Therefore, it is easy to employ the armband in creative coding applications.

More than one armband can be used simultaneously, so the application can make use of data from multiple people or from both arms of one person. However, the SDK does not provide a unique identifier that lets the program recognize each armband across different runs. Therefore, if more than one armband is used in the same application, the role of each armband has to be assigned on the fly, which is a hassle for the user to keep track of.

By overriding the callback functions of the API, the programmer decides how the application reacts to each type of captured data. Two major types of data are gathered—spatial data and gesture data—which will be described in detail later. The data is refreshed at a rate high enough to capture body movements in most situations[3]. The armband communicates with the computer via Bluetooth. However, a dedicated Bluetooth receiver must be plugged into the computer so that the daemon program (Myo Connect) can find the armband. In order for the armband to fully sense all types of data, it must be worn on the bare skin of the forearm, and a pairing gesture must be performed and recognized every time it is put on the arm.
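As a rough illustration of this callback pattern, here is a minimal listener sketch modeled on the SDK's bundled sample code (the class name and application identifier are my own, and details may differ slightly between SDK versions):

```cpp
#include <iostream>
#include <myo/myo.hpp>

// The listener receives events from the hub; we override only the
// callbacks we care about.
class PerformanceListener : public myo::DeviceListener {
public:
    void onOrientationData(myo::Myo* myo, uint64_t timestamp,
                           const myo::Quaternion<float>& quat) {
        // Spatial data arrives here; convert the quaternion to Euler
        // angles as needed (see the conversion sketch below).
    }

    void onPose(myo::Myo* myo, uint64_t timestamp, myo::Pose pose) {
        // Gesture data arrives here once the pairing gesture has been
        // recognized.
        std::cout << "Pose: " << pose.toString() << std::endl;
    }
};

int main() {
    myo::Hub hub("com.example.performance");  // application identifier
    myo::Myo* myo = hub.waitForMyo(10000);    // wait up to 10 s for an armband
    if (!myo) return 1;

    PerformanceListener listener;
    hub.addListener(&listener);
    while (true) {
        hub.run(1000 / 20);                   // pump events for 50 ms per call
    }
}
```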

The overall stability of the armband and the SDK is good. With careful programming around the connection procedure, the armband can be used as a reliable data source on stage. However, when worn on a thin arm, the armband tends to slide during arm movements, and after every slide the SDK requires the pairing gesture to be performed again before it provides gesture data, which is unacceptable in performances. Moreover, the battery of the armband depletes within days even when the armband is idle, and there is no exact indicator of the battery level anywhere. Careful preparation must be done before the performance to make sure these known issues cause no problems.

Data Gathering

The Myo armband contains a 3-axis accelerometer and a 3-axis gyroscope for capturing spatial data, and eight EMG sensors for gesture recognition. For spatial data, the SDK provides:

  • orientation data in the form of a quaternion and Euler angles
  • raw accelerometer readings in the form of a 3-dimensional vector
  • raw gyroscope readings in the form of a 3-dimensional vector

The quaternion and the Euler angles are different representations of the same arm orientation, of which the latter is easier for humans to interpret. The three components of the Euler angles correspond to the arm’s

  • pitch (vertical angle)
  • yaw (horizontal angle)
  • roll (rotational angle)
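For reference, the quaternion can be converted into these Euler angles with the standard formulas used in the SDK's sample code (q here is the myo::Quaternion<float> delivered by the orientation callback):

```cpp
#include <algorithm>
#include <cmath>

// Standard quaternion-to-Euler conversion, as in the SDK samples.
float roll  = std::atan2(2.0f * (q.w() * q.x() + q.y() * q.z()),
                         1.0f - 2.0f * (q.x() * q.x() + q.y() * q.y()));
float pitch = std::asin(std::max(-1.0f, std::min(1.0f,
                         2.0f * (q.w() * q.y() - q.z() * q.x()))));
float yaw   = std::atan2(2.0f * (q.w() * q.z() + q.x() * q.y()),
                         1.0f - 2.0f * (q.y() * q.y() + q.z() * q.z()));
```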

The pitch data is very reliable. It always uses the horizontal plane as its origin and is unrelated to the horizontal direction of the arm. Therefore, the performer is free to turn around during the performance without worrying about the pitch data going out of scope. The data ranges from -π/2 (arm towards the ground) to π/2 (arm towards the sky). In my experience, this is probably the most expressive kind of data provided by the armband. People do not raise their arms for no reason; the vertical direction of the arms is thus a very good indicator of the performer's emotional state. By making use of the absolute pitch, or of the relative pitch over time, simple but effective mechanisms can be conceived to respond to the performer's emotional state.
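For instance, a minimal sketch of such a mechanism using openFrameworks' ofMap (the brightness target is purely illustrative):

```cpp
// Normalize pitch (-π/2..π/2) into a 0..1 "elevation" value (clamped):
// 0 = arm towards the ground, 1 = arm towards the sky.
float elevation = ofMap(pitch, -HALF_PI, HALF_PI, 0.0f, 1.0f, true);
brightness = elevation;  // drive any expressive parameter, e.g. light level
```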

The yaw and roll data use their initial state as origin. That means that once the armband is initialized, the yaw data represents the arm's horizontal direction relative to this fixed origin, rather than to the current frontal direction of the performer's body. In consequence, when the performer turns her body, the readings shift. Since we have no way to capture the performer's body direction, the yaw data is useless in most cases, unless the performer never turns her body during the whole performance. One possible use of the yaw data is to capture readings from both arms and calculate their difference to estimate the openness of the arms, as sketched below. Another issue with the yaw and roll data is that the reference coordinate tends to drift over time, which makes the data even less reliable. These data range from -π to π (representing a whole circle).
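A minimal sketch of that idea, assuming yaw readings from two armbands are already at hand (PI and TWO_PI are openFrameworks constants):

```cpp
#include <cmath>

// Wrap the yaw difference into -π..π to handle the discontinuity at ±π.
float d = leftYaw - rightYaw;
while (d >  PI) d -= TWO_PI;
while (d < -PI) d += TWO_PI;
float openness = std::fabs(d);  // 0 = arms parallel, π = arms opposite
```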

The raw data from the accelerometer and gyroscope can also be accessed. In fact, these are the data sources from which the SDK calculates the orientation data. Beyond this usage, the data has its own significance—it measures the linear acceleration and angular velocity of the armband, in units of g (standard gravity) and °/s (degrees per second) respectively. Viewed separately, each component of this data might not be of great use in most performance scenarios. However, if we calculate the SRSS (square root of the sum of the squares) of the components of either the accelerometer or the gyroscope data, we get the magnitude of the arm's linear acceleration or angular speed. These are very effective indicators of the intensity of the arm movement, which in turn carries emotional and rhythmical information about the performance.
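The SRSS computation itself is a one-liner per sensor (the component names here are my own for the raw readings):

```cpp
#include <cmath>

// Magnitude of linear acceleration in g. When the arm is still this stays
// near 1 (gravity); vigorous movement pushes it well above 1.
float accelMagnitude = std::sqrt(ax * ax + ay * ay + az * az);

// The same formula over the gyroscope reading gives the overall angular
// speed of the arm in °/s.
float gyroMagnitude = std::sqrt(gx * gx + gy * gy + gz * gz);
```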

After proper pairing, the armband also provides gesture data, indicating which of the following gestures the hand is making:

  • rest
  • fist
  • waving in
  • waving out
  • fingers spread
  • thumb to pinky

However, this data is not as useful as it may seem at first sight. The hand gesture is inferred from EMG data measured on the skin of the forearm, which is a side effect of the muscle movement. Therefore, the recognized gesture may not faithfully reflect the actual gesture of the hand. What's more, when external forces are applied to the muscles, the accuracy of the measurement suffers greatly. In fact, when the performer wears tight clothes on her upper arm, the hand gesture data tends to be nearly unusable.

While it was widely hoped that the armband could recognize custom hand gestures, this feature is still missing from the Myo SDK, which has disappointed many developers. In spite of that, starting from version 0.8.0 of the SDK, the eight streams of raw EMG readings can now be accessed. This not only means that custom gesture recognition becomes possible (though it might require great effort from the developer), but also opens up various other possibilities. I suppose the EMG data might be employed by new media artists in ways similar to how EEG data from the brain is used.
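For the curious, accessing the raw EMG streams follows the same listener pattern as the other callbacks. This sketch is modeled on the SDK's emg-data-sample, and the exact details may differ between versions:

```cpp
#include <cstdint>
#include <myo/myo.hpp>

class EmgListener : public myo::DeviceListener {
public:
    void onEmgData(myo::Myo* myo, uint64_t timestamp, const int8_t* emg) {
        // emg points to the eight raw sensor readings (one int8_t per
        // sensor); store them for a custom classifier, or use them
        // directly as expressive signals.
        for (int i = 0; i < 8; ++i) {
            latest[i] = emg[i];
        }
    }

    int8_t latest[8] = {0};
};

// EMG streaming must be enabled explicitly for each armband, e.g.:
// myo->setStreamEmg(myo::Myo::streamEmgEnabled);
```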

Endnotes

  1. A prior essay with a focus on the application of Myo in my project The Humanistic Movement can be accessed via http://shi-weili.com/the-humanistic-movement-bodily-data-gathering-and-cross-application-interoperability/.

  2. Scripting is also supported by the Myo SDK, in the form of Myo Scripts.

  3. According to https://www.thalmic.com/blog/raw-uncut-drops-today/, the refresh rate of the armband data seems to be 200 Hz.

(Title image credit: Thalmic Labs)

The Humanistic Movement

https://vimeo.com/114502459

Tencent Video link (for visitors in China): http://v.qq.com/page/o/i/u/o0304j7xjiu.html

The Humanistic Movement (THM) is a generative music system. It collaborates with a human dancer and improvises music, using the dancer's body movements as its source of inspiration.

THM is not a body-instrument. It hates one-to-one mappings from gesture to sound, which awkwardly limit the dancer's movements and make her, rather than the system, responsible for the composition. THM wants the dancer to dance, confident that the system will take good care of the music.

Master’s Spirit in Markov Chains

And the dancer need not worry that, without her direct control, the system would generate ugly sounds. In fact, THM's musical style comes from Wolfgang Amadeus Mozart. It has calculated second-order Markov chains[1] of the note progressions in the first movement of Mozart's Piano Sonata No. 11 in A major, so it really has Mozart's style in mind. For every two-note sequence in this work, THM knows the frequencies of all possible following notes. For example, it knows the frequencies of the notes that follow the sequence E4 B3:

With this knowledge, when generating a new note, THM looks back at the last two notes it has generated and looks them up in the Markov chains. It can then follow the frequency table of the possible next notes, so that it plays in the style of the great master. Because of the randomness built into this process, the music is new in every performance, yet the system maintains a consistent style throughout, just like a real musician with her own personality.
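A compact sketch of this mechanism (not the actual THM code; the note representation and data structures are illustrative):

```cpp
#include <map>
#include <random>
#include <string>
#include <vector>

using Note = std::string;                      // e.g. "E4", "B3"
using Context = std::pair<Note, Note>;         // the last two notes

std::map<Context, std::map<Note, int>> chain;  // context -> follower counts

// Count, for every two-note context in the training melody, how often
// each next note follows it.
void train(const std::vector<Note>& melody) {
    for (size_t i = 2; i < melody.size(); ++i)
        chain[{melody[i - 2], melody[i - 1]}][melody[i]]++;
}

// Sample the next note in proportion to the observed frequencies.
Note nextNote(const Note& a, const Note& b, std::mt19937& rng) {
    const std::map<Note, int>& followers = chain[{a, b}];
    int total = 0;
    for (const auto& kv : followers) total += kv.second;
    std::uniform_int_distribution<int> dist(1, total);  // assumes total > 0
    int r = dist(rng);
    for (const auto& kv : followers) {
        r -= kv.second;
        if (r <= 0) return kv.first;
    }
    return followers.begin()->first;  // not reached when total > 0
}

// Usage: std::mt19937 rng(std::random_device{}());
//        Note n = nextNote("E4", "B3", rng);
```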

Movement-Influenced Melody and Tempo

While THM has its own musical ideas, the dancer still influences the composition, with the direction and acceleration of her arm captured by the Myo armband in real time. THM always bases its work on the current state of the dancer, keeping the music in tune with the dance.

Whenever a new note comes, the system first examines whether the dancer's arm is pointing higher or lower than its direction at the last note, and accordingly looks for relatively higher or lower notes in its style reference. In this way, the dancer influences the melody with her movement. Meanwhile, she need not be overstressed, since the responsibility for deciding the exact notes rests on the shoulders of THM. So the dancer can move freely, and feel the melody flow in accordance with her movements.
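A sketch of that decision, under the assumption that notes are represented as MIDI numbers and the Markov followers are available as a candidate list (all names are my own):

```cpp
#include <vector>

// Bias the note choice by the arm's vertical movement: keep only Markov
// candidates above (or below) the last played note, falling back to all
// candidates if the filter empties the set.
std::vector<int> biasCandidates(const std::vector<int>& candidateNotes,
                                int lastPlayedMidi, bool armRising) {
    std::vector<int> filtered;
    for (int midi : candidateNotes) {
        if (armRising ? midi > lastPlayedMidi : midi < lastPlayedMidi)
            filtered.push_back(midi);
    }
    return filtered.empty() ? candidateNotes : filtered;
}
```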

The relation between arm direction and note progression is most perceivable when the music has a slow tempo. When the music goes faster, the link becomes harder to perceive. Furthermore, this is still a low-level mapping which cannot represent higher-level states of the dancer, such as emotion. To improve its intelligence, THM introduces tempo alteration in accordance with the intensity of the dancer's movements. In some parts of the music, the system examines the acceleration of the dancer's arm, and generates fast (e.g. eighth) or slow (e.g. quarter) notes according to the reading. The acceleration indicates the speed and complexity of the dancer's movements, and is therefore a good representation of her emotion. By mapping it to the intensity of the music, THM receives a higher level of influence from the dancer.
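The tempo decision can be as simple as a threshold on a movement-intensity value derived from the accelerometer (the threshold and note lengths here are illustrative, not THM's actual tuning):

```cpp
#include <cmath>

// accelMagnitude is the SRSS of the accelerometer reading, in g;
// subtracting 1 removes the gravity baseline measured at rest.
float intensity = std::fabs(accelMagnitude - 1.0f);

// Calm movement -> quarter notes (1 beat); energetic movement ->
// eighth notes (0.5 beat).
float noteLengthInBeats = (intensity > 0.5f) ? 0.5f : 1.0f;
```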

In Pursuit of Rhythm

The rhythm of a time-based artwork is a complex notion, and THM seeks to achieve some measure of it through the organization of musical structure. Besides the above-mentioned tempo alteration, several other strategies are employed in this work.

Repetition and variation of previously generated music occur on the scale of a bar. Although the work of THM is not melodic enough for the audience to memorize a long segment of it, the reoccurrence of a bar seconds after its first appearance is readily recognizable. When the audience realize that the music is repeating itself (and that the system has a memory of its previous work), they are more willing to enjoy the piece as a planned work rather than as totally random hits on the keyboard.

The musical form of THM was also carefully planned. In order to build up the mood of the work effectively, the music has a predefined gradually developing structure:

The music starts slowly with whole notes whose pitches proceed in tune with the dancer's movements. It speeds up over time, gradually switching through half, quarter, eighth and sixteenth notes. When different note lengths coexist in one bar, the tempo is altered according to the dancer's movements. When a bar has fixed-length notes fast enough to form a sub-melody, repetition and variation are employed to enhance the rhythm. Chords in a lower octave are gradually introduced to further enrich the sound. After eight bars of sixteenth notes, the fastest part of the piece, the music slows down and finally ends with a whole note on A4, the key note of the piece.

System Architecture

This article mainly covers the composition logic of the THM system. The overall architecture of the system is shown in the graph above. Before the composer can make use of the dancer's movement data, the data has to be captured by the Myo armband and pre-processed by the corresponding infrastructure, which was discussed in The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability. After the notes are composed, they are sent to the sound engine, implemented in Ableton Live with Max for Live, and to the visual engine, implemented in C++ on top of openFrameworks, to produce sound and the corresponding visuals.

The concept behind THM was further discussed in The Humanistic Movement: Proposal.

Endnotes

  1. The order of the Markov chains decides the extent to which the master's note choices are simulated. With order zero, the system chooses notes based on the master's overall distribution of note frequencies, with no knowledge of its own previous composition. The higher the order, the more previously composed notes the system looks back at. Second-order Markov chains can already support a reasonably accurate simulation of the master's style, and are simple to implement.

The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability

The first step of the THM project is the preparation of infrastructure—in order to base the generative art on bodily data, we need to get the data; in order to implement the core logic and sound module on separate platforms, we need to enable communication between the two.

After building this infrastructure through the first two prototypes, the architecture of the THM system is fixed as follows:

  • It senses bodily data with two Myo armbands.
  • The data is then gathered and analyzed by the core logic, implemented in C++ on top of openFrameworks.
  • The core logic composes the music and sends directions to the sound engine, implemented in Max/MSP, for audio output.
  • With the foundation of openFrameworks, the core logic can also compose and render visuals by itself if needed.

Bodily Data Gathering

The screenshot above demonstrates all the bodily data that can be fetched with the Myo SDK. Multiple armbands can be connected to the system at the same time, which enables data capture from both arms.

Two types of bodily data are gathered. The majority is spatial data, which contains

  • orientation data,
  • accelerometer output, and
  • gyroscope output.

Indicating the direction of the forearm, the orientation data is calculated by the Myo SDK from the accelerometer and gyroscope data, and is originally provided in the form of a quaternion. The THM core logic then translates it into the equivalent Euler angles, which directly correspond to the

  • pitch (vertical angle),
  • yaw (horizontal angle), and
  • roll (rotational angle)

of the arm. The Euler angles are represented both numerically and graphically in the screenshot.

In my experience, the pitch data is very reliable. Using the horizontal plane as its origin, the reading ranges between -π/2 (arm towards the ground) and π/2 (arm towards the sky), and is unrelated to the frontal direction of the body or to whether the arm points to the front or the back. After using the armband in a drama performance, I was surprised by how effectively the pitch data indicates the emotional state of the performer. It might become one of the most exploited inputs in THM as well.

The reference coordinate system of the yaw and roll data is determined by the position of the armband at the moment the connection is established. In consequence, when the user turns his or her body, the readings shift. Another issue with these data is that the reference coordinate tends to drift over time, which further affects the usefulness of the readings. These data range from -π to π (representing a whole circle).

The raw data from the accelerometer and gyroscope can also be accessed via the Myo SDK. They measure the linear acceleration and angular velocity of the armband, in g (standard gravity) and °/s (degrees per second) respectively. By calculating the SRSS (square root of the sum of the squares) of its components, we get the magnitude of the linear acceleration. When the armband is held still, this number stays near 1, representing the influence of gravity. When the armband is in free fall, the number approaches 0. It is apparent that the acceleration of the arm contains rich rhythmic information. It is therefore promising to take advantage of the accelerometer and gyroscope output.

Another type of bodily data that can be acquired via the Myo SDK is gesture data. As indicated in the lower right part of the screenshot, when the armband is worn on the arm and the sync gesture is performed and correctly recognized, the SDK provides additional information, including

  • which arm the armband is on,
  • the direction of the x-axis of the armband, and most importantly,
  • the gesture of the hand, which in turn includes
    • rest
    • fist
    • wave in
    • wave out
    • fingers spread
    • thumb to pinky

While it sounds promising, the gesture data is actually hard to make use of in this project. One reason is that it requires synchronization every time the armband is put on, which is inconvenient and sometimes frustrating, since it is not always easy to get the sync gesture recognized by the Myo SDK. The essential reason, however, is that the gesture data is calculated from the electrical activity measured on the skin of the arm, which is a side effect of the muscle movement itself. Therefore, the recognized hand gesture cannot be perfectly accurate, and is sensitive to external influences such as tight clothes on the upper arm, or extreme arm positions. Based on these considerations, I currently have little intention of using the gesture data in THM.

Cross-application Interoperability

Because the core logic and the sound engine of THM are implemented as separate applications based on different platforms (openFrameworks and Max/MSP, respectively), a communication mechanism has to be built between them. The Open Sound Control (OSC) protocol nicely serves this purpose, enabling the core logic to send data and commands to the sound engine in the form of UDP datagrams. In the core logic, the openFrameworks addon ofxOsc is used to form and send OSC messages. In the sound engine, the udpreceive object receives the messages, and the route object then categorizes the data and commands, as demonstrated in the screenshot above.
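On the core-logic side, sending a message with ofxOsc takes only a few lines. This sketch uses a made-up address pattern and port; on the Max/MSP side, [udpreceive 12345] would pick the messages up:

```cpp
#include "ofxOsc.h"

ofxOscSender sender;
sender.setup("localhost", 12345);  // the sound engine runs on the same machine

ofxOscMessage m;
m.setAddress("/thm/note");         // dispatched by [route] in the Max patch
m.addIntArg(69);                   // e.g. a MIDI note number (A4)
m.addFloatArg(0.8f);               // e.g. a velocity or amplitude value
sender.sendMessage(m);
```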

The sound engine shown in the screenshot is only a demonstration of the interoperability between it and the core logic. It simply feeds the bodily data into two groups of oscillators to generate sound, with the frequencies related to the arm pitches over the past few seconds, and the amplitudes to the arm yaws. Together with the core logic, these primitive prototypes demonstrate the architecture of the THM system described at the beginning of this article.

The next step of the THM project is to interpret the bodily data at higher levels (such as tempo, rhythm and emotion), and to generate more artistic output.

Frankenstein's Frankenstein

https://vimeo.com/122130782

In this production of Frankenstein, the team experimented with several digital technologies in order to bring about an immersive theatrical experience.

Two projectors were used simultaneously. The first projector threw huge ambient images (fire, thunder, etc.) onto the backdrop to set the keynote. This background image, together with the actors, was then captured by a webcam and sent to VDMX, where the projections were controlled. Before the captured image was sent to the second projector, its brightness and saturation were mapped from the forearm directions of the two actors, captured by the Myo armbands. Therefore, not only were there multiple overlaid images of the actors and the stage, but the attributes of this visual environment also responded to the actors' arm movements, a simple but effective indicator of the characters' emotions.

At the end of the play, after killing its creator, Frankenstein's monster began to vomit, indicating that it was about to give birth to its next generation—an echo of the circular wording of the play's title.

(Project team: Christopher DAMEN, Mark PUNCHINSKY, Stephanie BEATTIE, SHI Weili, Michael GLEN, Kieun KIM, LIU Jiaqi)

The Humanistic Movement: Proposal

I’m passionate about body movements and their interaction with space. For the final project of my Major Studio 1, I propose The Humanistic Movement (THM), an 8-week exploration of generative art from bodily data.

A Humanistic View on Generative Art

Perhaps the broadest definition of generative art goes like this:

Generative art is art created with the use of an autonomous system which generates data or interprets data input, and maps it to human-perceivable scales of visual, audio and/or other sensory data as output.

This kind of definition is open-minded as well as meaningless. Perceivable does not equal affective. Merely mapping data to the human scale doesn’t guarantee good art.

Instead, artists should look for a humanistic scale of output in order to infuse their generative work with rhythm and verve, and therefore provoke emotional and reflective reaction from the audience.

The Model: Generating Art from Body Movements

The humanistic scale itself, however, is a mystery. Not only new media artists, but also traditional artists in all artistic domains, have been striving to touch the human heart. Fortunately, in the making of generative art we have a shortcut—to wire in human beings themselves as the data source, generating art from humans for humans.

On the other hand, if the generative system merely translates its input mechanically, making a one-to-one mapping from some sensory data stream to a perceivable data set, the result will be by no means exciting from the viewpoint of generative art—the system has no genius for art. The best possible outcome is a new kind of “instrument”, which places high demands on its “player” for the creation of good artwork.

THM will be of higher intelligence. Rather than relying on the data source to generate art, it only uses it as a reference. The way it works can be imagined as a musician collaborating with a dancer on an improvisatory work. Each one is autonomous: the dancer moves her body on her own initiative, just as the musician plays her instrument. They don’t base their every next step exactly on each other’s latest action. Rather, they communicate through higher-level artistic elements such as tempo and mood, exchanging inspiration during the performance and seeking harmony in their collaborative artwork.

THM, as a computational generative art system, will function as the above-mentioned improvising musician. It captures data from body movements and generates music and/or visuals in tune with the input, in an attempt to achieve the level of artistry that usually belongs to human masters.

If fruitful, THM is going to be my long-term research and making theme, and the outcome will be a systematic approach to humanistic generative music and visuals. For this Major Studio project, after several exploratory prototypes, the final outcome will be one art piece demonstrating the concept.

Data-gathering Technologies

In order to capture data from body movements, sensing technologies need to be considered and evaluated. THM does not mean to hinder its human collaborator. Rather, it would like her to move as freely as possible; it would like her to dance.

Several data-gathering technologies have been considered:

1. Myo (Preferred)

Myo is a gesture control armband developed by Thalmic Labs. With a 9-axis inertial measurement unit (IMU), it calculates spatial data about the orientation and movement of the user's arm. In addition, by measuring the electrical activity generated by the arm muscles, it recognizes hand gestures. It is wireless and lightweight, and not a big hindrance to body movements. Two armbands can be paired with the same computer, enabling movement capture of both arms.

The biggest issue with Myo is that the data it captures can be inaccurate. Since the skin electricity measured by Myo is only a side effect of muscle movement, the measurement can be interfered with by exterior factors such as muscle-binding clothes (even on the upper arm). Furthermore, when the arm reaches extreme positions, the measured data tends to jump to the opposite extreme. As a loosely coupled generative system, THM does not have strict requirements on data accuracy. Nevertheless, the captured data needs to be pre-processed to reduce unreasonable behaviors of the system.

2. Kinect (Back-up)

Having been developed and sold by Microsoft for years, Kinect is a sophisticated body tracking solution which captures the position of the whole human body through a set of cameras. No sensor needs to be attached to the body. There are few reasons not to try it out if time permits. My only concern for now is that it requires the person to stay in front of the cameras in order to be captured, which limits her movement to some extent.

3. Brainwave-Gathering Technologies (Alternative)

Besides gathering bodily data, an alternative data source is the mind. Brain-computer interfaces have been researched for decades, and various ready-to-use data-gathering technologies, such as EPOC, Muse and MindWave, are on the market. A large part of the data gathered by these devices (EEG, attention/meditation levels, etc.) is at the subconscious level. For our goal of gathering humanistic data input, the brainwave approach is also very promising. I might try this alternative in future steps.

Supportive Projects

For a better outcome of this Major Studio 1 project, I plan to merge the final projects of some other courses into it. This will allow me more time to work on it, as well as resources from these supporting courses.

  1. Theoretical Research on Generative Music/Visuals in Independent Study: Symphony + Technology, to provide theoretical basis for this project.
  2. Final project with Max/MSP in Dynamic Sound and Performance, as the sound-generating engine for THM.
  3. (Speculative) final project with openFrameworks in Creativity and Computation: Lab, if required by the course and/or time permits, as the visual-generating engine for THM.

Schedule

After this proposal, there will be three prototypes and a final presentation of the project. The date and expected outcome of each phase are listed below:

  • Oct. 27/29 Prototype 1: getting data from body movement
  • Nov. 03/05 Prototype 2: generating sound/visuals
  • Nov. 17/19 Prototype 3: art directions; finalizing design
  • Dec. 08/10 Final Presentation

Dragon Nest · 龙之谷

https://www.youtube.com/watch?v=bxEIqhGDUcY

Excerpt, performed in Sino-Japanese Butoh Festival 2014

Date: April 27th, 2014
Venue: BeijingDance / LDTX

Dance Group: White Fox Butoh
Choreography: KATSURA Kan
Dancers: KATSURA Kan, SHI Weili, ZHAO Zhiyong, LIU Chao, LIU Wei