The first step of the THM project is the preparation of its infrastructure: to base the generative art on bodily data, we need a way to capture that data; and since the core logic and the sound module are implemented on separate platforms, we need to enable communication between the two.

After building this infrastructure over the first two prototype iterations, the architecture of the THM system is fixed as follows:

  • It senses bodily data with two Myo armbands.
  • The data is then gathered and analyzed by the core logic, implemented in C++ on top of openFrameworks.
  • The core logic composes the music, and sends directions to the sound engine implemented in Max/MSP for audio output.
  • Thanks to its openFrameworks foundation, the core logic can also compose and render visuals itself if needed.
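
As a rough illustration of this flow, the sketch below shows how one openFrameworks update cycle could pull fresh data from the Myo SDK, run it through the (omitted) analysis, and forward a direction to Max/MSP with ofxOsc. It is a minimal sketch, not the actual THM code; identifiers such as "org.example.thm", the OSC address, and the port number are placeholders.

    // Minimal sketch of the THM data flow, assuming the official Myo SDK
    // (myo::Hub / myo::DeviceListener) and the ofxOsc addon. Not the actual
    // THM code; the application identifier, address and port are placeholders.
    #include "ofMain.h"
    #include "ofxOsc.h"
    #include <myo/myo.hpp>

    class Collector : public myo::DeviceListener {
    public:
        myo::Quaternion<float> orientation;      // latest forearm orientation
        void onOrientationData(myo::Myo*, uint64_t,
                               const myo::Quaternion<float>& q) override {
            orientation = q;
        }
    };

    class ofApp : public ofBaseApp {
        myo::Hub hub{"org.example.thm"};
        Collector collector;
        ofxOscSender sender;
    public:
        void setup() {
            hub.addListener(&collector);         // events from all paired armbands
            sender.setup("localhost", 12345);    // the Max/MSP patch listens here
        }
        void update() {
            hub.run(1000 / 60);                  // pump Myo events for one frame
            // ...analyze collector.orientation and compose, then send directions:
            ofxOscMessage m;
            m.setAddress("/thm/note");           // illustrative OSC address
            m.addFloatArg(440.0f);
            sender.sendMessage(m, false);
        }
    };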

Bodily Data Gathering

The screenshot above demonstrates all the bodily data that can be fetched with the Myo SDK. Multiple armbands can be connected to the system at the same time, which enables data capture from both arms.

Two types of bodily data are gathered. Most of it is spatial data, which comprises

  • orientation data,
  • accelerometer output, and
  • gyroscope output.

The orientation data indicates the direction of the forearm. It is calculated by the Myo SDK from the accelerometer and gyroscope data, and is provided in the form of a quaternion. The THM core logic then converts it into the equivalent Euler angles, which directly correspond to the

  • pitch (vertical angle),
  • yaw (horizontal angle), and
  • roll (rotational angle)

of the arm. The Euler angles are represented both numerically and graphically in the screenshot.
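
For reference, a sketch of the conversion is given below. It follows the standard quaternion-to-Euler formulas also used in the Myo SDK's own example code; the exact implementation in the THM core logic may differ slightly.

    // Convert a Myo orientation quaternion into Euler angles (radians).
    // The clamping guards asin() against floating-point values that fall
    // slightly outside [-1, 1].
    #include <algorithm>
    #include <cmath>
    #include <myo/myo.hpp>

    struct EulerAngles { float pitch, yaw, roll; };

    EulerAngles toEuler(const myo::Quaternion<float>& q) {
        EulerAngles e;
        e.pitch = std::asin(std::max(-1.0f, std::min(1.0f,
                      2.0f * (q.w() * q.y() - q.z() * q.x()))));
        e.yaw   = std::atan2(2.0f * (q.w() * q.z() + q.x() * q.y()),
                             1.0f - 2.0f * (q.y() * q.y() + q.z() * q.z()));
        e.roll  = std::atan2(2.0f * (q.w() * q.x() + q.y() * q.z()),
                             1.0f - 2.0f * (q.x() * q.x() + q.y() * q.y()));
        return e;   // pitch in [-pi/2, pi/2]; yaw and roll in [-pi, pi]
    }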

In my experience, the pitch data is very reliable. With the horizontal plane as the origin, the value ranges from -π/2 (arm pointing towards the ground) to π/2 (arm pointing towards the sky), and is independent of both the frontal direction of the body and whether the arm points to the front or to the back. After using the armband in a drama performance, I was surprised by how effectively the pitch data indicates the performer's emotional state. It may become one of the most exploited inputs in THM as well.

The reference coordinate system of the yaw and roll data is determined by the position of the armband at the moment the connection is established. Consequently, when the user turns his or her body, the readings shift. Another issue is that the reference coordinate system tends to drift over time, which further limits the usefulness of the readings. Both values range from -π to π (representing a full circle).

The raw data from the accelerometer and gyroscope can also be accessed via the Myo SDK. They measure the linear acceleration and the angular velocity of the armband, in g (standard gravitational acceleration) and °/s (degrees per second) respectively. By calculating the SRSS (square root of the sum of the squares) of the accelerometer components, we get the magnitude of the linear acceleration. When the armband is held still, this number stays near 1, representing the influence of gravity; when the armband is in free fall, it approaches 0. The acceleration of the arm clearly contains rich rhythmic information, so it is promising to take advantage of the accelerometer and gyroscope output.
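
The magnitude computation itself is a one-liner; a minimal sketch, assuming the accelerometer reading arrives as a myo::Vector3<float> as in the Myo SDK:

    // SRSS magnitude of the accelerometer reading, in units of g:
    // roughly 1 when the armband rests still, approaching 0 in free fall.
    #include <cmath>
    #include <myo/myo.hpp>

    float accelMagnitude(const myo::Vector3<float>& a) {
        return std::sqrt(a.x() * a.x() + a.y() * a.y() + a.z() * a.z());
    }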

Another type of bodily data that can be acquired via the Myo SDK is gesture data. As indicated by the lower right part of the screenshot, when the armband is worn on an arm and the sync gesture is performed and correctly recognized, the SDK provides additional information, including

  • which arm the armband is on,
  • the direction of the x-axis of the armband, and most importantly,
  • the gesture of the hand, which in turn can be one of
    • rest
    • fist
    • wave in
    • wave out
    • fingers spread
    • thumb to pinky

While it sounds promising, the gesture data is actually hard to make use of in this project. One reason is that it requires synchronization every time the armband is put on, which is inconvenient and sometimes frustrating, since it is not always easy to get the sync gesture recognized by the Myo SDK. The more fundamental reason, however, is that the gesture data is calculated from the electrical activity measured on the skin of the arm, which is only a side effect of the muscle movement itself. Therefore, the recognized hand gesture cannot be perfectly accurate, and is sensitive to external influences such as tight clothes on the upper arm or extreme arm positions. Based on these considerations, I currently have little intention of making use of the gesture data in THM.

Cross-application Interoperability

Because the core logic and the sound engine of THM are implemented as separate applications based on different platforms (openFrameworks and Max/MSP, respectively), a communication mechanism between them is necessary. The Open Sound Control (OSC) protocol nicely serves this purpose, enabling the core logic to send data and commands to the sound engine in the form of UDP datagrams. In the core logic, the openFrameworks addon ofxOsc is used to form and send OSC messages. In the sound engine, the udpreceive object receives the messages, and the route object then categorizes the data and commands, as demonstrated in the screenshot above.
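
As a concrete sketch of the sending side (the OSC addresses and port below are placeholders, not the actual ones used in THM):

    // Sending one frame of arm data from the core logic with ofxOsc.
    // On the Max/MSP side, [udpreceive 12345] followed by
    // [route /thm/pitch /thm/yaw] would separate the two streams.
    #include "ofxOsc.h"

    void sendArmData(ofxOscSender& sender, float pitch, float yaw) {
        ofxOscMessage pitchMsg;
        pitchMsg.setAddress("/thm/pitch");
        pitchMsg.addFloatArg(pitch);
        sender.sendMessage(pitchMsg, false);

        ofxOscMessage yawMsg;
        yawMsg.setAddress("/thm/yaw");
        yawMsg.addFloatArg(yaw);
        sender.sendMessage(yawMsg, false);
    }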

The sound engine shown in the screenshot is only a demonstration of the interoperability between it and the core logic. It simply feeds the bodily data into two groups of oscillators to generate sound, with the frequencies derived from the arm pitches of the past few seconds and the amplitudes from the arm yaws. Together with the core logic, these primitive prototypes demonstrate the architecture of the THM system described at the beginning of this article.
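
Expressed in C++ for clarity, the mapping is roughly the following; the actual scaling happens inside the Max/MSP patch, and the ranges used there may differ from these illustrative ones.

    #include "ofMain.h"

    // Roughly the mapping the demo patch performs (illustrative ranges only):
    // pitch in [-pi/2, pi/2] -> oscillator frequency, yaw in [-pi, pi] -> amplitude.
    float pitchToFrequency(float pitch) {
        return ofMap(pitch, -HALF_PI, HALF_PI, 110.0f, 880.0f, true);
    }

    float yawToAmplitude(float yaw) {
        return ofMap(yaw, -PI, PI, 0.0f, 1.0f, true);
    }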

The next step of the THM project is to interpret the bodily data at higher levels (such as tempo, rhythm, and emotion), and to generate more artistic output.