Shan Shui in the World · 世间山水

https://vimeo.com/169304961

Chinese version of this article: http://shi-weili.com/shan-shui-in-the-world-chinese/

Shan Shui in the World presents shanshui (山水, landscape) paintings of selected places around the world, generated by a computational process from geography-related information.

This project revisits the ideas implicit in Chinese literati paintings of shan shui: the relationship between urban life and people’s yearning for nature, and between social responsibility and spiritual purity. For an audience living in an urban area, a traditional shanshui painting provides spiritual support through its depiction of natural scenes from elsewhere. With generative technology, however, Shan Shui in the World can represent any place in the world—including the city where the audience is—as a shanshui painting based on geography-related information about the place.

The notion that shan shui can exist right here (though in a generative parallel world) not only underscores the contrast between the artificial world and nature, but also reminds the audience of an alternative approach to spiritual strength: instead of resorting to the shan shui of elsewhere, we may be able to obtain inner peace from the “shan shui” of our present location by looking inward.

The Generative Process

In this first production of Shan Shui in the World, the shan shui of Manhattan, New York is generated from its building information. The generative engine was written in C++ using the creative-coding toolkit openFrameworks. The code that renders the shanshui painting was written in the OpenGL Shading Language as fragment shaders.
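The overall pipeline can be pictured with a minimal openFrameworks sketch. This is an illustrative reconstruction, not the project’s actual code: the shader file name and uniforms are assumptions, and the real engine additionally feeds the mountain outlines derived from building data into the shader.

```cpp
#include "ofMain.h"

// A fragment shader (hypothetical files shaders/shanshui.vert/.frag) paints
// the ink-wash look; the app only prepares a canvas and supplies uniforms.
class ofApp : public ofBaseApp {
    ofShader inkShader;

public:
    void setup() override {
        inkShader.load("shaders/shanshui");  // loads the .vert/.frag pair
    }

    void draw() override {
        inkShader.begin();
        // The real engine would also upload the mountain outlines here,
        // e.g. as a texture or uniform array derived from building data.
        inkShader.setUniform2f("u_resolution", ofGetWidth(), ofGetHeight());
        inkShader.setUniform1f("u_time", ofGetElapsedTimef());
        ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight());
        inkShader.end();
    }
};

int main() {
    ofSetupOpenGL(1920, 1080, OF_WINDOW);
    ofRunApp(new ofApp());
}
```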

Height and area of the buildings in Manhattan, New York plotted according to their location.

Adjacent buildings merged into mountains, indicated by colors.

Outline of the mountains generated based on building information.

Mountains rendered in the style of ink-wash painting.

Mountains rendered in the style of blue-green shan shui.

Scroll-making

The generative shanshui paintings were printed and mounted as traditional Chinese scrolls, then inscribed and sealed by hand.

A partially unfurled handscroll, together with a furled one in a samite box.

Details of a scroll painting.

Two seals and their imprints, together with red ink and a carving knife.

Generative Shanshui Paintings

Scroll of Shan Shui in Manhattan, New York. 2016. Handscroll. Ink on paper. (192 × 12 inch)

Downtown Manhattan, New York, High Distance. 2016. Hanging scroll. Ink on paper. (24 × 55 inch)

Uptown Manhattan, New York, Level Distance. 2016. Hanging scroll. Ink on paper. (24 × 55 inch)

Scroll of Blue-green Shan Shui in Manhattan, New York. 2016. Handscroll. Ink and colors on silk. (178 × 12 inch)

Blue-green Downtown Manhattan, New York, High Distance. 2016. Hanging scroll. Ink and colors on silk. (24 × 55 inch)

Blue-green Uptown Manhattan, New York, Level Distance. 2016. Hanging scroll. Ink and colors on silk. (24 × 55 inch)

Scroll of Blue-green Shan Shui in Baltimore. 2016. Hanging scroll. Ink and colors on silk. (20 × 55 inch)

(Credits: The geographical data used by Shan Shui in the World is from © OpenStreetMap contributors, Who’s On First, Natural Earth, and openstreetmapdata.com through Mapzen.)

Observe the Heart · 观心

https://vimeo.com/160933840

The video above shows a technical prototype for Observe the Heart.

If you ask a Zen master how to meditate, he might answer, "Observe the heart." But the heart is hard even to imagine, let alone observe. Observe the Heart is an artistic attempt to represent the meditator's mental state, generating visuals and sounds from real-time brainwave input. The generative visuals are projected back onto the meditator, transforming the introspective meditation into, in a sense, an observable performance.

There is more to the concept. While third-party audiences can watch and hear one's meditation, the meditator themselves cannot experience the generative content in real time (given that they close their eyes during the meditation, and may even wear earplugs to block the sound). It then becomes questionable who this meditation is for. Moreover, the meditator will inevitably be curious about what their meditation looks and sounds like, and this mental activity will be captured by the brainwave sensor and reflected in the generative output. This could make it even harder for the meditator to truly "observe the heart".

The experience is designed to be installed in a dark room. The meditator sits at the center of the floor, with a projector casting the generative visuals onto them. The audience watches the meditation from above for a better view. In this demonstrative production, a NeuroSky MindWave Mobile EEG headset senses the meditator's brainwaves. An openFrameworks application analyzes the brainwave signal and drives a GLSL fragment shader to render the generative visuals, as well as a Max patch to generate the sound. The generative approaches could be enriched for better output in future productions.
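A rough sketch of this loop follows. The assumptions are mine: here the EEG values arrive as OSC floats on made-up addresses /eeg/attention and /eeg/meditation, whereas the real installation reads the NeuroSky headset directly, and the shader name and uniforms are likewise hypothetical.

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

// Receives attention/meditation values and feeds them to the fragment
// shader that renders the projected visuals.
class MeditationApp : public ofBaseApp {
    ofxOscReceiver receiver;
    ofShader visuals;
    float attention = 0.f;   // hypothetical normalized range 0..1
    float meditation = 0.f;

public:
    void setup() override {
        receiver.setup(9000);                   // hypothetical port
        visuals.load("shaders/observe_heart");  // hypothetical shader files
    }

    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/eeg/attention")  attention  = m.getArgAsFloat(0);
            if (m.getAddress() == "/eeg/meditation") meditation = m.getArgAsFloat(0);
        }
    }

    void draw() override {
        visuals.begin();
        visuals.setUniform1f("u_attention", attention);
        visuals.setUniform1f("u_meditation", meditation);
        ofDrawRectangle(0, 0, ofGetWidth(), ofGetHeight());
        visuals.end();
    }
};

int main() {
    ofSetupOpenGL(1280, 800, OF_FULLSCREEN);
    ofRunApp(new MeditationApp());
}
```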

Impermanent Zen Garden · 无常禅园

https://vimeo.com/190820154

Chinese version of this article: http://shi-weili.com/impermanent-zen-garden-chinese/

Impermanent Zen Garden contrasts the Buddhist idea of impermanence with the principle of mindfulness meditation. Zen practitioners believe that through mindful observation one can gain wisdom. Therefore, Zen gardens were built around the world to provide serene environments for meditators to focus their minds.

However, in this Impermanent Zen Garden, almost every aspect of the environment is constantly changing. Once the meditator focuses their mind on any object in the meditation room or in the garden, they will notice the moving mountains on the doors, the floating clouds on the walls, the smoke blowing across the ceiling, the water permeating the tatami, the water stains coming and going on the garden walls, the wandering glints on the rocks, the traveling ripples on the sand, and the blinking stars in the night sky. No two moments are identical, so it seems impossible to fully observe any of them.

The dilemma is thus handed to the audience, and the decision is theirs to make: are you going to concede that the world is ultimately unknowable, or are you determined to embrace each and every present moment in this impermanent world?

Impermanent Zen Garden is a dynamic environment built with Unity. The ever-changing content is generated in real time by custom shader programs.

The Humanistic Movement

https://vimeo.com/114502459

Tencent Video link (for visitors in China): http://v.qq.com/page/o/i/u/o0304j7xjiu.html

The Humanistic Movement (THM) is a generative music system. It collaborates with a human dancer, and improvises music using the dancer’s body movements as its inspiration source.

THM is not a body-instrument. It hates one-to-one mapping from gesture to sound, which awkwardly limits the dancer’s movements, making her, rather than the system, responsible for the composition. THM wants the dancer to dance, with confidence that the system will take good care of the music.

Master’s Spirit in Markov Chains

And the dancer need not worry that, without her direct control, the system would generate ugly sounds. In fact, THM’s musical style comes from Wolfgang Amadeus Mozart. It has calculated second-order Markov chains¹ of the note progression of the first movement of Mozart’s Piano Sonata No. 11 in A major, so it really has Mozart’s style in mind. For every two-note sequence in this work, THM knows the frequencies of all possible following notes—for example, the distribution of notes that follow the sequence E4 B3.

With this knowledge, when generating a new note, THM looks back at the last two notes it has generated and looks them up in the Markov chains. It can then follow the frequency table of possible following notes, so that it plays in the style of the great master. Because of the randomness built into this process, the music is new in every performance, yet the system maintains a consistent style throughout, just like a real musician with her own personality.
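A minimal sketch of this generate-and-look-up loop follows. The frequency table here is a toy example, not the actual counts extracted from the Mozart sonata.

```cpp
#include <iostream>
#include <map>
#include <random>
#include <string>
#include <utility>

using Context = std::pair<std::string, std::string>;            // last two notes
using Table   = std::map<Context, std::map<std::string, int>>;  // note -> count

// Pick the next note by sampling the frequency table for the current context.
std::string nextNote(const Table& table, const Context& ctx, std::mt19937& rng) {
    const auto& freqs = table.at(ctx);
    int total = 0;
    for (const auto& [note, count] : freqs) total += count;
    std::uniform_int_distribution<int> dist(1, total);
    int r = dist(rng);
    for (const auto& [note, count] : freqs) {
        r -= count;
        if (r <= 0) return note;
    }
    return freqs.begin()->first;  // unreachable
}

int main() {
    // Toy second-order table: e.g. after E4 B3, C#4 occurred 3 times, A3 once.
    Table table = {
        {{"E4", "B3"},  {{"C#4", 3}, {"A3", 1}}},
        {{"B3", "C#4"}, {{"D4", 2}, {"B3", 1}}},
        {{"C#4", "D4"}, {{"E4", 4}}},
        {{"D4", "E4"},  {{"B3", 1}}},
        {{"C#4", "B3"}, {{"A3", 2}}},
        {{"B3", "A3"},  {{"B3", 3}}},
        {{"A3", "B3"},  {{"C#4", 2}}},
    };

    std::mt19937 rng{std::random_device{}()};
    Context ctx{"E4", "B3"};
    for (int i = 0; i < 16; ++i) {
        std::string note = nextNote(table, ctx, rng);
        std::cout << note << ' ';
        ctx = {ctx.second, note};  // slide the two-note context forward
    }
    std::cout << '\n';
}
```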

Movement-Influenced Melody and Tempo

While THM has its own music ideas, the dancer still has influence on the composition, with the direction and acceleration of her arm captured by the Myo armband in real time. THM always bases its work on the current state of the dancer, making the music in tune with the dance.

Whenever a new note comes, the system first examines whether the dancer’s arm is pointing higher or lower than its direction at the last note, and accordingly looks for relatively higher or lower notes in its style reference. In this way, the dancer influences the melody with her movement. Meanwhile, she need not be stressed, since the responsibility of deciding the exact notes rests on the shoulders of THM. So the dancer can move freely and feel that the melody flows in accordance with her movements.
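One simple way to implement this bias is sketched below, with notes as MIDI numbers for brevity; the fallback behavior when no candidate matches the direction is my assumption, not a documented detail of THM.

```cpp
#include <map>

// Keep only Markov candidates on the same side of the last note as the
// dancer's arm movement; fall back to the full table when none match.
std::map<int, int> biasCandidates(const std::map<int, int>& freqs,
                                  int lastNote, bool armRose) {
    std::map<int, int> filtered;
    for (const auto& [note, count] : freqs) {
        if ((note > lastNote) == armRose) filtered[note] = count;
    }
    return filtered.empty() ? freqs : filtered;
}
```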

The relation between arm direction and note progression is most perceivable when the music has a slow tempo; when the music goes faster, the link becomes harder to perceive. Furthermore, this is still a low-level mapping which cannot represent higher-level states of the dancer, such as emotion. To improve its intelligence, THM introduces tempo alteration in accordance with the intensity of the dancer’s movements. In some parts of the music, the system examines the acceleration of the dancer’s arm, and generates fast (e.g. eighth notes) or slow (e.g. quarter notes) notes according to the reading. The acceleration indicates the speed and complexity of the dancer’s movements, and is therefore a good representation of her emotion. By mapping it to the intensity of the music, THM receives a higher level of influence from the dancer.
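A sketch of how such a mapping might look: the 0.5 g threshold and the two note lengths are illustrative tuning choices, not the project’s actual values.

```cpp
#include <cmath>

enum class NoteLength { Quarter, Eighth };

// Choose note length from the magnitude of the arm's acceleration.
NoteLength lengthFromAcceleration(float ax, float ay, float az) {
    // SRSS magnitude of the accelerometer reading, in g.
    float magnitude = std::sqrt(ax * ax + ay * ay + az * az);
    // Near 1 g the arm is almost still (gravity only); far from it,
    // the movement is intense, so generate faster notes.
    return (std::fabs(magnitude - 1.0f) > 0.5f) ? NoteLength::Eighth
                                                : NoteLength::Quarter;
}
```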

In Pursuit of Rhythm

The rhythm of a time-based artwork is a complex notion, and THM seeks to achieve a measure of it through the organization of musical structure. Besides the above-mentioned tempo alteration, several other strategies are employed in this work.

Repetition and variation of previously generated music occur at the bar scale. Although THM’s output is not melodic enough for the audience to memorize a long segment, the recurrence of a bar seconds after its first appearance is readily recognizable. When the audience realizes that the music is repeating itself (and that the system has memory of its previous work), they are more willing to enjoy the piece as a planned work rather than totally random hits on the keyboard.

The musical form of THM was also carefully planned. In order to build up the mood of the work effectively, the music has a predefined gradually developing structure:

The music starts slowly with whole notes whose pitches proceed in tune with the dancer’s movements. It speeds up over time, gradually switching through half, quarter, eighth, and sixteenth notes. When different note lengths coexist in one bar, tempo alteration follows the dancer’s movements. When a bar has fixed-length notes fast enough to form a sub-melody, repetition and variation are employed to enhance the rhythm. Chords in a lower octave are gradually introduced to further enrich the sound. After eight bars of sixteenth notes, the fastest part of the piece, the music slows down and finally ends on a whole note on A4, its key note.

System Architecture

This article mainly covers the composition logic of the THM system. The whole architecture of the system is shown in the graph above. Before the composer can make use of the dancer’s movement data, the data has to be captured by the Myo armband and pre-processed by the corresponding infrastructure, which was discussed in The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability. After the notes are composed, they are sent to the sound engine, implemented in Ableton Live with Max for Live, and to the visual engine, implemented in C++ on openFrameworks, to produce sound and the corresponding visuals.

The concept behind THM was further discussed in The Humanistic Movement: Proposal.

Endnotes

  1. The order of Markov chains determines the extent of the simulation of the master’s note choices. With an order of zero, the system chooses notes based on the master’s overall distribution of note-choice frequencies, with no knowledge of its own previous composition. The higher the order, the more previously composed notes the system looks back at. Second-order Markov chains already support a significantly accurate simulation of the master’s style, and are reasonably simple to implement.

The Humanistic Movement: Bodily Data Gathering and Cross-application Interoperability

The first step of the THM project is the preparation of infrastructure—in order to base the generative art on bodily data, we need to get the data; in order to implement the core logic and the sound module on separate platforms, we need to enable communication between the two.

After building this infrastructure through the iteration of the first two prototypes, the architecture of the THM system is fixed as follows:

  • It senses bodily data with two Myo armbands.
  • The data is then gathered and analyzed by the core logic, implemented in C++ on openFrameworks.
  • The core logic composes the music and sends directions to the sound engine, implemented in Max/MSP, for audio output.
  • Built on openFrameworks, the core logic can also compose and render visuals by itself if needed.

Bodily Data Gathering

The screenshot above demonstrates all the bodily data that can be fetched with the Myo SDK. Multiple armbands can be connected to the system at the same time, which enables data capture from both arms.

Two types of bodily data are gathered. The majority is spatial data, which comprises

  • orientation data,
  • accelerometer output, and
  • gyroscope output.

Indicating the direction of the forearm, the orientation data is calculated by the Myo SDK from accelerometer and gyroscope data, and is originally provided as a quaternion. The THM core logic then translates it into the equivalent Euler angles (see the sketch after this list), which directly correspond to the

  • pitch (vertical angle),
  • yaw (horizontal angle), and
  • roll (rotational angle)

of the arm. The Euler angles are represented both numerically and graphically in the screenshot.
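For reference, the standard quaternion-to-Euler conversion looks like the sketch below. These are the usual aerospace-sequence formulas; the project itself relies on the equivalents provided by the Myo SDK and openFrameworks math types.

```cpp
#include <cmath>

struct Euler { float pitch, yaw, roll; };  // radians

// Convert a unit quaternion (w, x, y, z) to pitch/yaw/roll Euler angles.
Euler toEuler(float w, float x, float y, float z) {
    Euler e;
    e.roll  = std::atan2(2.f * (w * x + y * z), 1.f - 2.f * (x * x + y * y));
    float s = 2.f * (w * y - z * x);
    s = std::fmax(-1.f, std::fmin(1.f, s));  // clamp to avoid NaN at the poles
    e.pitch = std::asin(s);                  // -π/2 (down) to π/2 (up)
    e.yaw   = std::atan2(2.f * (w * z + x * y), 1.f - 2.f * (y * y + z * z));
    return e;
}
```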

In my experience, the pitch data is very reliable. Taking the horizontal plane as the origin, the reading ranges between -π/2 (arm toward the ground) and π/2 (arm toward the sky), and is unrelated to the frontal direction of the body or to whether the arm points to the front or the back. After using the armband in a drama performance, I was surprised by how effectively the pitch data indicates the performer's emotional state. It may become one of the most heavily used inputs in THM as well.

The reference coordinate system of the yaw and roll data is determined by the position of the armband when the connection is established. In consequence, when the user turns his or her body, the readings shift. Another issue is that the reference coordinate system tends to drift over time, which also affects the usefulness of the readings. These data range from -π to π (representing a whole circle).

The raw data from the accelerometer and gyroscope can also be accessed via the Myo SDK. They measure the linear and rotational acceleration of the armband in g (standard gravity) and °/s (degrees per second) respectively. By calculating the SRSS (square root of the sum of the squares) of its components, we get the magnitude of the linear acceleration. When the armband is held still, this number stays near 1, representing the influence of gravity. When the armband is in free fall, the number approaches 0. The acceleration of the arm clearly contains rich rhythmic information, so it is promising to take advantage of the accelerometer and gyroscope output.

Another type of bodily data that can be acquired via the Myo SDK is gesture data. As indicated in the lower-right part of the screenshot, when the armband is worn on the arm and the sync gesture is performed and correctly recognized, the SDK provides additional information, including

  • which arm the armband is on,
  • the direction of the x-axis of the armband, and most importantly,
  • the gesture of the hand, which in turn includes
    • rest
    • fist
    • wave in
    • wave out
    • fingers spread
    • thumb to pinky

While it sounds promising, the gesture data is actually hard to make use of in this project. One reason is that it requires synchronization every time the armband is put on, which is inconvenient and sometimes frustrating, since it is not always easy to get the sync gesture recognized by the Myo SDK. The more essential reason, however, is that the gesture data is calculated from electricity measured on the skin of the arm, which is only a side effect of the muscle movement itself. The calculated hand gesture therefore cannot be perfectly accurate, and is sensitive to external influences such as tight clothes on the upper arm or extreme arm positions. Based on the above considerations, I currently have little intention of using the gesture data in THM.

Cross-application Interoperability

Because the core logic and the sound engine of THM are implemented as separate applications based on different platforms (openFrameworks and Max/MSP, respectively), a communication mechanism between them had to be built. The Open Sound Control (OSC) protocol serves this purpose nicely, enabling the core logic to send data and commands to the sound engine in the form of UDP datagrams. In the core logic, the openFrameworks addon ofxOsc is used to form and send OSC messages. In the sound engine, the udpreceive object receives the messages, and the route object then categorizes the data and commands, as demonstrated in the screenshot above.
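The sending side might look like the following sketch. The address /thm/note, host, and port are illustrative choices of mine; on the Max side, udpreceive would feed a matching route object.

```cpp
#include "ofxOsc.h"

ofxOscSender sender;

void setupOsc() {
    sender.setup("localhost", 7400);  // hypothetical host/port of the Max patch
}

// Send one composed note to the sound engine.
void sendNote(int midiNote, float velocity) {
    ofxOscMessage m;
    m.setAddress("/thm/note");        // dispatched by [route /thm/note] in Max
    m.addIntArg(midiNote);
    m.addFloatArg(velocity);
    sender.sendMessage(m, false);     // false: send as a bare message, not a bundle
}
```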

The sound engine shown in the screenshot is only a demonstration of the interoperability between it and the core logic. It simply feeds the bodily data to two groups of oscillators to generate sound, with the frequencies related to the arm pitches over the past few seconds, and the amplitudes to the arm yaws. Together with the core logic, these primitive prototypes demonstrate the architecture of the THM system described at the beginning of this article.

The next step of the THM project is to interpret the bodily data at higher levels (such as tempo, rhythm, and emotion), and to generate more artistic output.

The Humanistic Movement: Proposal

I’m passionate about body movements and their interaction with space. For the final project of my Major Studio 1, I propose The Humanistic Movement (THM) as an 8-week exploration of generative art from bodily data.

A Humanistic View on Generative Art

Perhaps the broadest definition of generative art goes like this:

Generative art is art created with the use of an autonomous system which generates data or interprets data input, and maps it to human-perceivable scales of visual, audio and/or other sensory data as output.

This kind of definition is as open-minded as it is meaningless. Perceivable is not equal to affective. Merely mapping data to the human scale doesn’t guarantee good art.

Instead, artists should look for a humanistic scale of output in order to infuse their generative work with rhythm and verve, and therefore provoke emotional and reflective reaction from the audience.

The Model: Generating Art from Body Movements

The humanistic scale itself, however, is a mystery. Not only new media artists, but also traditional artists in all artistic domains, have long struggled to touch the human heart. Fortunately, in the making of generative art we have a shortcut—to wire in human beings themselves as the data source, generating art from humans for humans.

On the other hand, if the generative system merely translates its input mechanically, making a one-to-one mapping from some sensory data stream to a perceivable data set, the result will be by no means exciting from the viewpoint of generative art—the system doesn’t have a genius for art. The best possible outcome is a new kind of “instrument”, which places high demands on its “player” for the creation of good artwork.

THM will be of higher intelligence. Rather than relying on the data source to generate art, it only uses it as a reference. The way it works can be imagined as a musician collaborating with a dancer in an improvisatory work. Each one is autonomous: the dancer moves her body on her own initiative, just as the musician plays her instrument. They don’t base their every next step exactly on each other’s latest action. Rather, they communicate through higher-level artistic elements such as tempo and mood, exchanging inspiration during the performance and seeking harmony in their collaborative artwork.

THM, as a computational generative art system, will function as the above-mentioned improvisatory musician. It captures data from body movements and generates music and/or visuals in tune with the input, in an attempt to reach the level of artistry that usually belongs to human masters.

If fruitful, THM will become my long-term research and making theme, and the outcome will be a systematic approach to humanistic generative music and visuals. For this Major Studio project, after several explorative prototypes, the final outcome will be one art piece demonstrating the concept.

Data-gathering Technologies

In order to capture data from body movements, sensing technologies need to be considered and evaluated. THM is not meant to hinder its human collaborator. Rather, it would like her to move as freely as possible; it would like her to dance.

Several data-gathering technologies have been considered:

1. Myo (Preferred)

Myo is a gesture-control armband developed by Thalmic Labs. With a 9-axis inertial measurement unit (IMU), it calculates spatial data about the orientation and movement of the user's arm. In addition, by measuring the electrical activity generated by the arm muscles, it recognizes hand gestures. It is wireless and lightweight, and not a big hindrance to body movements. Two armbands can be paired to the same computer, enabling movement capture of both arms.

The biggest issue with Myo is that the data it captures is inaccurate. Since the skin electricity measured by Myo is only a side effect of muscle movement, the measurement can be interfered with by external factors such as muscle-binding clothes (even on the upper arm). Furthermore, when the arm reaches extreme positions, the measured data tends to jump to the opposite extreme. As a loosely coupled generative system, THM does not have strict requirements on data accuracy. Nevertheless, the captured data needs to be pre-processed to reduce unreasonable behavior of the system.

2. Kinect (Back-up)

Developed and sold by Microsoft for years, Kinect is a sophisticated body-tracking solution which captures the position of the whole human body through a set of cameras. No sensor needs to be attached to the body. There are few reasons not to try it out if time allows. My only concern for now is that it requires the person to stay in front of the cameras in order to be captured, which limits her movement to some extent.

3. Brainwave-Gathering Technologies (Alternative)

Besides gathering bodily data, an alternative data source is the mind. Brain-computer interfaces have been researched for decades, and various ready-to-use data-gathering technologies, such as EPOC, Muse, and MindWave, are being shipped to the market. A large part of the data gathered by these devices (EEG, attention/meditation levels, etc.) operates on a subconscious level. For our goal of gathering humanistic data input, the brainwave approach is also very promising. I might try this alternative in future steps.

Supportive Projects

For a better outcome of this Major Studio 1 project, I plan to merge the final projects of some other courses into it. This will give me more time to work on it, as well as resources from these supportive courses.

  1. Theoretical research on generative music/visuals in Independent Study: Symphony + Technology, to provide a theoretical basis for this project.
  2. The final project with Max/MSP in Dynamic Sound and Performance, to serve as the sound-generating engine for THM.
  3. (Speculative) The final project with openFrameworks in Creativity and Computation: Lab, if required by the course and/or time permits, to serve as the visual-generating engine for THM.

Schedule

After this proposal, there will be three prototypes and a final presentation of the project. The date and expected outcome of each phase are listed below:

  • Oct. 27/29 Prototype 1: getting data from body movement
  • Nov. 03/05 Prototype 2: generating sound/visuals
  • Nov. 17/19 Prototype 3: art directions; finalizing design
  • Dec. 08/10 Final Presentation

New York City Panorama Symphony

https://vimeo.com/122164868

This project enables the audience to listen to New York City's skyline as a piece of polyphonic music. Panning across the panorama, the audience can not only enjoy a spectacular view of the city's skyscrapers, but also feel the rhythm and texture of the buildings by ear—a somewhat exotic, but truly panoramic experience.

In preparation for music generation, a panorama photo of New York City was cleaned up and reduced to 8 levels of grayscale. The processed image was then scanned by a Processing sketch. For every vertical line of pixels, the height of the highest non-white pixel defines the line's base frequency, and the overall darkness of the line defines the amplitude of that base frequency. To enrich the sound, the 6 lowest overtones of the base frequency have their respective amplitudes defined by the amount of each grayscale level in the line, from the darkest to the lightest—this is how the texture of the buildings is represented in the music. Via the Open Sound Control (OSC) protocol, all of these calculated data are sent from the Processing sketch to a Max patch, where the music is generated accordingly.
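The per-column analysis can be sketched as follows, written in C++ for consistency with the rest of this page (the original is a Processing sketch). The frequency mapping and the level-to-overtone assignment are illustrative assumptions.

```cpp
#include <array>
#include <vector>

// gray: row-major 8-level grayscale image, values 0 (black) .. 7 (white).
struct ColumnVoice {
    float baseFreq;                     // from the height of the skyline
    float baseAmp;                      // from the column's overall darkness
    std::array<float, 6> overtoneAmps;  // from the grayscale histogram
};

ColumnVoice analyzeColumn(const std::vector<int>& gray, int w, int h, int x) {
    ColumnVoice v{};
    int top = h;                        // first non-white pixel from the top
    std::array<int, 8> histogram{};
    float darkness = 0.f;
    for (int y = 0; y < h; ++y) {
        int level = gray[y * w + x];
        ++histogram[level];
        if (level < 7 && y < top) top = y;
        darkness += (7 - level) / 7.f;
    }
    float height = float(h - top) / h;   // 0..1 skyline height in this column
    v.baseFreq = 110.f + 880.f * height; // illustrative frequency mapping
    v.baseAmp  = darkness / h;           // average darkness of the column
    // Assumed assignment: darkest levels drive the lowest overtones.
    for (int i = 0; i < 6; ++i)
        v.overtoneAmps[i] = float(histogram[i]) / h;
    return v;
}
```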

(Photo credits: photographed by Jnn13, stitched by LiveChocolate)

You & You

https://vimeo.com/123268029

You & You is an interactive music program which performs a whole song based on 3 seconds of the user's voice input. A one-man chorus is built through repetition and tonal modification.

This project was implemented in Max/MSP. For Mac users, an OS X build can be downloaded and experienced.
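The "tonal modification" can be illustrated with a naive resampling transposition. This is a sketch, not the Max/MSP implementation, and this simple approach also changes the clip's duration, unlike a proper time-preserving pitch shifter.

```cpp
#include <cmath>
#include <vector>

// Resample a recorded voice buffer so it plays back transposed by `semitones`.
std::vector<float> transpose(const std::vector<float>& voice, float semitones) {
    float ratio = std::pow(2.f, semitones / 12.f);  // pitch ratio in 12-TET
    std::vector<float> out;
    out.reserve(size_t(voice.size() / ratio) + 1);
    for (float pos = 0.f; pos + 1.f < float(voice.size()); pos += ratio) {
        size_t i = size_t(pos);
        float frac = pos - float(i);
        // Linear interpolation between neighboring samples.
        out.push_back(voice[i] * (1.f - frac) + voice[i + 1] * frac);
    }
    return out;
}
```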