Observe the Heart · 观心


The video above shows a technical prototype for Observe the Heart.

If you ask a Zen master how to meditate, he might answer, "Observe the heart." But the heart is hard even to imagine, let alone to observe. Observe the Heart is an artistic attempt to represent the meditator's mental state, generating visuals and sounds from real-time brainwave input. The generative visuals are projected back onto the meditator, transforming the introspective act of meditation into, in a sense, an observable performance.

There is more to tell about the concept. While third-party audiences can watch and hear one's meditation, the meditator cannot experience the generative content in real time (given that they close their eyes during the meditation, and may even wear earplugs to block the sound). It then becomes questionable who this meditation is for. Moreover, the meditator will nonetheless be curious about how their meditation looks and sounds, and this mental activity will be captured by the brainwave sensor and reflected in the generative output. This could make it even harder for the meditator to really "observe the heart".

The experience is designed to be installed in a dark room. The meditator sits at the center of the floor, with a projector casting the generative visuals onto them. The audience watches the meditation from above for a better view. In this demonstrative production, a NeuroSky MindWave Mobile EEG headset senses the meditator's brainwaves. An openFrameworks application analyzes the brainwave signal, driving both a GLSL fragment shader that renders the generative visuals and a Max patch that generates the sound. The generative approaches could be enriched for better output in future productions.
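As a rough illustration of the signal path, the sketch below smooths a NeuroSky-style "meditation" value (the headset reports attention and meditation on a 0–100 scale) and maps it to a 0–1 parameter such as a shader uniform. The function names and smoothing factor are assumptions for illustration; the actual application is written in openFrameworks.

```python
# Illustrative sketch, not the installation's actual code.

def smooth(prev, sample, alpha=0.1):
    """Exponential moving average to tame the noisy ~1 Hz eSense stream."""
    return prev + alpha * (sample - prev)

def to_uniform(meditation):
    """Map a 0-100 eSense value to a 0.0-1.0 shader parameter, clamped."""
    return max(0.0, min(1.0, meditation / 100.0))

value = 0.0
for sample in [40, 55, 70, 80]:  # simulated headset readings
    value = smooth(value, sample)
print(round(to_uniform(value), 3))  # → 0.217
```

Smoothing matters here because the raw eSense values jump abruptly, and an abrupt visual change would itself distract the meditator.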

Shan Shui on the Empire State Building (Proposal)


Shan Shui on the Empire State Building is a proposal to present Chinese shanshui paintings on the facades of skyscrapers such as the Empire State Building using projection mapping. In the demonstrative mockup video above, the projection is on a large print of a photo of the Empire State Building and other buildings in New York. The projected shanshui painting is by renowned contemporary painter Li Keran (李可染, 1907–1989).

Shanshui painting depicts natural scenes in a semi-abstract, purified way. Behind this deeply spiritual art form lie the naturalistic ideology of the Chinese and their thinking about the relationship between urban life and people's yearning for nature. Imposing depictions of natural scenes directly onto skyscrapers—the symbol of urban life and the artificial world—creates a dramatic contrast between the two. Shan Shui on the Empire State Building is not only a spectacle to watch, but also provokes the audience's awareness and consideration of this relationship.

(Shan Shui on the Empire State Building is a collaboration between SHI Weili and Lisa MARKS. Shanshui painting credit: LI Keran. Photo credit: Daniel SCHWEN, Empire State Building as seen from Top of the Rock. Music credit: QIAO Shan, Flowing Water.)

Oculi: A Show of Alternative Spaces


腾讯视频链接 (video link for visitors in China): http://v.qq.com/page/u/h/5/u0304xxv3h5.html

Long after our farewells to the forest, the savannah and the cave, Homo sapiens has grown accustomed to living in human-scaled, cube-shaped white boxes. Having interacted with this kind of space since birth, we take our boxed life for granted. We tend to give little consideration to other forms of space, and even neglect our consciousness of space most of the time.

Oculi aims to awaken the audience's sense of space. Presented within an apartment, it consists of six installations, each named an oculus. Through projection mapping, each oculus brings the image of a poetic alternative space into the exhibition venue, superimposing it onto the physical space of the apartment. These alterations of shape, scale or location, in contrast to the otherwise ordinary living environment, evoke the audience's nostalgia for their forgotten sensation and imagination of space.

Little Universe · 小宇宙


腾讯视频链接 (video link for visitors in China): http://v.qq.com/page/y/9/n/y0304z44l9n.html

There they are, poetic objects in the Little Universe, revolving around themselves, whispering to each other, regardless of whether or not they are visited by you—their exotic guest.

Colorful the objects are, and they are not mindless clouds of stardust. Each object has its own temperament—a tendency to get close to someone, and to run away from another. In this way, the Little Universe reveals its rich dynamics.


This interactive installation is designed to bring about a serene and meditative atmosphere. It has an intimate physical form, inviting the audience to interact with it, to touch and move it with their hands. Without interfering with the dialogue between the objects, the audience can still enjoy the poetic scene by watching, listening and contemplating their behaviors and relationships.

While the audience is not explicitly informed of the logic behind the behaviors, the five-element theory (五行) of Chinese philosophy inspires the distinct characteristics of, and relationships between, the objects. The colors of the five objects represent the five elements—metal (金), wood (木), water (水), fire (火), and earth (土). There are two cyclical orders within the five-element system: in the order of mutual generation (相生), one element gives birth to another, while in the order of mutual overcoming (相克), one element suppresses another. Represented by the attracting and repelling forces between the particle systems surrounding the objects, these relationships enrich the contemplative intricacy of the Little Universe.
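The two cycles can be stated compactly: in the traditional ordering wood → fire → earth → metal → water → wood, each element generates its successor and overcomes the element two steps ahead. The sketch below maps generation to attraction and overcoming to repulsion; treating these as signed force magnitudes of ±1 is an assumption for illustration, not the installation's actual tuning.

```python
# Illustrative sketch, not the installation's actual code.

ELEMENTS = ["wood", "fire", "earth", "metal", "water"]

# Mutual generation (相生): each element gives birth to the next in the cycle.
GENERATES = {e: ELEMENTS[(i + 1) % 5] for i, e in enumerate(ELEMENTS)}
# Mutual overcoming (相克): each element suppresses the one two steps ahead.
OVERCOMES = {e: ELEMENTS[(i + 2) % 5] for i, e in enumerate(ELEMENTS)}

def force(a, b):
    """Signed force object a exerts toward object b:
    positive attracts, negative repels, zero is neutral."""
    if GENERATES[a] == b:
        return 1.0
    if OVERCOMES[a] == b:
        return -1.0
    return 0.0

print(force("wood", "fire"))   # wood generates fire → 1.0 (attraction)
print(force("water", "fire"))  # water overcomes fire → -1.0 (repulsion)
```

In the installation these signs would scale per-pair forces applied between the particle systems surrounding each pair of objects.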


The Little Universe installation is built to integrate four components:

  1. poetic objects with which the users interact,
  2. projection system,
  3. sound generation system, and
  4. position detection/tracking system.

The poetic objects are made with foam balls implanted with microcontrollers and infrared LEDs facing upwards, atop a flat surface as the cosmic background. The infrared light from the objects is captured by a Microsoft Kinect motion sensor, and analyzed by an openFrameworks application in order to locate the objects. A particle system is projected onto the objects, and based on the position of the objects and their relationships, the behaviors of the particle systems are updated and rendered by the openFrameworks application. The image is then processed by MadMapper and sent to the projector, which has been calibrated so that the projection will be aligned with the physical objects. Control messages are sent from the openFrameworks application to Ableton Live via the Open Sound Control (OSC) protocol, generating the ambient music and sound effects.
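The OSC control messages mentioned above are simple binary packets. The sketch below hand-encodes a minimal OSC 1.0 message with float arguments, just to show what travels between the applications; the address "/obj/pos" is a hypothetical example, and in practice openFrameworks' ofxOsc addon (or Ableton's OSC tooling) handles the encoding.

```python
import struct

# Minimal OSC 1.0 message encoder (illustrative only).

def osc_string(s):
    """OSC strings are null-terminated and zero-padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address, *args):
    """Encode an address pattern and float arguments into one OSC packet."""
    type_tags = "," + "f" * len(args)                        # e.g. ",ff"
    payload = b"".join(struct.pack(">f", a) for a in args)   # big-endian floats
    return osc_string(address) + osc_string(type_tags) + payload

packet = osc_message("/obj/pos", 0.5, 0.25)
print(len(packet))  # 12-byte address + 4-byte tag string + 8 bytes of floats = 24
```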

The position detection system precisely locates the objects on the table and tracks them throughout the installation. The projection on each object behaves with distinct characteristics depending on the viewer's interaction, which requires the system to distinguish the objects from one another. Thus, the application always has to know the exact location of each individual object.

The tracking system combines computer vision with infrared LEDs. Using blob detection on the Kinect's infrared stream, the application precisely locates the objects by their implanted infrared LEDs. A signal-processing procedure enables the system to distinguish the objects within 300 ms of detection.

The Kinect's infrared stream runs at up to 30 FPS, which translates to a 30 Hz sampling rate; in theory, we can therefore reconstruct any input signal with a bandwidth below 15 Hz. The infrared LEDs on the objects blink constantly, generating patterns that the vision system perceives as square-wave signals. All the LEDs generate periodic square waves with a period of one third of a second, but each LED has a distinct duty cycle. The application tells the objects apart by measuring the duty cycles of the LEDs across the Kinect's data frames. After detection, the application starts tracking the objects; in the event of tracking loss or confusion between objects, precise identification is performed again through the same duty-cycle analysis.
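At 30 FPS, one 1/3-second LED period spans exactly 10 frames, so the duty cycle is simply the fraction of frames in which a blob appears lit. The sketch below shows the identification step on simulated frame data; the specific duty-cycle values assigned to the five objects are assumptions for illustration, not the installation's actual calibration.

```python
# Illustrative sketch of the duty-cycle identification described above.

FRAMES_PER_PERIOD = 10  # 30 FPS * (1/3 s) per blink period

# Assumed nominal duty cycles for the five element-objects (made-up values).
DUTY_CYCLES = {0.2: "metal", 0.4: "wood", 0.5: "water", 0.6: "fire", 0.8: "earth"}

def identify(frames):
    """frames: 0/1 samples (LED off/on in each frame) covering whole periods.
    Returns the object whose nominal duty cycle is closest to the measurement."""
    duty = sum(frames) / len(frames)
    nominal = min(DUTY_CYCLES, key=lambda d: abs(d - duty))
    return DUTY_CYCLES[nominal]

print(identify([1, 1, 1, 1, 0, 0, 0, 0, 0, 0]))  # 40% on → "wood"
```

Choosing the nearest nominal value (rather than an exact match) tolerates the occasional dropped or misread frame, which is why detection can settle within a single period, i.e. well under the stated 300 ms.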

(Little Universe is a collaboration among SHI Weili, Miri PARK, and Saman TEHRANI, with the guidance of Marko TANDEFELT.)

Frankenstein's Frankenstein


In this production of Frankenstein, the team experimented with several digital technologies in order to bring about an immersive theatrical experience.

Two projectors were used simultaneously. The first projector threw huge ambient images (fire, thunder, etc.) onto the backdrop to set the tone. This background image, together with the actors, was then captured by a webcam and sent to VDMX, where the projections were controlled. Before the captured image was sent to the second projector, its brightness and saturation were mapped from the forearm orientations of the two actors, captured by Myo armbands. Therefore, not only were there multiple overlaid images of the actors and the stage, but the attributes of this visual environment were also responsive to the actors' arm movements, a simple but effective indicator of the characters' emotions.
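The mapping from arm orientation to an image attribute can be as simple as a clamped linear map. The sketch below maps a forearm pitch angle to a 0–1 brightness value; the function name, the pitch range, and the linear shape of the curve are assumptions for illustration, not a reconstruction of the production's actual VDMX patch.

```python
import math

# Illustrative sketch: forearm pitch (radians; -pi/2 = pointing down,
# +pi/2 = pointing up) mapped linearly to a 0.0-1.0 brightness value.

def pitch_to_brightness(pitch):
    """Linear map from [-pi/2, pi/2] to [0, 1], clamped at both ends."""
    t = (pitch + math.pi / 2) / math.pi
    return max(0.0, min(1.0, t))

print(pitch_to_brightness(0.0))  # arm held level → 0.5
```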

At the end of the play, after killing its creator, the monster of Frankenstein began to vomit, indicating that it was about to give birth to its next generation—an echo of the circular wording of the play's title.

(Project team: Christopher DAMEN, Mark PUNCHINSKY, Stephanie BEATTIE, SHI Weili, Michael GLEN, Kieun KIM, LIU Jiaqi)