
CANGJIE

 

 

仓颉  

Video Documentation

1' 43'' Teaser 

AWARD

Juried Selection 

The 23rd Japan Media Arts Festival

CREDIT

Art Concept Direction 

Virtual Reality Worldbuilding 

Visualization & Sound Design

Creative Coding

+ Weidi Zhang

Intelligent System Design

Creative Coding

+ Donghao Ren 

 

Cangjie 

 

Humans and machines are in constant conversation. Humans start the dialogue with programming languages that are compiled into binary digits machines can interpret. Intelligent machines today, however, are not only observers of the world; they also make their own decisions. If AI imitates human beings and creates a symbolic system for communication based on its own understanding of the universe, then begins to actively interact with us, how will this recontextualize and redefine our coexistence in this intertwined reality?

This VR project offers an immersive exploration of a semantic human-machine reality generated in real time by an intelligent system that perceives the real world through a camera located in the exhibition space. It is inspired by Cangjie, the legendary Chinese historian (c. 2650 BCE) said to have invented Chinese characters based on the characteristics of everything on earth. We trained a neural network, which we call Cangjie, to learn the construction and principles of all Chinese characters. It perceives its surroundings and transforms them into a collage of unique symbols built from Chinese strokes. The symbols produced through the lens of Cangjie, entangled with the imagery captured by the camera, are visualized algorithmically as abstract pixelated semiotics, continuously evolving and composing an ever-changing poetic virtual reality. Cangjie is not only a conceptual response to the tension and fragility in the coexistence of humans and machines, but also an artistic imagination of our future language in this era of artificial intelligence.

The Early Chinese Characters 

Designed Based On The Characteristics Of Everything On Earth

The Legendary Chinese Figure, Cangjie, Observes His Surroundings and Creates Symbols For Everything

Concept

 

System Map

 

Methodology

1. Machine Intelligence & Experimental Visualization

1.1 Converting Images to Chinese Strokes

We used unsupervised learning techniques to model Chinese character strokes. The learned model is then used to create novel characters based on the input images.

1.2 Training and Using Bidirectional Generative Adversarial Networks (BiGAN)

We trained a neural network (named Cangjie) on vector stroke data of over 9,000 Chinese characters using a BiGAN. After successful training, the discriminator and the encoder/generator reach a Nash equilibrium, and the network learns a low-dimensional latent representation of these images (a code sketch of the three components follows the list below):

  • Given an image, the encoder network can produce its latent representation.

  • Given the latent representation, the generator network can reconstruct the image.

  • The latent vectors follow a normal distribution.
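
Below is a minimal PyTorch sketch of the three BiGAN components. The layer sizes, the latent dimensionality (LATENT_DIM), and the rasterized image resolution are assumptions for illustration; the actual architecture and training hyperparameters of Cangjie are not documented here.

```python
import torch
import torch.nn as nn

LATENT_DIM = 64        # assumed latent size; the real dimensionality is not stated
IMG_DIM = 64 * 64      # assumed resolution of the rasterized stroke images

class Encoder(nn.Module):
    """E(x): maps a stroke image to its latent representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """G(z): reconstructs a stroke image from a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """D(x, z): scores joint pairs, real (x, E(x)) versus fake (G(z), z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + LATENT_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x.flatten(1), z], dim=1))
```

At equilibrium the encoder and generator become approximate inverses of each other, which is what allows a camera image to be encoded and re-expressed as strokes.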

Intelligent System Demo

Novel symbols constructed from Chinese strokes are generated in real time by reconstructing the live camera stream with the trained neural network (Cangjie).
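
A hedged sketch of that real-time loop, assuming the Encoder and Generator classes from the previous sketch and OpenCV for camera capture and preprocessing (both assumptions, not the project's actual pipeline):

```python
import cv2
import numpy as np
import torch

def generate_symbol(frame_bgr, encoder, generator, size=64):
    """One step of the loop: camera frame -> latent code -> novel stroke image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size)).astype(np.float32) / 255.0
    x = torch.from_numpy(small).unsqueeze(0)           # (1, size, size)
    with torch.no_grad():
        z = encoder(x)                                 # what the camera sees, as a latent code
        symbol = generator(z).reshape(size, size)      # a new symbol written in Chinese strokes
    return symbol.numpy()
```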

1.3 Experimental Visualization Using Neural Network Generated Image Data

The image data is first manipulated with image processing techniques including image differencing, alpha compositing, filter design, and algorithmic transformation. We then use the OpenGL Shading Language (GLSL) to relocate pixels from the real-world texture to positions determined by the image generated by Cangjie, with the RGBA channels of the live stream determining the movements. The goal is an ink flow that continuously writes the new symbols Cangjie generates in real time, using the live stream as its texture.
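
The production version runs in GLSL on the GPU; the following NumPy sketch only illustrates the idea of relocating camera pixels according to the Cangjie-generated image. The gradient-based displacement and the strength parameter are illustrative assumptions, not the project's actual shader logic.

```python
import numpy as np

def relocate_pixels(camera_rgba, cangjie_img, strength=8.0):
    """Scatter live-camera pixels toward positions suggested by the generated strokes.

    camera_rgba: (H, W, 4) float array, the live-stream texture
    cangjie_img: (H, W) float array, the symbol image produced by the network
    """
    h, w = cangjie_img.shape
    gy, gx = np.gradient(cangjie_img)          # stroke edges act as a displacement field
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip(xs + strength * gx, 0, w - 1).astype(int)
    new_y = np.clip(ys + strength * gy, 0, h - 1).astype(int)
    out = np.zeros_like(camera_rgba)
    out[new_y, new_x] = camera_rgba[ys, xs]    # pixels accumulate along the strokes
    return out
```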

Visualization Demo

Cangjie writes in real time based on its observation of the real world through a camera. The writing movement is determined by the RGBA channels of the live stream.

2. Spatial Visualization in Virtual Reality Space Using Image Data

2.1 Composition: Algorithmic Virtual World Structure

Multiple mathematical algorithms are implemented to create the world structure, including a sparse Voronoi diagram.
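
A minimal sketch of how a sparse Voronoi scaffold could be generated with SciPy. The seed count, bounds, and use of the resulting cell walls as line geometry are assumptions; the project's actual world-building parameters are not specified here.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2019)
seeds = rng.uniform(-50.0, 50.0, size=(40, 3))   # a sparse set of 3D seed points

vor = Voronoi(seeds)

# Keep only finite ridges (cell walls); -1 marks vertices at infinity.
walls = [vor.vertices[ridge] for ridge in vor.ridge_vertices if -1 not in ridge]

# Each wall is an (n, 3) polygon that can become line or mesh geometry in the VR scene.
print(len(walls), walls[0].shape)
```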

2.2 Texture Development: Data-Driven Abstract Patterns and Forms

Different visual languages compose the virtual world: arrays of lines, points, and curves, photogrammetry point clouds, image data-driven agents, and image processing techniques.

 

User Interaction

The user interaction is realized in two ways. First, a camera set in the center of the installation observes the surroundings. The audience in the installation is captured by this camera as a live stream that is processed by Cangjie (the trained neural network), which generates the semiotic visualizations. Second, the live stream of the surroundings (including the audience) is applied as a texture in the VR space, and its RGBA channels determine the particle movements and ink flow directions. Audience members can therefore see themselves included in the textures of the VR space, and their movements alter the appearance of the virtual world and change the agents' movements.
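
A minimal sketch of the second mechanism, assuming a NumPy representation of the live-stream texture; the specific mapping from RGBA channels to velocities is an illustrative assumption rather than the installation's exact rule.

```python
import numpy as np

def advect_particles(positions, frame_rgba, dt=1.0 / 60.0, speed=120.0):
    """Move agents/particles using the RGBA values sampled under each of them.

    positions:  (N, 2) array of (x, y) pixel coordinates
    frame_rgba: (H, W, 4) float array in [0, 1], the live stream of the space
    """
    h, w, _ = frame_rgba.shape
    xs = np.clip(positions[:, 0].astype(int), 0, w - 1)
    ys = np.clip(positions[:, 1].astype(int), 0, h - 1)
    rgba = frame_rgba[ys, xs]                       # (N, 4) samples under the particles

    # Assumed mapping: R and G set the direction, A scales the step size.
    velocity = (rgba[:, :2] - 0.5) * 2.0 * rgba[:, 3:4] * speed
    new_positions = positions + velocity * dt
    return np.mod(new_positions, (w, h))            # wrap around the frame edges
```

Because the audience is part of the live stream, moving through the installation directly changes the sampled RGBA values and therefore the agents' paths.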


All Rights Reserved. © 2019  Weidi Zhang