



Weidi Zhang
Rodger Luo



ReCollection is an interactive AI art installation that assembles synthetic collective visual memories based on language input, blurring the boundaries between remembrance and imagination through AI system design and interactive experimental visualization.

VIDEO DOCUMENTATION   [ 2 minutes ] 

*Please select at least 2K quality to watch this video

Special Thanks to Dr. Sam Green

Voice-over credit to all the participants who interacted with the ReCollection system

Introducing ReCollection:

This artwork was born of witnessing my grandmother's memory regression due to dementia, as her cherished stories dissolved into fragmented words. Dr. Mary Steedly once described memories as a "densely layered, sometimes conflictual negotiation with the passage of time," and in 2022, over 50 million people faced this painful reality of memory loss due to Alzheimer's and related dementias. Yet, against this poignant backdrop, the emergence of text-to-image AI systems in 2022 offered a glimmer of new perspective: they harness the power of language to imagine and reassemble fragmented memories, possibly re-weaving what time and disease have stolen.


When we coexist with machines, will we accumulate synthetic recollections of collective symbiotic imagination? 

Is language capable of re-weaving and synthesizing memories? 

How does our collective memory inspire new visual forms and alternative narratives?



ReCollection is an assemblage of intimate human-machine artifacts that emphasizes contributions from three sides: artists, machines, and participants. This customized AI application integrates multiple AI techniques, including speech recognition, text auto-completion, and text-to-image generation, to convert language input into image sequences of new memories. As an interactive experience, participants whisper their personal memories in fragmented sentences, and our system automatically fills in the details, creating new, touching visual memories.


We developed our customized AI system by fine-tuning a pre-trained transformer-based model on documentaries of Alzheimer's patients' visual memories and their descriptions. The system imagines new memories of "love" and "loss" by interpreting real-time narratives from participants in the installation. It emerges as a vibrant and inclusive conversation starter, transcending boundaries with support for over 89 languages and embracing diverse cultural artifacts.


We do not present the raw generative output of our AI system as the final delivery in the installation. Instead, we draw inspiration from monotype, a printmaking technique with a history dating back to the 1640s, which produces unique impressions on paper through printing and reprinting. We therefore program our system to algorithmically transform the machine-generated visuals into an evolving real-time visualization with a unique aesthetic, through image processing and experimental data visualization.
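The monotype-inspired "printing and reprinting" idea can be sketched as a simple feedback blend, where each new machine-generated frame is layered over a fading ghost of the canvas so earlier impressions partially survive. This is a minimal illustrative sketch, assuming frames arrive as normalized NumPy arrays; the `ghost` constant and the blending rule are our assumptions, not the installation's actual image-processing code.

```python
import numpy as np

def monotype_blend(prev_canvas, new_frame, ghost=0.6):
    """Layer a new frame over a fading 'ghost' of the previous canvas,
    echoing monotype's printing-and-reprinting process.
    ghost controls how much of the earlier impression survives."""
    return np.clip(ghost * prev_canvas + (1.0 - ghost) * new_frame, 0.0, 1.0)

def evolve(frames, ghost=0.6):
    """Evolve a blank canvas over a sequence of generated frames."""
    canvas = np.zeros_like(frames[0])
    for frame in frames:
        canvas = monotype_blend(canvas, frame, ghost)
    return canvas
```

With this rule, no single frame is ever shown verbatim: the visible image is always an accumulation of impressions, which is one way to realize a "unique aesthetic" distinct from the generator's raw output.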

ReCollection provides a conceptual framework for non-linear narratives that constitute symbiotic imaginations and future scenarios of memory, cultural production, and reproduction. It may inspire approaches to memory regression by offering a future scenario, a thought experiment, and an intimate recollection of symbiosis between beings and apparatus. It raises people's awareness of future memory preservation and their empathy for the dementia community through a personalized aesthetic experience. It offers an artistic approach and a future prototype for cultural heritage reproduction and re-imagination, and explores the tensions in the co-relations between visual representations, language, and narratives.


User Interaction

In the art installation, a participant whispers fragmented memories into a microphone. The AI system automatically fills in the details of the spoken words, completing the text into a narrative using GPT-4, a large language model. The completed narrative is sent to Stable Diffusion to generate synthetic images representing the memories, based on the machine's interpretation. The images output by the machine are further developed and visualized algorithmically as an evolving interactive experience.
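The interaction loop above is a composition of three stages: speech recognition, narrative completion, and text-to-image generation. The sketch below shows only that structure; the stage functions are toy stand-ins (the installation uses real models such as GPT-4 and Stable Diffusion), so every name and signature here is an assumption for illustration.

```python
from typing import Callable, List

def recollection_pipeline(
    transcribe: Callable[[bytes], str],   # speech recognition: audio -> text fragment
    complete: Callable[[str], str],       # language model: fragment -> full narrative
    imagine: Callable[[str], List[str]],  # text-to-image: narrative -> image sequence
    audio: bytes,
) -> List[str]:
    """Turn a whispered memory fragment into a sequence of synthetic images."""
    fragment = transcribe(audio)    # what the participant actually said
    narrative = complete(fragment)  # the machine fills in the missing details
    return imagine(narrative)       # images of the imagined memory

# Example run with toy stand-ins for the real models:
images = recollection_pipeline(
    transcribe=lambda a: "my grandmother's kitchen",
    complete=lambda f: f + ", sunlight on a worn wooden table",
    imagine=lambda n: [f"frame_{i}:{n}" for i in range(3)],
    audio=b"",
)
```

Keeping the stages as swappable callables mirrors how such an installation can upgrade any single model (e.g. a newer text-to-image system) without changing the interaction flow.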



Supported by 

Herberger Institute Research Building Investment

Media and Immersive eXperience Center, Arizona State University [US]



Worlds For Change, Media and Immersive eXperience Center, Arizona State University, 2023

Signal Immersive Gallery, Curated by Vancouver International Film Festival and DigiBC, Vancouver, BC, 2023

Siggraph Art Gallery, Los Angeles Convention Center, CA, US, 2023

International Symposium For Electronic Arts (ISEA), Paris, FR, 2023
