
You Only Have Seven Seconds
2025
Weidi Zhang
Rodger Luo
An AI Art Film / Cinematic Visualization
(Duration: 5 minutes)
Artists/Filmmakers: Dr. Weidi Zhang and Dr. Rodger Luo
Sound Production: Paul Taro Schmidt
Voice Denoising: Jonas Füllemann
Screenwriter: Dr. Lijiaozi Cheng
Special Thanks to: Dr. Sam Green
Introduction:
You Only Have Seven Seconds is a poetic, human–AI collaborative documentary that unfolds as both a visual archive of an internationally exhibited installation and a cinematic experiment in memory preservation. Against the backdrop of rising Alzheimer’s cases and the emergence of machine-generated false memories, the film reimagines remembrance through the lens of artificial intelligence.
Inspired by her grandmother’s cognitive decline, the artist developed a custom AI system that transforms fragmented spoken recollections into synthetic visual sequences. Presented as an interactive AI art installation, the project ReCollection has traveled worldwide, inviting thousands of visitors to whisper their fading memories—each within seven seconds—into the artwork, which generates visual memories in real time. These memories form the foundation of an evolving archive, constructed through speech recognition, text auto-completion, and text-to-image generation.
Drawing from this archive, the film incorporates emerging speech-to-video technology to weave a collective, culturally diverse montage of love and loss. The participatory datasets are curated to build dialogical relationships, forming patterns, counterpoints, and resonances that evoke a poetics of relation. Initially conceived as a personal reflection, the work has evolved into a living, breathing, shared world-building experience that reassembles the ephemeral past into a dynamic portrait of collective human selfhood.
《七秒钟》 (You Only Have Seven Seconds) is a poetic, AI-generated documentary short. Built on a community-participatory dataset, the work visualizes "fading memories" and explores the narrative gap between recollection and imagination. Against the backdrop of rising Alzheimer's cases and the emergence of machine-generated "false memory" technologies, the film rethinks the nature of memory from the perspective of artificial intelligence. Inspired by the artist's personal experience of witnessing her grandmother's cognitive decline, the project began as a custom AI system that transforms fragmented spoken memories into synthetic visuals, presented to the public as the interactive AI art installation ReCollection. Between 2023 and 2025 the installation traveled across many countries and regions, drawing thousands of participants from around the world. Each visitor whispers a blurred memory within seven seconds, and the system generates a corresponding visual in real time through speech recognition, text generation, and image synthesis. As participation has grown, these "visual memories" have gradually converged into a dynamically evolving memory archive. Drawing on the voice and image data collected by the installation, the film constructs a group portrait of love and loss across diverse cultural contexts. In the making of the film, the artists reorganize this publicly generated material, break with traditional linear narrative, and use recent language-to-video technology so that artificial intelligence and algorithms evoke a "poetics of relation": individual memories converse and resonate with one another, producing a visual language charged with a sense of shared human presence. What began as a personal reflection has continued to evolve, reshaping collective memory into a vivid, fluid portrait of the human self.
AWARD
Best AI Art Video Award, BAIFF Burano Artificial Intelligence Film Festival x Prompt AI Magazine, 2025
EXHIBITIONS / SCREENINGS
IEEE VIS Art Program, Austria, 2025 [upcoming]
Envisioning Intelligence, US, 2025 [upcoming]
Shenzhen Design Week, Hua Museum, China, 2025
BAIFF Burano Artificial Intelligence Film Festival, Venice, Italy, 2025
Electronic Current, Gallery 130, Mississippi, US, 2025 [upcoming]
ChinaVis Art Gallery, China, 2025
RESEARCH PUBLICATION
(Accepted) "You Only Have Seven Seconds: From Intimate Whispers to Shared Worlds in Participatory Data-Driven Cinematic Art," IEEE VISAP Art Paper, 2025
*Please select at least 2K quality to watch this video.

THE MAKING OF You Only Have Seven Seconds
Bridging the Gap between Community-Participatory Data Visualization and Cinematic AI Art
You Only Have Seven Seconds is a time-based cinematic work shaped through a multi-stage pipeline that combines real-time human-machine interaction, multimodal AI, data curation, visual processing, and experimental post-production. The project originated with a public, interactive installation where participants were invited to share whispered, seven-second memory fragments. These recordings were transcribed using OpenAI’s Whisper model, then semantically expanded through a GPT-based language model. Each resulting text fragment was transformed into a still image using a fine-tuned version of Stable Diffusion.
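To make the pipeline concrete, a minimal sketch of this whisper-to-image loop is shown below. It assumes the openai-whisper, openai, and diffusers Python packages; the model names, prompt wording, and checkpoint path are placeholders, not the project's actual fine-tuned configuration.

# Hypothetical sketch of the transcription -> expansion -> image step.
import torch
import whisper
from openai import OpenAI
from diffusers import StableDiffusionPipeline

def memory_to_image(audio_path, out_path="memory.png"):
    # 1. Transcribe the seven-second whisper (openai-whisper).
    fragment = whisper.load_model("base").transcribe(audio_path)["text"].strip()

    # 2. Semantically expand the fragment into a fuller scene description.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Expand this fading memory into one vivid visual scene."},
            {"role": "user", "content": fragment},
        ],
    )
    scene = completion.choices[0].message.content

    # 3. Render the expanded text as a still image.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # stand-in for a fine-tuned checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe(scene).images[0].save(out_path)
    return fragment, scene

In the installation this loop runs in real time, so in practice the models would stay resident in memory rather than being reloaded for each visitor.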
From these interactions, we curated 64 complete dataset triads—consisting of the original voice recordings, the GPT-generated texts, and the corresponding AI-generated images. These served as the core material for the film’s visual structure. To bring these static frames to life, we used a text-to-video generation model, producing 10-second animated clips that maintained thematic and visual coherence with the original prompts. These sequences were further organized into five semantic categories—memories of family, food, place, achievement, and shared moments of commonality—while intentionally preserving overlaps and fluid transitions.
The film opens with a deliberate narrative illusion: disparate voices recalling moments with a father, edited to momentarily feel like a singular story before unraveling into polyvocal fragmentation. Throughout, we retained intentional irregularities such as mic tests, stumbles, and re-recording attempts as meta-fragments—preserving the rawness of the participatory act and foregrounding the project’s live, mediated origins.
The visual sequences were not used in their raw form. Instead, we processed them through a custom-built system to develop the film’s aesthetic language. Drawing inspiration from monotype printmaking and slit-scan photography, we sought to represent the ephemeral, unstable quality of memory: each image was converted to monochrome, then distorted algorithmically using a GLSL compute shader, with additional layers of blob tracking and recursive warping to suggest memory degradation.
Sound and image were composed in parallel. Using FFT, frequency, and amplitude analysis, we created audio-reactive visual behavior that allowed the film to respond dynamically to the spectral qualities of each voice recording. The result is a richly layered, affect-driven film that merges generative AI with procedural aesthetics—creating a temporal visualization of memory that is fragmented, speculative, and deeply human.
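As one way to picture the audio-reactive layer, the sketch below (an assumed approach, using only NumPy) extracts per-frame amplitude and spectral-centroid features from a voice signal and maps them to hypothetical shader uniforms such as a warp amount; the actual GLSL compute shader and blob-tracking passes are not reproduced here.

import numpy as np

SR = 48000           # audio sample rate (Hz)
FPS = 30             # video frame rate at which uniforms are sampled
HOP = SR // FPS      # audio samples per video frame

def frame_features(samples, sr=SR, hop=HOP):
    """Per-video-frame RMS amplitude and spectral centroid of a mono signal."""
    feats = []
    for start in range(0, len(samples) - hop, hop):
        frame = samples[start:start + hop] * np.hanning(hop)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(hop, 1.0 / sr)
        rms = float(np.sqrt(np.mean(frame ** 2)))
        centroid = float((spectrum * freqs).sum() / (spectrum.sum() + 1e-9))
        feats.append((rms, centroid))
    return feats

def to_uniforms(feats):
    """Normalize features into hypothetical shader uniforms (u_warp, u_grain)."""
    rms = np.array([f[0] for f in feats])
    cen = np.array([f[1] for f in feats])
    warp = np.interp(rms, (rms.min(), rms.max() + 1e-9), (0.0, 1.0))
    grain = np.interp(cen, (cen.min(), cen.max() + 1e-9), (0.0, 1.0))
    return [{"u_warp": float(w), "u_grain": float(g)} for w, g in zip(warp, grain)]

if __name__ == "__main__":
    # Synthetic stand-in for a seven-second whisper: a decaying sweep.
    t = np.linspace(0, 7, 7 * SR, endpoint=False)
    voice = np.exp(-t / 3) * np.sin(2 * np.pi * (120 + 60 * t) * t)
    print(to_uniforms(frame_features(voice))[:2])

In the film, per-frame values like these would drive shader parameters, so louder or brighter passages of a recording visibly intensify the warping of the corresponding image sequence.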
Film Still, You Only Have Seven Seconds, 2025







