Digital Divinities:
Veil Of Myth


Concept Statement

Interactive Jewelry in Virtual Space

This project explores the intersection of myth, identity, and technology. Using TouchDesigner, real-time motion tracking, and AI, traditional jewelry transforms into interactive digital masks. Each piece channels mythical beings, inviting viewers to engage, speak, and move—unveiling new forms of connection between the human and the divine.


Generative Pipeline


The system begins with a natural language prompt provided by the user.
This text is encoded using a CLIP model and passed into a Stable Diffusion pipeline, which synthesizes an image based on the semantic content of the prompt.

The process includes:

Prompt Encoding – the input text is embedded into a latent space using CLIP

Stable Diffusion Sampling – a pre-trained model generates a visual response that aligns with the prompt

Image Output – a 2D image emerges, reflecting the described concept
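
As a rough illustration of these three steps, the sketch below uses the Hugging Face diffusers library, which wraps the CLIP text encoder and the diffusion sampler into one pipeline; the checkpoint name, prompt, and settings are assumptions for demonstration, not the project's actual configuration.

# Text-to-image sketch with Hugging Face diffusers.
# The checkpoint, prompt, and settings are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "an ornate dragon mask rendered as silver jewelry, dark background"

# Internally the pipeline encodes the prompt with CLIP, samples in latent
# space, and decodes the result into a 2D image reflecting the prompt.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_mask.png")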



Technical Support

Switching model form: YINGLONG01.

Powered by TouchDesigner

TouchDesigner enables real-time visual responses to gesture, voice, and presence.

Switching model form: YAYU01.


TouchDesigner x LiDAR

LiDAR-Enhanced Interaction in Real-Time Jewelry Display

By combining TouchDesigner’s real-time rendering with LiDAR motion tracking, this installation reacts to viewers’ presence, creating an immersive extension of jewelry beyond the physical.
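
A minimal sketch of how a LiDAR-derived presence value might drive the visuals from a CHOP Execute DAT inside TouchDesigner; the operator name, channel name, and parameter mapping are hypothetical.

# CHOP Execute DAT callback (TouchDesigner Python).
# 'presence' and 'geo1' are hypothetical names; the real network may differ.
def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'presence':
        # Map viewer proximity from the LiDAR chain (0..1) onto the scale
        # of the jewelry geometry, so the piece swells as someone approaches.
        op('geo1').par.scale = 1.0 + val * 0.5
    return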

“Speech to Model” Project

01 | Text-to-Image Generation

In TouchDesigner

Real-Time Rendering

Using TouchDesigner, mythological concepts are transformed into responsive visuals through real-time interaction.
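
One simple way such a pipeline can hand a freshly generated image to the TouchDesigner network is to point a Movie File In TOP at the output file from a script; the operator name and path below are assumptions.

# TouchDesigner Python sketch: load the latest generated image into a TOP.
# 'moviefilein1' and the output path are assumptions about the network.
generated_path = project.folder + '/output/generated_mask.png'
op('moviefilein1').par.file = generated_path
op('moviefilein1').par.reloadpulse.pulse()  # ask the TOP to re-read the file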

Switching model form: YAYU02.

Switching model form: YINGLONG02.

TouchDesigner x AIGC

Integrating Generative AI into Conceptual Jewelry Practice

HOW IT WORKS


Using a voice interface, viewers describe anything in natural language.


The spoken input is transcribed by an AI model and interpreted to generate a point-cloud-based digital form.


Each prompt activates a visual transformation, translating language into dynamic design.
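
A minimal sketch of the transcription step, assuming the open-source whisper package; the actual installation may use a different speech recognizer, and the audio file name is illustrative.

# Speech-to-prompt sketch using the open-source whisper package.
# The model size and audio file name are illustrative assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("viewer_voice.wav")  # audio captured by the voice interface
prompt = result["text"].strip()

print("Prompt for the generative pipeline:", prompt)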

How It’s Made

Projection test on the wall.

02 | Background Removal

After generating the image from text, the system removes its background to isolate the object for further processing.

This is achieved through:

Automatic background segmentation using a pretrained model.

Alpha masking to preserve only the visible parts of the generated object.

Clean matte output, which is passed to the 3D reconstruction step.

By removing the environment and noise, the system ensures only the core form is passed into the point cloud pipeline or 3D geometry model.
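
A compact sketch of this stage, assuming the rembg library for segmentation and Pillow for handling the alpha matte; the file names are illustrative rather than the project's exact tooling.

# Background-removal sketch using rembg and Pillow.
# File names are illustrative assumptions.
from PIL import Image
from rembg import remove

source = Image.open("generated_mask.png")

# rembg segments the foreground and returns an RGBA image whose alpha
# channel masks out the background.
matte = remove(source)

# The clean matte is what gets handed to the 3D reconstruction step.
matte.save("generated_mask_rgba.png")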

03 | From Image to 3D Mesh

After generating and refining the AI image, the system reconstructs its 3D geometry using TripoSR, a single-image reconstruction model that turns 2D images into textured 3D meshes.

The workflow includes:

Background mask inversion and cleanup
Ensures only the desired object is used in the reconstruction.

3D Geometry Reconstruction (TripoSR)
The image is processed into a mesh with depth, shape, and surface coherence.

Axis Alignment & Mesh Adjustment
Orientation corrections ensure the mesh fits TouchDesigner’s coordinate space.

Live Streaming to TouchDesigner
Using Comfy3DPacketToTD, the 3D data is sent into TouchDesigner for real-time visualization as a point cloud, sculpture, or dynamic visual response.
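
For prototyping the hand-off, the reconstructed mesh can also be sampled into a point cloud and written to a file that TouchDesigner's Point File In TOP can load; the sketch below assumes the trimesh library, and the file names and axis correction are illustrative.

# Point-cloud preparation sketch using trimesh.
# 'triposr_output.obj' and the axis correction are illustrative assumptions.
import numpy as np
import trimesh

mesh = trimesh.load("triposr_output.obj", force="mesh")

# Rotate so the mesh's up-axis matches TouchDesigner's Y-up coordinate space
# (the exact correction depends on how the mesh was exported).
rotation = trimesh.transformations.rotation_matrix(np.radians(-90), [1, 0, 0])
mesh.apply_transform(rotation)

# Sample the surface into a point cloud that a Point File In TOP can load.
points, _ = trimesh.sample.sample_surface(mesh, 50000)
trimesh.PointCloud(points).export("triposr_points.ply")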