
Project One
7 Aug. 25
🍃 Moss Merry Way — AI Concept Art Generator
Status: Prototype Complete · Stack: Google Colab, Stable Diffusion, 🤗 Hugging Face, xFormers, NVIDIA T4
Overview:
This project is the first creative tech sprint from Glowlock Labs, focused on building a text-to-image concept art generator for visualizing environments in the world of Glowlock. For this sprint, we brought to life the realm of Moss Merry Way—a fog-drenched, magical forest filled with sleepy light and emotional stillness.
Goal:
To generate high-quality concept art with the mood of a handcrafted fairytale world using minimal prompts and lightweight customizations.
Use Case:
These images are intended as a base for looping animated backgrounds, storybook illustration, and AI-assisted visual development for fantasy IPs.
🔧 Tech Stack
- Platform: Google Colab
- Model: Stable Diffusion XL (via Hugging Face)
- Libraries: diffusers, accelerate, xformers, transformers
- Hardware: NVIDIA T4 GPU (Colab Pro)
Environment Fixes:
- Forced PyTorch upgrade to 2.1+
- Downgraded NumPy to <2.0 to resolve compatibility issues
- Authorized the Hugging Face token for model access
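For reference, a minimal sketch of the Colab generation step, assuming the stock stabilityai/stable-diffusion-xl-base-1.0 checkpoint and a hypothetical Moss Merry Way prompt (the sprint’s exact prompt and LoRA weights are not shown here):

```python
# Colab env fixes noted above (assumed pins): pip install "torch>=2.1" "numpy<2"
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.enable_xformers_memory_efficient_attention()  # fits SDXL on a 16 GB T4
pipe = pipe.to("cuda")

image = pipe(
    prompt="fog-drenched magical forest, sleepy golden light, storybook concept art",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("moss_merry_way.png")
```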
🧪 Sprint Outcome
✅ Successful installation and setup of Kohya LoRA Trainer (XL version)
✅ Resolved NumPy compatibility errors with patched Colab environment
✅ Generated final render capturing the atmosphere of Moss Merry Way
✅ Deployed to GitHub and documented workflow for reuse and iteration
✅ Published visual + copy across Instagram and Facebook
🌟 Glowlock Labs is a speculative research and creative technology studio exploring the intersection of storytelling, interface design, and artificial intelligence.
We prototype emotionally intelligent systems—blending animation, narration, and computation to imagine new modes of human-computer interaction. From narrative AI experiments to future-facing design tools, we build where logic meets illusion.
Our work spans:
✨ Narrative interfaces and generative media
🧠 Prompt engineering and LLM evaluation
🎬 Speculative animation and systems satire
🧪 Human-in-the-loop UX and creative prototyping
Founded in 2025, Glowlock Labs is part worldbuilding studio, part R&D engine. Whether designing with LLMs, crafting magical user experiences, or prototyping new interfaces, we believe in the power of imagination as infrastructure.
Project Two
20 Aug. 25
🎥 Dylan Dewlock — AI Video Sprint
Status: Prototype Complete · Stack: OpenArt, Photoshop, Generative Fill, Stable Diffusion, After Effects
Overview
This sprint from Glowlock Labs focused on producing a short animated sequence of Dylan Dewlock swinging through a giant charm bracelet into his magical world. The video combined AI image generation, iterative editing, and frame-by-frame enhancement to achieve a playful, storybook atmosphere.
Goal
To prototype a narrative animation clip that demonstrates Dylan Dewlock’s entrance into the world—balancing AI efficiency with handcrafted polish for a whimsical, cartoon-fantasy aesthetic.
Use Case
The clip serves as:
- A pilot animation test for the Dylan Dewlock series
- A workflow experiment in mixing AI video + human edits
- A visual reference for charm bracelet world transitions
🔧 Tech Stack
Platform: OpenArt Video + Adobe Suite
Models/Tools:
- OpenArt video generator (for base animation)
- Stable Diffusion (for background frames + enhancements)
- Photoshop (Generative Fill + frame corrections)
- After Effects (sequencing + lightning strike effect)
Process Fixes:
- Expanded the charm bracelet image to show multiple charms for realistic swing motion
- Converted the video into frames for detailed Photoshop editing (sketched below)
- Used Generative Fill to patch inconsistencies in Dylan’s motion
- Reassembled frames into a smooth looping animation
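A rough sketch of that video-to-frames round trip, assuming ffmpeg is on the PATH and using placeholder file names and frame rate (the actual sequencing was finished in After Effects):

```python
# Explode the AI-generated clip into stills for Photoshop cleanup,
# then rebuild the edited frames into a loop.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "dylan_swing.mp4", "frames/frame_%04d.png"],
    check=True,
)

subprocess.run(
    [
        "ffmpeg", "-framerate", "24",
        "-i", "frames_edited/frame_%04d.png",
        "-pix_fmt", "yuv420p",  # broad player compatibility
        "dylan_swing_loop.mp4",
    ],
    check=True,
)
```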
🧪 Sprint Outcome
✅ Generated base swing + landing animation in OpenArt
✅ Expanded charm bracelet visuals for multi-charm sequence
✅ Added lightning strike effect while keeping Dylan smiling/heroic
✅ Patched inconsistencies in Dylan’s movement with Photoshop frame edits
✅ Final render captured playful + magical energy of Dylan Dewlock’s entrance
✅ Exported for portfolio/demo use and documented workflow for future episodes
Project Three
18 Sept. 25
🎥 Winter Vignette ✨
Status: Prototype Complete · Stack: Veo 3, Stable Diffusion, Photoshop, After Effects
Overview
This sprint from Glowlock Labs focused on prototyping a whimsical winter vignette for the Glowlock world. The clip features a joyful young elf-like adventurer dancing inside a snow globe, set against the backdrop of a glowing Christmas village. Using a reference illustration and a custom Veo 3 prompt, the goal was to capture the storybook charm and cinematic wonder of Glowlock’s realms in motion.
Goal
To prototype a storybook-style animated short that blends character performance, environmental detail, and seasonal magic—demonstrating how Glowlock Labs uses AI tools to transform static concept art into living cinema.
Use Case
The clip serves as:
- A proof-of-concept for holiday worldbuilding sequences in the Glowlock universe
- A demonstration of character-driven animation with AI pipelines
- A portfolio-ready artifact for showcasing Glowlock Labs’ R&D process in narrative prototyping
🔧 Tech Stack
Platform: Veo 3 + Adobe Suite
Models/Tools:
- Veo 3 (for base video generation from prompt + reference image)
- Stable Diffusion (for enhancement and supplementary stills)
- Photoshop (light frame cleanup and detail polish)
- After Effects (timing, compositing, and snow particle overlays)
🧪 Sprint Outcome
✅ Generated a 30s snow globe sequence with clear character expression
✅ Achieved a storybook cinematic aesthetic consistent with Glowlock’s design language
✅ Validated workflow for reference image → AI video → post-production polish
✅ Delivered a prototype that positions Glowlock Labs as a pipeline-focused R&D studio
✅ Exported final render for portfolio and lab archive
Project Four
20 Oct 2025
🎛️ Glowlock Sensory Engine (GSE) 🌲✨
Status: Prototype Complete · Stack: Python, SentenceTransformers, Streamlit, Matplotlib
Overview
The Glowlock Sensory Engine is a machine learning–driven “vibe dictionary” for the Glowlock world — a system that translates sensory storytelling into structured data. Each land — Moss Merry Way, Jingle Hoof, Dusk Hallows, and beyond — is encoded through five sensory dimensions: taste, feel, color, sound, and smell.
Using text embeddings and similarity search, the engine allows users to describe a vibe (“buttery candlelight with cinnamon wind”) and retrieve matching realms, generate prompts, or visualize emotional terrain.
This sprint positioned Glowlock Labs as a creative-tech R&D studio capable of turning narrative worlds into computational form.
Goal
To prototype a machine learning pipeline that captures atmosphere as data — bridging creative writing with algorithmic structure and enabling sensory-based world exploration.
Use Cases
- Proof-of-concept for narrative-driven ML tools
- Worldbuilding data pipelines (sensory text → embeddings → recommendations)
- Interactive art-tech artifact for portfolio and demo use
🔧 Tech Stack
Platform: Python + Streamlit
Core Components:
- SentenceTransformers — text embedding + vibe similarity
- NumPy / Scikit-learn — clustering + cosine similarity
- Matplotlib / UMAP — 2D “vibe map” visualization
- Streamlit — interactive interface for search + prompt generation
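To illustrate the core mechanic, here is a minimal vibe-search sketch using all-MiniLM-L6-v2 and a few hypothetical one-line realm descriptions (the real engine encodes the full five-dimension sensory lore per land):

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical one-line sensory summaries, stand-ins for the real lore.
realms = {
    "Moss Merry Way": "damp moss, drifting fog, warm amber light, soft rain",
    "Jingle Hoof": "vanilla snow, cold bright air, twinkling village chimes",
    "Dusk Hallows": "phosphorescent ink, whispering fireplaces, ghostly warmth",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(realms)
vecs = model.encode(list(realms.values()), convert_to_tensor=True)

def match_vibe(query: str, top_k: int = 2):
    """Rank realms by cosine similarity to a free-text vibe."""
    q = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q, vecs)[0]
    top = scores.argsort(descending=True)[:top_k]
    return [(names[int(i)], float(scores[i])) for i in top]

print(match_vibe("buttery candlelight with cinnamon wind"))
```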
🧪 Sprint Outcome
✅ Built an operational prototype translating sensory lore into searchable embeddings
✅ Designed a functional Streamlit interface for free-text vibe search
✅ Validated the feasibility of “sensory search” as a narrative ML concept
✅ Established Glowlock Labs’ technical-creative pipeline for future storyworld systems
📁 Status
Prototype finalized and deployed locally.
Next phase will integrate prompt expansion and real-time visualization layers for Glowlock’s evolving realms.
Project Five
21 Oct 2025
🎶 LyricSmith: Belief-to-Song Generator ✨
Status: Prototype Pending · Stack: Python, OpenAI GPT, NLTK, Streamlit
Overview
This sprint from Glowlock Labs focuses on prototyping LyricSmith, a songwriting engine based on a structured creative framework. The system takes “I believe it to be true…” statements and walks them through Rachael’s custom lyric-writing recipe—transforming raw beliefs into song titles, verses, rhyme schemes, and choruses in chosen musical styles. The goal is to turn a repeatable creative process into an interactive ML-driven songwriting tool.
Goal
To demonstrate how a personal songwriting method can be converted into a computational workflow, complete with text generation, rhyme pairing, sensory grounding, and structure enforcement—bridging craft and machine learning.
Use Case
LyricSmith serves as:
- A proof-of-concept for computational creativity systems
- A songwriting assistant for artists and storytellers
- A portfolio artifact showcasing Glowlock Labs’ ability to codify creative processes into reproducible ML projects
🔧 Tech Stack
Platform: Python + Streamlit
Models/Tools:
- OpenAI GPT (or a LLaMA-based model) → expands sensory “destination writing” passages
- NLTK / RhymeBrain API → finds rhyme pairs and syllable matches (see the sketch below)
- Regex / custom parsing → splits external vs. internal phrases
- Markov chains or seq2seq → experiments with chorus/verse toggling patterns
- Streamlit → interactive web app for the lyric generation workflow
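As a sketch of the rhyme-pairing step, one way to derive rhyme keys from NLTK’s CMU Pronouncing Dictionary; this is a simplification, and the RhymeBrain and syllable-matching pieces are not shown:

```python
import nltk

nltk.download("cmudict", quiet=True)
from nltk.corpus import cmudict

PRON = cmudict.dict()

def rhyme_key(word: str):
    """Phonemes from the last stressed vowel onward (a crude rhyme key)."""
    phones = PRON.get(word.lower())
    if not phones:
        return None
    p = phones[0]  # first listed pronunciation only
    for i in range(len(p) - 1, -1, -1):
        if p[i][-1] in "12":  # primary/secondary stress marker
            return tuple(p[i:])
    return tuple(p)

def rhymes(a: str, b: str) -> bool:
    ka, kb = rhyme_key(a), rhyme_key(b)
    return ka is not None and ka == kb

print(rhymes("true", "blue"))   # True
print(rhymes("true", "truth"))  # False
```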
🧪 Sprint Outcome (Planned)
✅ Parse a list of 10 “I believe it to be true…” statements into song title candidates
✅ Support user-chosen musical styles (e.g. pop, folk, Broadway, hip hop) that rewrite the title in style
✅ Implement sensory expansion (who/when/where, 6 senses) as a guided writing step
✅ Highlight external vs. internal phrases with column split
✅ Generate rhyme pairs + suggest rhyme schemes
✅ Prototype a chorus/verse generator from structured inputs
✅ Deliver an interactive Streamlit demo with step-by-step songwriting output
Project Six
21 Oct 2025
🤖 Character API — Context-Aware Joke Generator
Status: Prototype Complete · Stack: Python 3.10, FastAPI (optional), Dataclasses, Typing, Pytest
Overview
This sprint formalizes our first pre-Glowlock experiment as an official Glowlock Labs project — a character-driven comedy API that generates jokes rooted in character traits: Blind Spot, Flaw, Attitude, and Agenda.
The system dynamically places characters in ironic situations to maximize humor, while maintaining consistency in tone, catchphrases, and dialogue style.
Demo character: Queen Margaret 👑
Goal
To create a lightweight, deterministic joke engine that keeps character voice intact while adapting humor to context.
Use Case
- Writers’-room beat generator for sitcoms & animation
- NPC banter and improv bots
- Tone / voice testing for branded characters
- LLM prompt prototyping for consistent persona humor
🔧 Tech Stack
- Language: Python 3.10
- Core Libraries: dataclasses, typing, random
- API (optional): FastAPI + Uvicorn
- Testing: Pytest
- Packaging: pyproject.toml
- Future Integration: LLM bridge for SDXL / LyricSmith character punch-ups
Environment Fixes:
- Pinned Python to 3.10 for clean dataclasses behavior
- Added seed control for reproducible joke sets
- Optional Pydantic validation when wrapped with FastAPI
🧪 Sprint Outcome
✅ Built trait-aware irony selector (context logic for humor)
✅ Generated 50+ unique jokes with character consistency
✅ Preserved tone and vocabulary across all joke outputs
✅ Ready for API deployment with modular schema
✅ Documented for reuse within future Glowlock comedy engines
📄 Example Output
Input Traits: Oblivious privilege · Neurotic / indecisive · Polite / manipulative · Agenda: maintain power
Catchphrases: “Oh dear!”, “Surely, you jest!”
Ironic Situation: hosting a charity event for the poor
Generated Joke:
Queen Margaret smiles nervously and says, “Oh dear!—I do hope they don’t notice the diamond-encrusted donation box!”
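A minimal sketch of how the trait schema and seed control could fit together; the field names mirror the traits above, but the template logic is illustrative rather than the actual irony selector:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    blind_spot: str
    flaw: str
    attitude: str
    agenda: str
    catchphrases: list[str] = field(default_factory=list)  # Python 3.10 syntax

def generate_joke(c: Character, situation: str, seed: int | None = None) -> str:
    """Drop the character into an ironic situation; keep the voice intact."""
    rng = random.Random(seed)  # seed control -> reproducible joke sets
    phrase = rng.choice(c.catchphrases)
    return (
        f"{c.name}, blind to {c.blind_spot} and quietly scheming to "
        f"{c.agenda}, surveys {situation} and declares, “{phrase}”"
    )

queen = Character(
    name="Queen Margaret",
    blind_spot="her own privilege",
    flaw="neurotic indecision",
    attitude="polite but manipulative",
    agenda="maintain power",
    catchphrases=["Oh dear!", "Surely, you jest!"],
)
print(generate_joke(queen, "a charity event for the poor", seed=7))
```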
Project Seven
Coming Soon
🎞️ Glowlock Clip Forge — AI Video Loop Generator
Status: In Development · Stack: Runpod Serverless, ComfyUI, AnimateDiff, Streamlit, NVIDIA 4090 PRO
Overview
This creative-tech sprint from Glowlock Labs explores the next evolution of Glowlock’s visual engine — transforming a single text “beat” into a handcrafted video loop in seconds. Each clip blends motion, color, and texture to evoke the dreamy, paper-cut worlds of Glowberry Skies, Printemps Pond, and Dusk Hallows.
Goal
To build a serverless, cost-efficient AI pipeline that generates short, stylized animations using real-time GPU inference — allowing creators to produce infinite Glowlock-style motion snippets on demand.
Use Case
- Outputs are designed as looping backgrounds, animated charms, and cut-out motion studies for short films, social reels, and interactive story worlds across the Glowlock universe.
🔧 Tech Stack
- Platform: Runpod Serverless (ComfyUI Worker)
- Workflow: Text Prompt → AnimateDiff / Stable Video Diffusion → MP4 Output
- Frontend: Streamlit UI with live WebSocket progress streaming
- Hardware: NVIDIA RTX 4090 PRO GPU ($0.00077/s)
- Model Variants: SDXL / FLUX 1.0 + custom Glowlock LoRAs
- Storage: Ephemeral cache + S3 upload
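For context, submitting a job to a Runpod serverless endpoint looks roughly like the sketch below; the endpoint ID, API key, and the worker’s input schema are placeholders, and the live Streamlit UI streams progress over WebSockets rather than polling like this:

```python
import time
import requests

API_KEY = "RUNPOD_API_KEY"  # placeholder
ENDPOINT = "https://api.runpod.ai/v2/YOUR_ENDPOINT_ID"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Queue a generation job on the ComfyUI worker (input schema assumed).
payload = {"input": {"prompt": "paper-cut forest loop, storybook lighting"}}
job = requests.post(f"{ENDPOINT}/run", json=payload, headers=HEADERS).json()

# Poll until the worker finishes.
while True:
    status = requests.get(
        f"{ENDPOINT}/status/{job['id']}", headers=HEADERS
    ).json()
    if status["status"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(2)
print(status["status"])
```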
🧪 Sprint Outcome
✅ Deployed first ComfyUI serverless worker on Runpod
✅ Integrated WebSocket streaming for live generation previews
✅ Created realm presets for Glowberry Skies, Printemps Pond & Dusk Hallows
✅ Generated initial cut-out video loops with paper texture and storybook lighting
✅ Documented architecture and deployment for reuse in future Glowlock Labs experiments
Project Eight
Coming Soon
🌸 Inner Orbit — Cycle-Based Lifestyle Recommender
Status: Concept Sprint · Stack: Streamlit, OpenAI API, Spotify API, Python, Pandas, Canva
Overview:
Inner Orbit is a wellness-meets-storytelling experiment from Glowlock Labs that turns hormonal rhythms into creative guidance. Instead of tracking symptoms, it curates films, shows, playlists, activities, and gentle dos & don’ts based on where you are in your cycle. Each phase unlocks its own “Glowlock Realm,” blending emotional intelligence, media curation, and dreamy UX into a reflective interface.
Goal:
To design a prototype that transforms menstrual cycle data into a daily multisensory experience, aligning mood, creativity, and lifestyle choices with the body’s natural rhythm — soundtracked by evolving playlists that mirror your internal tempo.
Use Case:
A creative-wellness companion for artists, thinkers, and feelers who thrive on rhythm and atmosphere.
- 🎨 Printemps Pond (Follicular): bright indie pop, daydream pop, fresh ideas
- 💞 Glowberry Skies (Ovulation): disco, romantic house, lush vocals
- 🔮 Dusk Hallows (Luteal): moody jazz, lo-fi, experimental ambience
- ❄️ Jingle Hoof (Menstrual): soft piano, cinematic scores, acoustic warmth
🔧 Tech Stack
Platform: Streamlit (web prototype)
APIs: OpenAI GPT-4 (recommendation logic), TMDB (film/TV), Spotify (music)
Libraries: Pandas, datetime, requests
Design Tools: Canva, Figma, Procreate
Environment Fixes:
- Integrated Spotify OAuth for playlist generation
- Added “energy curve” variable to modulate tempo and mood
- Designed modular daily card layout with cover art & affirmations
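A toy sketch of the phase-to-realm mapping, using simplified day windows on an assumed 28-day cycle (illustrative only, not the app’s actual logic or medical guidance):

```python
from datetime import date

# Simplified phase windows on an assumed 28-day cycle.
PHASES = [
    (range(1, 6), "Jingle Hoof", "Menstrual"),
    (range(6, 14), "Printemps Pond", "Follicular"),
    (range(14, 17), "Glowberry Skies", "Ovulation"),
    (range(17, 29), "Dusk Hallows", "Luteal"),
]

def realm_for(cycle_start: date, today: date, cycle_length: int = 28):
    """Map today's cycle day to its realm and phase."""
    day = (today - cycle_start).days % cycle_length + 1
    for window, realm, phase in PHASES:
        if day in window:
            return day, realm, phase

print(realm_for(date(2025, 10, 1), date(2025, 10, 18)))
# (18, 'Dusk Hallows', 'Luteal')
```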
🧪 Sprint Outcome
✅ Defined 4 emotional soundscapes tied to Glowlock realms
✅ Built working recommendation engine (film, show, song, activity, reflection)
✅ Created mockups for phase calendar & mood soundtrack interface
✅ Deployed early Streamlit framework for testing with custom playlists
✅ Published concept deck + repo documentation for future expansion
Project Nine
Coming Soon
🪞 Glowlock 3D Portal — Interactive Realm Map
Status: Concept Sprint · Stack: React, Three.js, @react-three/fiber, Framer Motion, Firebase (planned)
Overview:
Glowlock 3D Portal is an interactive map that visualizes the interconnected realms of the Glowlock universe in real time. It transforms static concept art into a dynamic 3D exploration experience — where users can click, hover, or eventually speak to explore different lands such as Moss Merry Way, Dusk Hallows, Jingle Hoof, and Printemps Pond.
Built as a lightweight three.js scene inside a React environment, the Portal blends worldbuilding, design, and technical storytelling. Each realm crystal reveals a narrative panel with visuals, soundscapes, and lore, serving as both an immersive index and a creative-tech experiment in interactive narrative design.
Goal
To prototype an interactive, cinematic interface that allows users to travel through the Glowlock world via 3D interaction — bridging concept art, AI generation, and UX storytelling into a single visual framework.
The long-term goal is to integrate Gemini or Veo to generate live art and motion per realm, transforming the map into a procedural story engine.
🔧 Tech Stack
Platform: React + Vite
Libraries: three.js, @react-three/fiber, @react-three/drei, Framer Motion, TailwindCSS
Planned Integrations: Gemini API (AI image & caption retrieval), Firebase (asset storage & user progress), Web Speech API (voice navigation)
Design Tools: Blender, Figma, Canva, Procreate
Use Case
A creative visualization and exploration tool for fans, collaborators, and researchers within the Glowlock universe — designed to demonstrate how AI-generated art and lore can coexist inside a playable 3D world.
🌲 Moss Merry Way: glowing moss, gentle forest hums, ambient rain
❄️ Jingle Hoof: snow globe town, vanilla snow scent, xylophone stars
🔮 Dusk Hallows: phosphorescent ink, whispering fireplaces, ghostly warmth
🎨 Printemps Pond: Monet-inspired shimmer, fairies with lightning-bug lanterns
Environment Features
- Dynamic floating crystal markers for each realm (animated with sine-wave oscillation)
- Clickable lore panels with title, image, description, and ambient narration
- Real-time lighting + fog simulation for cinematic depth
- Modular realm dataset (realms.json) to scale new locations easily
- Responsive UI for both desktop and tablet display
- Future-ready for Gemini-based dynamic image generation
🧪 Sprint Outcome
✅ Deployed working 3D map in React + @react-three/fiber
✅ Built modular realm system with hover + click interactions
✅ Created animated crystal components with emissive lighting
✅ Integrated lore panels with narration triggers
✅ Established visual continuity for Glowlock world in real-time space
✅ Defined roadmap for Firebase + Gemini integrations
Project Ten
Coming Soon
🎭 Hybrid Poetic Intelligence — AI Language Sprint
Status: In Development · Stack: Hugging Face, PyTorch, spaCy, SentenceTransformers, Streamlit
Overview:
This sprint from Glowlock Labs explored how artificial intelligence can interpret and generate poetic devices such as irony, allegory, and metaphor. The project combined neural language models with symbolic logic rules to create a hybrid system capable of both detecting and producing figurative meaning — a poetic “Turing Test” for subtext and symbolism.
Goal
To prototype a working model that bridges machine reasoning and creative expression, demonstrating how AI can understand and reproduce rhetorical depth in written language. The experiment aimed to merge computational linguistics with creative storytelling, forming the foundation for a future “Poetic Intelligence Engine.”
Use Case
The prototype serves as:
- A concept demo for blending symbolic AI with generative language models
- A research artifact for interpretability studies in creative NLP
- A creative playground for generating metaphor-rich, emotionally intelligent text
🔧 Tech Stack
- RoBERTa-base fine-tuned for figurative language detection
- sentence-transformers/all-MiniLM-L6-v2 for semantic similarity + clustering
- spaCy for syntactic pattern recognition (metaphor triggers, irony markers)
- Custom symbolic rule layer (Python + regex) for identifying contrasts and personification (see the sketch below)
- Streamlit UI for interactive text generation + analysis
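As a taste of the symbolic rule layer, a toy heuristic that flags “X is Y” copula patterns as metaphor candidates; the fine-tuned RoBERTa detector and irony rules are not shown:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def metaphor_candidates(text: str):
    """Yield (subject, attribute) pairs from 'X is Y' copula constructions."""
    for token in nlp(text):
        if token.lemma_ == "be":
            subj = [c for c in token.children if c.dep_ == "nsubj"]
            attr = [c for c in token.children if c.dep_ in ("attr", "acomp")]
            if subj and attr:
                yield subj[0].text, attr[0].text

print(list(metaphor_candidates("Love is a storm. Death is sleep.")))
# [('Love', 'storm'), ('Death', 'sleep')]
```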
🧪 Sprint Outcome
TBD: Build a hybrid pipeline combining neural + symbolic reasoning
TBD: Measure irony-detection accuracy on a small poetry corpus (targeting ~83%)
TBD: Create an interactive Streamlit demo to analyze and generate poetic devices
TBD: Visualize thematic clusters (e.g., “love = storm,” “death = sleep”) using embeddings
TBD: Document the workflow for potential research or publication use
TBD: Demonstrate both interpretive depth and creative synthesis in the final prototype — aligning with Google’s mission to make AI more “emotionally intelligent” and human-aware
Project Eleven
18 November 2025
✨ Glowlock Labs — Product Ad Video Generator (SDXL Base → Kling Pipeline)
Status: Prototype Complete · Stack: ComfyUI, SDXL Base 1.0, Standard VAE, OpenArt Kling 2.5
Overview:
This sprint expanded Glowlock Labs’ AI production toolkit with a fully integrated luxury product video generator. The workflow pairs Stable Diffusion XL Base inside ComfyUI for precision still-frame rendering with Kling 2.5 for high-fidelity motion synthesis. The result is a streamlined pipeline capable of producing cinematic, brand-consistent product visuals that transition seamlessly from hero image to polished animated shot.
The system was validated on a Cartier-inspired perfume concept, generating a clean, photorealistic still and converting it into a refined, commercial-grade motion sequence.
Goal
To create a simple, reliable R&D pipeline that:
1️⃣ Generates a high-end still render from SDXL Base
2️⃣ Decodes cleanly with VAE
3️⃣ Feeds into Kling 2.5 End Frame Mode
4️⃣ Produces a smooth, glossy, luxury-style animated ad
All without the SDXL Refiner — keeping it lightweight, fast, and stable.
Use Case
This pipeline supports Glowlock Labs’ goals for:
- AI-assisted commercial advertising experiments
- Reels/sizzle content for luxury products
- Quick-turnaround branded motion studies
- Creative tech R&D for high-end visual prototyping
🔧 Tech Stack
Core Platform
- ComfyUI for the custom latent → CLIP → sampler graph
- OpenArt Kling 2.5 for ultra-clean animation
- ComfyUI VAE decode for final still frame output
- ChatGPT (GPT-5.1) for on-the-fly node debugging, graph validation, and prompt optimization
Models Used
- sd_xl_base_1.0.safetensors (only)
- Standard SDXL VAE
- Kling 2.5 for video generation
- ChatGPT-guided routing to maintain correct CLIP / VAE / model alignment
Libraries / Components
- ComfyUI default nodes
- CLIP Text Encode
- KSampler (Euler, 8 steps, denoise 1.0)
- VAE Encode / VAE Decode
- OpenArt’s Kling 2.5
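For readers outside ComfyUI, the graph’s sampling settings translate roughly to this diffusers sketch; it is an analogue for illustration only, since the sprint ran ComfyUI nodes, and the guidance scale here is an assumption:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Base checkpoint only, no Refiner, mirroring the final working graph.
pipe = StableDiffusionXLPipeline.from_single_file(
    "sd_xl_base_1.0.safetensors", torch_dtype=torch.float16
)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("mps")  # local MacBook; use "cuda" on NVIDIA hardware

frame = pipe(
    prompt="ultra-premium perfume bottle, cinematic golden haze, glossy glass",
    negative_prompt="low quality, artifacts, blurry",
    num_inference_steps=8,   # KSampler: Euler, 8 steps, denoise 1.0
    guidance_scale=5.0,      # assumption; the graph's CFG isn't recorded above
).images[0]
frame.save("end_frame.png")  # this still feeds Kling 2.5 End Frame Mode
```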
Hardware
- Local MacBook for ComfyUI
- OpenArt cloud compute for Kling
⚙️ Environment Fixes + Debugging Wins
This sprint included substantial pipeline debugging, each fix detailed below:
1. Missing CLIP Input Errors
Resolved repeated “required input missing” issues for conditioning nodes by fully rewiring the SDXL Base → CLIP → KSampler graph.
2. Shape Mismatch (mat1 x mat2) Errors
Eliminated dimension mismatch by:
- Removing the Refiner
- Ensuring Base-only latents matched the VAE’s expected shape
3. Checkpoint Auto-download
Triggered the successful download and mounting of sd_xl_base_1.0.safetensors after earlier filesystem issues.
4. Node Validation Errors (Node 27, 23)
Fixed downstream dependency issues by connecting all positive + negative CLIP conditioning properly.
5. First Successful Render
Produced a flawless commercial-grade image of the gold perfume bottle after final pipeline cleanup.
🧪 Sprint Outcome
✅ Fully functional SDXL Base → VAE → Kling pipeline
✅ One clean end frame exported that Kling could animate perfectly
✅ High-fidelity product lighting + bokeh retained in animation
✅ Precise brand control achieved (Cartier branding restored)
✅ Pipeline documented for future Glowlock Labs modules
🎬 Final Output
An animated Cartier-style perfume commercial featuring:
- warm spotlights
- cinematic golden haze
- glossy glass reflections
- smooth camera movement from Kling
- zero artifacting or wobble
- luxury-grade finish

ChatGPT text to image prompt: “Ultra-premium product photography of a Cartier perfume bottle on a dark, warm, cinematic background. Rich amber liquid glowing inside a crystal-clear glass bottle with beveled edges. Elegant gold cap with minimal reflections. Soft studio lighting from above and behind, creating a dramatic golden halo and subtle gradient bokeh. Hyper-realistic, glossy reflections, shallow depth of field, photorealistic macro detail, luxury editorial aesthetic, soft shadows on the surface, warm caramel and burnt-amber tones, high-end commercial lighting, professional advertisement style.”
Negative prompt: “low quality, artifacts, blurry, distorted, bad reflections, warped text, malformed glass, overexposed highlights, low-res edges”

ComfyUI workflow (screenshot)
Kling 2.5 Image/Text to Video Animation prompt: “Animated luxury perfume commercial. Slow camera push-in on a Cartier perfume bottle on a reflective golden surface. Dramatic soft spotlight beams move gently in the background, creating warm bokeh flares. Subtle shimmer particles drift through the air. The glass bottle catches light with glossy highlights and elegant reflections. Atmosphere is cinematic, smooth, premium, and elegant. No distortion. No artifacts.”

comfyUI_luxury_ad (final render)
Project Twelve
19 November 2025 (Coming Soon)

👑✨ Quick midweek creative sprint: 🏃‍♂️➡️
🛠️ I’m building a tiny generative promo inspired by my favorite Netflix show The Crown to sharpen my ComfyUI → SDXL → Kling workflow. 📺
🧠 I wanted to challenge myself with something elegant, cinematic, and ridiculously detailed — so I’m prototyping a royal micro-ad: a slow, shimmering macro shot of the Crown Jewels with soft palace lighting, velvet textures, and gold particle bloom. 💫
It’s a perfect test case for:
✅ SDXL consistency
✅ stylized prestige-drama lighting
✅ motion loops
✅ jewel reflections & depth
✅ clean technical art workflow design
Basically… the kind of mini creative asset a modern streaming ads team would need on the fly. 📺⚙️
🪭 I’ll share the full loop + ComfyUI graph once it’s polished, but wow — this one’s already looking royal.
✨ AI + motion + storytelling = my happy place. 😍

