When Wisp took the Coachella stage for her debut, the visuals needed to match the energy: dreamy, distorted, shoegaze-inflected. Kaiber Labs was brought on to produce the full visual set using Superstudio's generative tools.
We trained a custom LoRA specifically for Wisp, designed to render painterly textures, rich atmospheric depth, and a gleaming protagonist figure. The visual language started with moody, distortion-heavy imagery and evolved into ethereal scenes: fog, phantom figures blooming and dissolving, and slow-motion falls back into haze between the band's more explosive moments.
The animations were tuned to pulse with the music's emotional arc, then settle into ambient texture during quieter passages. The final output spanned both Coachella weekends, projected across the festival's massive stage screens.
I began working as Kaiber’s Product Manager between the previous Labs project and Wisp, which meant I was closer to the inner workings of the canvas and had direct control over implementing our learnings from this project.
Kaiber’s open canvas tool had matured since the previous applied research project. It was now possible to run a full production entirely within the tool, allowing for deeper integrations and stress testing of high-volume creative workflows. Every workflow bottleneck, every missing feature, every moment where the tool got in the way of the creative process became a direct input into the product roadmap. This project, along with the previous Yaeji and Grimes collaborations, helped refine the feedback loop between Labs and the product team that shaped how Superstudio evolved.
The maturity of the tool allowed us to greatly refine the custom canvas process. Despite creating the production entirely in the canvas product, simplifying and productizing workflows still presented a challenge. We approached this by asking: what can a user learn about the product by using this canvas? We separated our key insights into sections on the canvas, each intended to teach the user a core concept.
- Building a custom model. After introducing LoRA training to the canvas, we noticed that only advanced users were taking full advantage of the functionality, despite support messages suggesting new users needed it as well. We used the Wisp canvas as a way to teach users not only the process of creating a LoRA on the canvas, but also how these models work in the context of an IRL production.
- Keyframing key moments. Using the trained LoRA, a user could generate a large amount of imagery in a consistent style. However, we knew that users had trouble translating high-volume imagery into cohesive creative output. By framing the process as keyframe generation and suggesting an organizational framework on the canvas, we could introduce the concept of a step-by-step workflow to users having trouble activating.
- Custom model to motion. We continue to stress workflow development while introducing video generation. The user has now seen imagery train a LoRA, keyframes generate from that LoRA, and videos generate from those keyframes. Aside from showing the user a clear path through a complex feature set, we also introduce a way to avoid a common user pitfall: prompting. The LoRA in the pipeline ensures the stylistic integrity of the initial keyframe, allowing the user to focus entirely on directing the shot through a more focused prompt.
- The final product. A simple section showing a curated selection of video outputs introduces the idea of the canvas as a curation and presentation tool for output management. Extremely simple, but effective as the step that completes the full workflow.
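The step-by-step workflow the sections above teach can be sketched as a small pipeline. This is purely illustrative: none of these names come from Kaiber's actual API, and the stages are modeled as plain data structures rather than real generation calls.

```python
from dataclasses import dataclass

# Hypothetical sketch of the canvas workflow: train a custom model,
# generate keyframes from it, then animate those keyframes.
# All names here are illustrative, not Kaiber's real API.

@dataclass
class LoRA:
    name: str
    training_images: list

@dataclass
class Keyframe:
    lora: LoRA
    shot_prompt: str  # direction only; style is carried by the LoRA

@dataclass
class VideoClip:
    keyframe: Keyframe

def train_lora(name, images):
    # Stage 1: a custom model locks in the artist's visual style.
    return LoRA(name=name, training_images=list(images))

def generate_keyframes(lora, shot_prompts):
    # Stage 2: keyframes inherit style from the LoRA, so prompts stay
    # focused on shot direction rather than style keywords.
    return [Keyframe(lora=lora, shot_prompt=p) for p in shot_prompts]

def animate(keyframes):
    # Stage 3: each keyframe seeds one video generation.
    return [VideoClip(keyframe=k) for k in keyframes]

lora = train_lora("wisp-style", ["ref_01.png", "ref_02.png"])
frames = generate_keyframes(
    lora, ["figure dissolving into fog", "slow fall back into haze"]
)
clips = animate(frames)
```

The point of the structure is the one the canvas makes: style lives in the model, so the per-shot prompt only has to describe the shot.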
The visibility of Coachella allowed Wisp’s fanbase to connect with her show through the canvas almost immediately after seeing it. The canvas itself drew solid user numbers, and it converted to purchase events at a higher rate than the vanilla product onboarding did at the time.

