Decart & Lucy AI: Change Your Character Live — The Real-Time World Model That's Rewriting Reality
2026-03-13 | Deep Dive
Imagine you're watching a Twitch stream. The streamer looks like… themselves. Then someone types "turn into a samurai" in the chat. Instantly (not after a loading screen, not after a cut) the streamer becomes a samurai. Their movements are perfectly tracked. The armor sways with their body. The background shifts to a feudal Japanese castle. All in real-time. No post-production. No green screen.
This isn't science fiction. This is **Lucy 2.0** by Decart AI, and it's the single most impressive demonstration of real-time generative AI the world has ever seen.
In this deep dive, we'll explore the company behind it, the technology that powers it, the products that led to this moment, and why "Change Your Character Live" isn't just a party trick — it's a paradigm shift.
---
🏢 Who Is Decart AI?
The Founders: From Military Intelligence to AI Frontier
Decart was founded in **2023** by two alumni of Israel's legendary **Unit 8200**, the elite military intelligence division often compared to the NSA, which has produced founders of companies like Waze, Check Point, and CyberArk.
- **Dean Leitersdorf** (CEO & Co-Founder) holds BSc, MSc, and PhD degrees in Computer Science from the **Technion – Israel Institute of Technology**. His academic background in high-performance computing and low-level optimization would prove critical to Decart's core advantage: raw speed.
- **Moshe Shalev** (CPO & Co-Founder) brings a background as a **cyber operator**, contributing deep expertise in systems engineering and real-time processing.
The two met during their military service, and their shared obsession with making AI **truly real-time** (not "fast," not "near-real-time," but actually synchronous with human perception) became the DNA of Decart.
The Team
Decart's engineering team reads like a who's-who of Israeli tech talent. At least **13 employees** are alumni of the Technion's excellence program. The team has deep expertise in:
- **Low-level GPU programming** (writing custom CUDA kernels, not just using PyTorch)
- **High-performance computing** at the chip level
- **Multi-cloud AI training infrastructure**
- **Diffusion model optimization** for inference speed
From a team of 15 in 2023, Decart has grown to over **60 employees** across offices in **San Francisco**, **New York**, **Tel Aviv**, and **northern Israel**.
Funding: The Fastest Billion-Dollar Valuation in AI History?
Decart's fundraising trajectory has been nothing short of breathtaking:
| Round | Date | Amount | Valuation |
| :--- | :--- | :--- | :--- |
| Seed | Oct 2024 | $21 million | — |
| Series A | Dec 2024 | $32 million | — |
| Series B | Aug 2025 | $100 million | $3.1 billion |
In less than two years, Decart went from stealth to a **$3.1 billion valuation**. For context, that's faster than even OpenAI's early trajectory. Investors include **Sequoia Capital** and other top-tier firms.
The company also achieved something rare in the AI world: **profitability**. Decart generates millions in revenue by licensing its **GPU optimization technology** to other AI labs and cloud providers. This isn't just a research lab burning cash; it's a company with a working business model.
---
🕹️ The Product Journey: From Oasis to Lucy 2.0
To understand Lucy 2.0, you need to understand the products that came before it. Each one was a stepping stone — a proof of concept that pushed the boundaries of what "real-time AI generation" actually means.
1. Oasis — The First Playable AI World (October 2024)
Oasis was Decart's debut product, and it landed like a thunderbolt.
Built in collaboration with **Etched** (a custom AI chip company), Oasis was the **first-ever real-time generative AI open-world game**. Think of it as Minecraft, except every single frame is generated by an AI model, not rendered by a game engine.
How it worked:
- The model took player inputs (movement, actions) and generated the next frame of the game world in real-time
- Initial specs: **360p resolution, 20 fps** on NVIDIA H100 GPUs
- The environment responded dynamically to player actions — blocks could be placed, terrain could be modified, and the world evolved
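To make that control flow concrete, here's a minimal sketch of an action-conditioned generation loop in the spirit of Oasis. The `generate_next_frame` stub and the action encoding are illustrative assumptions, not Decart's actual interface:

```python
import numpy as np

def generate_next_frame(state, prev_frame, action):
    """Stub for the diffusion step; the real model is proprietary.

    A world model would denoise a fresh frame conditioned on its internal
    state, the previous frame, and the player's input. Here we just echo
    the previous frame so the loop is runnable.
    """
    return state, prev_frame

frame = np.zeros((360, 640, 3), dtype=np.uint8)  # blank 360p canvas
state = None
for tick in range(20 * 10):                      # ten seconds at 20 fps
    action = {"move": "forward", "place_block": tick % 40 == 0}
    state, frame = generate_next_frame(state, frame, action)
```

The key inversion versus a game engine: there is no scene graph or renderer anywhere in this loop. The model *is* the game.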
Why it mattered:
- It was the first proof that a generative model could replace a traditional game engine entirely
- It demonstrated that AI could maintain **spatial coherence** (the world didn't randomly change between frames)
- Over **1 million users** signed up within weeks of launch
Oasis was rough around the edges (360p isn't exactly 4K), but the concept was revolutionary. For the first time, a video game existed entirely inside a neural network.
2. MirageLSD — Live Stream Diffusion (July 2025)
If Oasis proved that AI could generate worlds, **MirageLSD** proved that AI could **transform** them.
MirageLSD (the "LSD" stands for
Live-Stream Diffusion) was the first AI model capable of transforming live video feeds in real-time. Not pre-recorded video. Not Zoom filters.
Live video.
Technical leap:
- **Under 40 milliseconds per frame**, fast enough that the transformation feels instantaneous to the human eye
- Could transform any live video source (webcam, phone camera, RTMP stream)
- Style transfers, environment changes, and "art mode" transformations all happened on-the-fly
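The shape of such a pipeline is easy to sketch. Below, an off-the-shelf OpenCV filter stands in for the diffusion model (MirageLSD's weights aren't public); the point is the per-frame latency budget, not the effect itself:

```python
import time
import cv2  # pip install opencv-python

def transform(frame):
    """Stand-in for the diffusion transform (MirageLSD is not public)."""
    return cv2.stylization(frame)  # any per-frame effect works here

cap = cv2.VideoCapture(0)          # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    out = transform(frame)
    latency_ms = (time.perf_counter() - start) * 1000
    # To feel instantaneous, everything above must stay under ~40 ms.
    cv2.putText(out, f"{latency_ms:.1f} ms", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("live transform", out)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Run this and watch the latency counter: most non-trivial per-frame effects blow well past 40 ms on consumer hardware, which is exactly why Decart's kernel-level optimization work matters.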
Key use cases:
- Streamers could transform their entire visual feed into an anime world
- Artists could paint with AI in real-time
- Musicians could create visual experiences that morphed with their performance
MirageLSD became available on **Crusoe Cloud** in October 2025 and quickly attracted a community of creative streamers and artists. That same month, Decart announced a collaboration with **ElevenLabs** to create "Living Characters": AI-generated avatars with both visual and vocal personality.
3. Lucy 2.0 — The Real-Time World Model (January 2026)
And then came **Lucy 2.0**.
If Oasis was a proof of concept and MirageLSD was a specialized tool, Lucy 2.0 is the **platform**. It's the culmination of everything Decart learned, packaged into the most powerful real-time generative video model ever created.
Lucy 2.0 isn't a video generator. It's a **world model**. The distinction matters.
---
🧠 What Is a "World Model" and Why Does It Matter?
A **video generator** (like Sora, Kling, or Runway) takes a prompt and produces a clip. It has a beginning, an end, and a fixed duration. Once the clip is done, the model's job is over.
A **world model** does something fundamentally different: it creates a persistent, continuous reality that responds to input in real-time. There is no "clip." There is no "end." The model runs as long as you need it to, maintaining coherence across time, space, and identity.
Think of it this way:
| Feature | Video Generator | World Model (Lucy 2.0) |
| :--- | :--- | :--- |
| Output type | Fixed-length clip (5-60 sec) | Continuous, infinite stream |
| Latency | Seconds to minutes | Under 100ms per frame |
| Interactivity | None (prompt → output) | Real-time (prompt → live edit) |
| Identity persistence | Often drifts | Maintained for hours |
| Cost model | Per-clip ($0.10-$5.00) | Per-hour (~$3/hour) |
| Input | Text/image prompt | Live video + text + images |
This table reveals why Lucy 2.0 is a fundamentally different category of technology. It doesn't compete with Sora. It competes with **reality augmentation**.
---
🔧 Lucy 2.0: The Technical Deep Dive
Architecture
Lucy 2.0 uses a **pure diffusion architecture**, but it's unlike any diffusion model you've seen before.
Key technical innovations:
1. **Autoregressive Stateful Pipeline**: Each frame is conditioned on the **entire previous state**, not just the last few frames. This is what makes identity persistence possible. The model doesn't "forget" what the character looks like after 30 seconds (see the sketch after this list).
2. **Optimized TensorRT Kernels**: Decart didn't just use off-the-shelf inference. They wrote custom TensorRT kernels that prune unnecessary convolution layers, reducing latency without sacrificing quality.
3. **No Depth Maps, No 3D Models**: Lucy 2.0 learns physics and spatial relationships **directly from video data**. It doesn't need depth sensors, LiDAR, or 3D mesh models. Point any camera at a scene, and the model understands the geometry.
4. **Smart History Augmentation**: This is Decart's proprietary technique for maintaining quality over extended periods. During training, the model is exposed to its own imperfect outputs and **penalized for quality drift**. This is why Lucy 2.0 can run for **hours** without the visual quality degrading, a problem that plagues every other video model.
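Decart hasn't published Lucy 2.0's internals, but the conditioning pattern from point 1 can be sketched. Everything below (class name, state layout, context-window size) is an assumption for illustration:

```python
from collections import deque

class StatefulVideoModel:
    """Toy skeleton of an autoregressive, stateful frame pipeline."""

    def __init__(self):
        self.session_state = {}               # persistent: identity, scene, style
        self.recent_frames = deque(maxlen=8)  # short-term motion context

    def step(self, input_frame, prompt):
        # Persistent state survives the whole session, so the character's
        # identity doesn't drift once the short frame window rolls over.
        self.session_state["prompt"] = prompt
        output = self._denoise(input_frame, self.session_state,
                               list(self.recent_frames))
        self.recent_frames.append(output)
        return output

    def _denoise(self, frame, state, context):
        return frame  # placeholder for the actual diffusion sampler
```

The design choice to split long-term session state from a short rolling frame window is what a clip generator lacks: once its fixed context scrolls past, the character is gone.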
Technical Specifications
| Spec | Value |
| :--- | :--- |
| Resolution | **1080p (Full HD)** |
| Frame Rate | **30 fps** |
| Latency | **Under 100ms per frame** |
| Max Duration | **Unlimited** (tested for multi-hour sessions) |
| Cost | **~$3/hour** for sustained generation |
| Supported Inputs | Webcam, smartphone, RTMP stream |
| Hardware | NVIDIA GPUs, AWS Trainium, Google Cloud |
| Editing Method | Text prompts, reference images, or both |
| Quality Maintenance | Smart History Augmentation (no degradation over time) |
These numbers are staggering. Generating 1080p video at 30 fps with under 100ms latency for $3/hour is approximately **100x cheaper** than what was possible two years ago.
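A quick back-of-envelope check of the per-frame economics, using only the numbers from the spec table above (the per-clip comparison point is the low end of the earlier table's $0.10-$5.00 range):

```python
fps = 30
frames_per_hour = fps * 3600          # 108,000 frames
cost_per_hour = 3.00                  # USD, from the spec table
cost_per_frame = cost_per_hour / frames_per_hour
cost_per_second = cost_per_hour / 3600

print(f"${cost_per_frame:.6f} per frame")    # ~$0.000028
print(f"${cost_per_second:.5f} per second")  # ~$0.00083
# Even a cheap clip generator at $0.10 for a 5-second clip works out
# to $0.02 per second, roughly 24x more than sustained generation here.
```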
---
🎭 "Change Your Character Live" — The Killer Feature
This is the feature that made the internet collectively lose its mind.
What It Does
"Change Your Character Live" allows a person on a live video stream to
instantly transform into any character — fictional, historical, fantastical, or otherwise — while maintaining:
- **Perfect motion tracking** (the character moves exactly as they do)
- **Facial expression mapping** (the character smiles when they smile)
- **Physical coherence** (clothing wrinkles, hair bounces, armor reflects light)
- **Environmental consistency** (the background transforms to match the character's world)
How It Works in Practice
Here's a real scenario demonstrated at **TwitchCon 2025**:
1. A Twitch streamer is sitting at their desk, talking to chat
2. A viewer types: **"Turn into a medieval knight"**
3. **Instantly**, within one frame, the streamer's appearance transforms:
- Their clothes become plate armor
- Their background becomes a castle interior
- Their face retains their expressions but takes on a more stylized look
4. They continue talking. The armor **moves with them**. When they lean forward, the chainmail shifts. When they gesture, the gauntlets follow.
5. Another viewer types: **"Now make it anime"**
6. The entire scene shifts to anime style. Same pose, same expression, completely different visual language.
7. The streamer never stops talking. There is no loading screen. There is no buffering.
This is all happening at **1080p, 30fps, with under 100ms of latency**.
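Here's a hedged sketch of how a developer might wire viewer chat to a live edit session. The `LiveEditSession` object and the `!become` command are hypothetical illustrations; Decart's actual SDK surface may look different:

```python
import re

class LiveEditSession:
    """Hypothetical handle to a running real-time edit session."""

    def set_prompt(self, prompt: str) -> None:
        # The stream itself never stops; only the conditioning prompt changes.
        print(f"[lucy] prompt -> {prompt!r}")

# A "!become <character>" chat command, mirroring the TwitchCon demo flow.
COMMAND = re.compile(r"^!become\s+(?P<character>.+)$", re.IGNORECASE)

def on_chat_message(session: LiveEditSession, user: str, text: str) -> None:
    match = COMMAND.match(text.strip())
    if match:
        character = match.group("character")
        session.set_prompt(f"turn the streamer into {character}")

session = LiveEditSession()
on_chat_message(session, "viewer42", "!become a medieval knight")
on_chat_message(session, "viewer17", "!become an anime character")
```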
Why "Change Your Character Live" Is Revolutionary
This isn't just a cool demo. It represents several fundamental breakthroughs:
1. Democratization of Visual Effects
What used to require a team of VFX artists, a green screen studio, and hours of post-production can now be done by a single person with a webcam. A bedroom streamer now has access to the same visual transformation capabilities as a Hollywood studio.
2. The End of Static Avatars
VTubers currently use pre-rigged 2D or 3D models that take weeks to commission and can only express a limited range of movements. Lucy 2.0 makes every camera feed an avatar — one that can change at any moment, with infinite variety, and with full-body motion expression.
3. Interactive Content at Scale
When viewers can change what a streamer looks like in real-time, the content becomes truly interactive. This isn't "choose your own adventure" — it's "you are the adventure." The audience becomes a co-creator.
4. Identity Without Limits
A streamer can be anyone, anywhere, at any time. They can be a dragon in a fantasy world for one segment, a news anchor in a virtual studio for the next, and then become themselves again — all within a single continuous stream.
---
🌐 Beyond Streaming: Lucy 2.0's Wider Applications
While "Change Your Character Live" is the attention-grabbing headline, Lucy 2.0's applications extend far beyond Twitch.
🛍️ E-Commerce & Virtual Try-On
Imagine shopping for clothes online. Instead of looking at a static image on a model, you see a **live video of yourself** wearing the outfit. You spin around. The fabric moves. You try a different color; it changes instantly. This is happening now with Lucy 2.0's virtual try-on capabilities.
The implications for online retail are massive. Return rates in fashion e-commerce hover around **30-40%**, largely because customers can't accurately see how clothes will look on them. Real-time virtual try-on could slash that number dramatically.
🤖 Robotics Training & Simulation
This is perhaps Lucy 2.0's most underrated application.
Robots trained in simulation often fail in the real world because simulated environments look too "clean," with perfect lighting, pristine surfaces, and no visual noise. Lucy 2.0 can take a simulated environment and **degrade it in real-time**: adding smoke, changing lighting conditions, simulating weather, introducing visual clutter.
This "sim-to-real" bridge could accelerate robotics development significantly. Instead of training robots in perfect digital worlds that don't match reality, you train them in imperfect, dynamic worlds that
do.
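A rough sketch of how that could look in a training loop. The `restyle_frame` stub stands in for the real-time model (a hypothetical integration, not Decart's documented API), and `policy`/`sim` are placeholders for your robot learning stack:

```python
import random

# Prompts describing visual "degradations" of an otherwise clean simulator.
DEGRADATIONS = [
    "dim warehouse lighting with dust in the air",
    "harsh sunset glare through the windows",
    "light fog and wet, reflective floors",
    "cluttered shelves and cardboard debris",
]

def restyle_frame(frame, prompt):
    """Stub: in practice this would stream the frame through the model."""
    return frame

def run_episode(policy, sim, prompt):
    obs, done = sim.reset(), False
    while not done:
        styled = restyle_frame(obs, prompt)  # the robot sees the degraded view
        obs, done = sim.step(policy(styled))

# Each episode trains against a differently degraded view of the same
# simulator, narrowing the sim-to-real gap:
# for _ in range(1000):
#     run_episode(policy, sim, random.choice(DEGRADATIONS))
```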
🎬 Virtual Production & Film
The "LED wall" virtual production technique (made famous by The Mandalorian) requires expensive hardware and pre-rendered backgrounds. Lucy 2.0 could potentially replace this with
live-generated backgrounds that respond to camera movement and actor positioning in real-time.
A director could say "make it rain," and it starts raining — not in post, but live on set, visible to everyone.
🎓 Education & Training
Medical students could see a patient's symptoms visualized on their own body. Military personnel could train in dynamically changing environments. Language learners could be "transported" to a virtual version of the country whose language they're studying.
📱 Live Social Media
TikTok and Instagram creators could add Hollywood-quality effects to their content without any editing skills. A creator filming on their phone could become a cartoon character, stand on the moon, or appear as a historical figure — all in real-time, all from their phone camera.
---
📊 The Lucy Product Family
Lucy 2.0 isn't just one model; it's a **family of specialized models** designed for different use cases:
| Model | Purpose | Input → Output |
| :--- | :--- | :--- |
| **Lucy Edit Live** | Real-time video editing | Live video → Modified live video |
| **Lucy (I2V)** | Image-to-video conversion | Static image → Dynamic video |
| **Lucy Edit (V2V)** | Video-to-video transformation | Recorded video → Transformed video |
| **Lucy Image (T2I)** | Text-to-image generation | Text prompt → Image |
| **Lucy Image Edit (I2I)** | Image-to-image editing | Image + prompt → Modified image |
All models share the same underlying architecture, but each is fine-tuned for its specific use case. The **Lucy Edit Live** model is the one powering the "Change Your Character Live" demos.
API Access
Decart offers a developer-friendly API platform for integrating Lucy's capabilities into third-party applications. This includes:
- **Real-time video editing endpoints** (Lucy Edit Live)
- **Batch processing endpoints** (Lucy Edit, Lucy Image)
- **RTMP stream ingestion** (for live streaming integrations)
- **Webhook support** for event-driven workflows
The API makes it possible for any developer to build Lucy-powered features into their own app — from a beauty company building a virtual try-on feature to a game studio adding dynamic character skins.
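As a hedged illustration, a batch edit call might look something like the sketch below. The host, endpoint path, and field names are hypothetical placeholders, not Decart's documented contract; consult the official API docs for the real schema:

```python
import requests

API_KEY = "YOUR_DECART_API_KEY"
BASE_URL = "https://api.decart.example/v1"  # placeholder host

def edit_video(video_url: str, prompt: str) -> dict:
    """Submit a video-to-video edit job (hypothetical request schema)."""
    response = requests.post(
        f"{BASE_URL}/lucy-edit/jobs",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "input_url": video_url,
            "prompt": prompt,
            # Event-driven workflows: a callback fires when the job finishes.
            "webhook_url": "https://myapp.example/hooks/lucy",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# job = edit_video("https://example.com/clip.mp4", "make it anime")
# print(job["id"], job["status"])
```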
---
🏗️ The Decart-Technion Partnership
In a move that underscores their commitment to Israel's AI ecosystem, Decart has established a **joint AI research center** with the Technion – Israel Institute of Technology. The Technion's excellence program has even been renamed the **Technion-Decart Excellence Program**.
This partnership focuses on:
- **Fundamental AI research** in real-time generation
- **Talent pipeline development** for the next generation of AI engineers
- **Applied research** that feeds directly into Decart's product development
Dean Leitersdorf has publicly stated that he envisions this partnership as part of making Israel "a global AI powerhouse." Given that Decart has already become one of the most valuable AI companies in the world from Israeli roots, that vision seems well on its way.
---
📈 The Competitive Landscape
Lucy 2.0 doesn't compete with traditional video generators. It sits in a class by itself: **real-time, continuous, interactive video generation**. But there are adjacent competitors to be aware of:
| Company | Product | Key Difference from Lucy 2.0 |
| :--- | :--- | :--- |
| OpenAI | Sora 2 | Clip-based (not continuous), higher latency |
| Google | Veo 2 | Research-grade, limited public access |
| Runway | Gen-4 | Focused on creative clips, not live streams |
| Minimax | Hailuo | Cost-effective clips, but no real-time mode |
| Kling | Kling 2.0 | Strong quality, but clip-based |
The fundamental difference: all of these generate **clips**. Lucy 2.0 generates **reality**. It's the difference between a photograph and a window.
---
🔮 What's Next for Decart?
Based on public statements and product trajectory, here's what we can expect:
- **4K resolution support**: Currently at 1080p, but with the Series B funding and hardware partnerships, 4K real-time generation is on the roadmap
- **Mobile apps**: iOS and Android applications are in development, which would allow anyone with a smartphone to use Lucy's transformation capabilities
- **Voice integration**: The ElevenLabs partnership suggests that "Living Characters" (AI avatars with both visual and vocal personality) are coming soon
- **Enhanced facial consistency**: Improvements to how well the model preserves identity across extreme style changes
- **Object manipulation**: More precise control over specific objects within the scene
---
💡 The Bigger Picture: Why Lucy 2.0 Matters
We're at an inflection point in the history of visual computing.
For 30 years, "real-time graphics" meant game engines: pre-built 3D models, textures, and physics simulations running on GPUs. This approach has produced incredible results (look at Unreal Engine 5), but it's fundamentally limited: every asset has to be **created by a human**.
Lucy 2.0 represents the beginning of a new era: **real-time generative graphics**. Instead of rendering pre-made assets, we're generating new visual reality from scratch, frame by frame, guided by simple text and image prompts.
The implications are staggering:
- **Games** that generate unique worlds for every player
- **Live streams** where every viewer sees a personalized version of the content
- **Education** where students experience history rather than read about it
- **Retail** where you never need to imagine how something looks on you
- **Film** where virtual production happens in real-time, not in post
We're not there yet — Lucy 2.0 is still in its early days, and the technology will improve dramatically over the coming months and years. But the direction is clear. The era of pre-rendered, static, clip-based visual content is ending. The era of live, continuous, interactive, generative visual reality is beginning.
And Decart, with Lucy 2.0 as its flagship, is leading the charge.
---
🔗 Related Resources
- Top 30 AI Models March 2026 Ranking
- Best AI Video Generator Subscriptions in BD
- AI Recap 2025: The Year in Review
- How to Pay for AI with bKash
---
Decart's Lucy 2.0 is available via API. Explore the latest AI models — including real-time video generators — on MangoMind. One subscription, every frontier model.