Decart builds the foundational model for real-time, frame-by-frame generative video — perception, decision, and
action instantly visible. The company operates across the entire computational stack, from GPU assembly to world
models, and is already in production with NVIDIA and AWS.
NVIDIA · AI GRID · AWS · BEDROCK · SEQUOIA · BENCHMARK · ZEEV VENTURES
PREPARED BY Tal Nechushtan · Independent analysis · not an official Decart document
DECART · SIGNET
REAL-TIME · GENERATIVE
TEL AVIV · SAN FRANCISCO · NEW YORK · 01 / 17
DECART AI / INVESTMENT MEMORANDUM / § 02 / THESIS
REV. APR 2026 / CONFIDENTIAL / PAGE 02
//01 · THESIS · APRIL 2026
A category is being built around Decart's technical lead.
01
Only production real-time world model on the planet.
101 fps · 10 ms inference · 40 ms time-to-first-frame at 1080p. Every other world model in the category — Cosmos,
Genie 3, World Labs, AMI Labs — is pre-computed, research-stage, or non-interactive.
02
The moat sits below CUDA.
GPU-assembly mega-kernels; activations stay in on-chip SRAM. AWS confirmed 4× faster inference at half the cost on
Trainium3.
03
The signal behind the cap table is the story.
NVIDIA has put capital in and deployed Decart in production on the AI Grid and Isaac Lab. Sequoia has participated
in every round since seed. These are deployer relationships, not portfolio signals.
DECART AI · 02 / 17
//02 · CATEGORY PRIMER
// FIRST PRINCIPLES
What is a world model?
An AI system that simulates an interactive environment — with physics, memory, and
real-time response to user input — rather than rendering a fixed video clip.
VIDEO MODEL
A clip you watch.
Sora, Veo, Runway. The model writes a pre-determined sequence, frame by frame, up front. Once rendered, nothing
in the scene responds — it plays back.
WORLD MODEL
Decart (Oasis, Lucy, Mirage), DeepMind Genie 3, NVIDIA Cosmos. Frames are generated autoregressively as you act —
the environment changes because you did.
INTERACTIVE · PHYSICS + MEMORY · MS-SCALE LATENCY
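The distinction above can be sketched in code. This is a minimal, illustrative Python sketch; the function names and model signatures are hypothetical stand-ins, not Decart's API:

```python
def render_clip(model, prompt, n_frames):
    """Video model: the whole sequence is fixed up front; playback ignores the viewer."""
    return [model(prompt, t) for t in range(n_frames)]

def interactive_loop(model, prompt, get_action, n_frames):
    """World model: each frame is generated autoregressively, conditioned on the
    frames generated so far plus the action the user just took."""
    history, frames = [], []
    for _ in range(n_frames):
        action = get_action()                   # e.g. a keyboard input
        frame = model(prompt, history, action)  # next frame depends on what you did
        history.append((action, frame))
        frames.append(frame)
    return frames
```

In the first loop nothing the viewer does can change a frame; in the second, every frame is a function of the viewer's last action, which is why per-frame latency, not total render time, becomes the binding constraint.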
REAL-TIME PLAYERS · APRIL 2026
Decart · SHIPPED · REAL-TIME
Google DeepMind · GENIE 3 · PREVIEW
NVIDIA · COSMOS · PHYSICAL AI
OTHER APPROACHES · ADJACENT BETS
World Labs · Fei-Fei Li
SPATIAL INTELLIGENCE · 3D FROM TEXT/IMAGE
$1B · FEB '26
$5B POST
AMI Labs · Yann LeCun
JEPA · LEARN FROM REALITY, NOT LANGUAGE
$1.03B · JAN '26
$4.5B POST
WHY IT MATTERS
Two of the AI field's senior-most researchers — Fei-Fei Li and Yann LeCun — each raised >$1B in the past three months
to build world models. When frames become interactive, every
screen-based product becomes a candidate rewrite. Decart is the only shipped real-time entrant.
//03 · MODEL SPECS
Model, measured.
A single real-time inference stack. Four numbers that define the category — and one sample output that proves it.
THROUGHPUT
101
FPS · 1080P
INFERENCE LATENCY
10 ms
PER FRAME
AWS CONFIRMED
4×
FASTER · 0.5× GPU COST · TRAINIUM3
COST VS THE NEXT BEST
100×
CHEAPER PER HOUR OF 1080P VIDEO
LUCY 2 · SUSTAINED
$0.25 / hour — 1080p / 30 fps sustained. Available on Amazon Bedrock.
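The sticker price implies a striking per-frame cost. A quick back-of-the-envelope check, using only the figures quoted above:

```python
price_per_hour = 0.25          # Lucy 2 sustained 1080p pricing, per the deck
fps = 30                       # sustained frame rate at that price point
frames_per_hour = fps * 3600   # 108,000 frames per billed hour
cost_per_frame = price_per_hour / frames_per_hour

print(f"${cost_per_frame:.8f} per frame")  # ≈ $0.00000231, about 2.3 microdollars
```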
//04 · CAPITAL · ROUNDS
From $100M to $3.8B. Eighteen months.
Four rounds since founding. Sequoia has participated in every round; Benchmark in every round since Series A. The latest Series B is led by Radical Ventures with a major investment from NVIDIA.
§  | ROUND              | INVESTORS                              | DATE     | RAISED | POST-MONEY | LEAD
01 | Seed               | Sequoia · Zeev Ventures                | OCT 2024 | $21M   | $100M      | Sequoia Capital
02 | Series A           | Benchmark · Sequoia · Zeev             | DEC 2024 | $32M   | $500M      | Benchmark
03 | Series A · II      | Sequoia · Benchmark · Zeev · Aleph     | AUG 2025 | $100M  | $3.1B      | Sequoia Capital
04 | Series B · CURRENT | Radical · NVIDIA · Sequoia · Benchmark | APR 2026 | $300M  | $3.8B      | Radical Ventures
CAPITAL DEPLOYED
$453M total raised · < $10M of prior capital spent before Series B · NVIDIA strategic position
//05 · EFFICIENCY
// BURN & REVENUE
"We won't spend billions to make billions."
Foundational AI is synonymous with ruinous burn. Decart is running the opposite playbook — shipping category-defining
models on a fraction of the capital, while already booking multi-year hyperscaler revenue.
MONTHLY BURN
< $1M
PER MONTH · FULLY LOADED
Orders of magnitude below any foundational-model peer at this valuation tier. Proof of how efficient the stack
has made the company — the same kernel work that ships the product compresses the cost of shipping the product.
HYPERSCALER CONTRACT
MULTI-YEAR
8-FIGURE ARR · TOP-3 CLOUD
Multi-year infrastructure contract with a top-3 hyperscaler — signed revenue at eight-figure ARR, not a pilot.
Proof the stack is priced, contracted, and forecastable.
//06 · COMPANY TYPE
A world models company.
Most "AI startups" building today are application-layer — wrappers, agents, workflows. They sit on top of OpenAI, Anthropic,
Google. Decart is underneath, competing with those same labs on the
category they can't serve: real-time generative video.
APPLICATION LAYER
Wrappers on frontier labs.
Cursor, Lovable, Perplexity, Harvey. Distribution, product, integration. Moat is user data and workflow lock-in.
Capacity to train a frontier model: none.
FOUNDATIONAL LAYER
Decart, OpenAI, Anthropic, DeepMind.
Training frontier models from scratch. Hardware–model co-design. Talent pool globally: 50–100 people
with the specific combination of skills required. Decart has them.
//07 · TEAM
One of the most technical teams in AI, period.
The founders are the technical talent. GPU-assembly expertise and production-grade systems
engineering — the two disciplines that define Decart's lead — live at the top of the company.
Dean Leitersdorf
CO-FOUNDER · CEO
BSc, MSc and PhD in CS at the Technion in five years — doctorate at 23, among the youngest in Technion history.
Research: distributed computing and GPU optimization algorithms, the exact substrate of Decart's inference stack.
Moshe Shalev
CO-FOUNDER · CPO
13 years in Unit 8200. Built and ran AI operations for IDF Intelligence under Sariel. Low-level systems, HPC, and
chip-level optimization — the production-systems discipline that defines how Decart ships.
Orian Leitersdorf
CHIEF SCIENTIST
Technion PhD at 21 — broke his brother's record. Hardware accelerators specialist. Leads the kernel and
architecture work that turns assembly-level inference wins into shipped models.
Brig. Gen. Yossi Sariel
CHIEF STRATEGIST
Former commander of Unit 8200. Author of The Human Machine Team, the framework for AI-military integration
read across allied defense establishments. Joined the company in February 2026.
TALENT PIPELINE
Technion–Decart Excellence Program (endowed) · joint AI research center MoU · 8 current employees from the program ·
Unit 8200 alumni network (50% of Israeli unicorns — Wiz, Palo Alto, CyberArk, Waze)
//08 · THE INVESTORS
The investors behind the cap table.
Three institutions anchor Decart's cap table — two of the most selective venture firms in the world, plus a
strategic investor that turned its capital into a production deployment.
INVESTOR · SEED THROUGH B
Every round since seed — led the Seed
Sequoia has been in every Decart round — the strongest signal the firm can send. The same partnership pattern
behind Wiz and xAI.
4 / 4 ROUNDS · SEED LEAD
INVESTOR · SERIES A LEAD
Led Series A — follow-on through A·II and B
Benchmark led the Series A at a 5× step-up weeks after seed and has continued in every subsequent round —
Benchmark almost never participates at this tempo.
SERIES A LEAD · ACTIVE BOARD
STRATEGIC · SERIES B
Production integration, not a portfolio bet
NVIDIA is both an investor and a customer. Decart is deployed on the AI Grid and integrated into Isaac Lab —
the production stack runs on NVIDIA silicon.
PRODUCTION · AI GRID · ISAAC LAB
//09 · THE TECHNICAL MOAT
The stack is written below CUDA.
We write below CUDA, down to GPU assembly. That is how we keep latency under forty milliseconds.
— Dean Leitersdorf, CEO · AWS re:Invent 2025
01 · PROBLEM
CUDA is the industry's default GPU layer — great for productivity, but it launches every operation as a
separate kernel and round-trips activations through slow off-chip memory (HBM). Inside a 40 ms latency
budget, that overhead is the budget.
02 · MOVE
Fuse dozens of ops into a single mega-kernel — one launch, not
hundreds. Activations stay in on-chip SRAM. In the hot path, drop further to hand-written
PTX (GPU assembly). Same technique on AWS via the Neuron Kernel
Interface on Trainium.
03 · RESULT
4× faster inference at half the GPU cost — AWS-confirmed on
Trainium3. Same inference stack, seven hardware families.
The same GPU-assembly work that powers Lucy 2 applies cleanly to transformer inference.
A software-only drop-in at the CUDA layer — no retraining, no model changes, no new hardware — applied to the
workloads OpenAI, Anthropic, and Google run at hyperscale.
MECHANISM
Fused CUDA mega-kernels + hand-written PTX collapse the memory and launch overhead that dominates transformer
decoding. A validated ~20% reduction — any LLM, any modern GPU, no
model changes.
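The mechanism can be made concrete with a toy latency model. The numbers below are illustrative placeholders, not Decart measurements; the point is only that with hundreds of launches, fixed per-kernel overhead rivals the compute itself:

```python
def frame_latency_us(n_launches, launch_us, hbm_roundtrip_us, compute_us):
    # Each separate kernel launch pays a fixed launch cost plus an off-chip
    # (HBM) round-trip for its activations. A fused mega-kernel pays the
    # launch cost once and keeps intermediate activations in on-chip SRAM.
    return n_launches * (launch_us + hbm_roundtrip_us) + compute_us

# Illustrative: 400 ops, 5 us launch overhead, 10 us HBM round-trip, 4 ms of math.
unfused = frame_latency_us(400, 5, 10, 4000)  # 10,000 us = 10 ms
fused   = frame_latency_us(1,   5, 10, 4000)  #  4,015 us ≈  4 ms

print(unfused, fused)
```

Under these toy numbers, launch and memory overhead is 60% of the unfused frame time; fusion removes nearly all of it without touching the math.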
STATUS
In active conversations with multiple hyperscalers — all inbound. Decart has
not productized and is not pursuing deals. World models remain the
company's singular focus.
OPTIONALITY
Structural hedge on the thesis. If world models take longer than expected, Decart already holds a validated
wedge into a >$20B/yr inference-infrastructure market, one whose pricing power compounds with every token served.
MARKET · >$100B 2026 GLOBAL LLM INFERENCE · STATUS · NOT FOR SALE
//11 · PRODUCT FAMILY
Three shipped models. One substrate.
Each product sits on the same real-time inference stack. The model is the company's research output; the stack is the
company's commercial output. Decart has productized both.
LIVE OUTPUT
Oasis
SHIPPED · 2024
WORLD MODEL
The first real-time interactive, generative AI video model — keyboard inputs render a playable open world frame by
frame, no game engine. 10M users in 2 weeks, a faster ramp than ChatGPT's over the same window after launch.
Elon Musk publicly endorsed it; MIT Technology Review called it the future of real-time video.
"A UNIVERSE IN A TRANSFORMER"
● LIVE · REFERENCE SWAP
Lucy 2
SOTA · 2026
LIVE VIDEO MODEL
Frame-level edits to live video. Character swaps, product placement, real-time data augmentation for robotics
training. Shifts high-fidelity video editing from offline rendering to live interaction.
101 fps, 10 ms latency, $0.25/hr — 100× cheaper than the next best.
AVAILABLE ON AMAZON BEDROCK
DIFFUSION STREAM
MirageLSD
RESEARCH → PRODUCTION
LIVE-STREAM DIFFUSION
The first system to achieve infinite, real-time video generation with zero latency. Transforms any live stream —
Twitch, camera feeds, OBS sources — from a text prompt, mid-broadcast.
Sub-35 ms end-to-end at the edge.
FIRST-OF-KIND · CATEGORY DEFINING
RESEARCH VELOCITY
Latest: Adaptive-Origin Guidance (AdaOr) — continuous edit-intensity slider for text-driven video editing · ElevenLabs partnership on living characters
//12 · ROADMAP
Three stages. Each a category-defining market.
01
Each stage targets an exceptionally large market.
02
Capital-sparing; ROI-first sequencing.
03
"We won't need billions to generate billions."
STAGE 01
2024 — 2026
AI Infra Optimization
Best-in-class low-latency inference. Improved AWS Trainium chip performance >30×. Production deployment on
NVIDIA AI Grid. In revenue discussions with multiple hyperscalers.
CURRENT · AWS · NVIDIA
STAGE 02
2025 — 2027
Immersive Experiences
World models in Gaming, E-Commerce, Live-Streaming, and Advertising. Major demand surge projected in 9–12
months as inference cost crosses the $0.10/hr threshold.
GAMING · COMMERCE · LIVE · ADS
STAGE 03
2026 — 2028 +
Physical AI
World models transition from virtual to physical: Robotics, Manufacturing, Defense Simulation. Decart's
efficiency becomes the unlock for always-on, on-device simulation.
ROBOTICS · MANUFACTURING · DEFENSE
PRODUCTION PARTNERS
AWS · Amazon Bedrock
//13 · GO-TO-MARKET
Four channels. Each with demand already in place.
01 · GAMING
$250B
GLOBAL MARKET
AI-native worlds replace the engine. Unlocks when inference reaches $0.10–0.20/hr — projected Q1 2027.
02 · E-COMMERCE
Amazon
VIA BEDROCK
Virtual try-on with no 3D models or depth sensors. Webcam in, physics-consistent output in real time.
03 · LIVE STREAMING
140M MAU
TWITCH · OBS PLUGIN
Streamers transform their scene from natural language in-broadcast. Freemium, funded by infra licensing.
04 · ADVERTISING
$56B
AMAZON ADS · 2025
Dynamic product placement in live video. Physics-consistent lighting and shadows at scale.
Each channel converts an existing platform relationship — AWS, Amazon, NVIDIA — into a production deployment.
Bedrock reduces direct-sales burden on three of four simultaneously.
//14 · PHYSICAL AI
The same model is a physics simulator.
A model trained on entertainment video has learned the physical structure of the world. A single real-world demo becomes
thousands of physically consistent training examples — at costs measured in thousands, not millions.
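The data-multiplication claim reduces to a simple fan-out: one recorded demonstration, many generated conditions. A hypothetical sketch; `edit_model` stands in for a live video model such as Lucy 2 and is not a real API:

```python
def augment(demo_frames, edit_model, conditions):
    # One real demo fanned out across K generated conditions yields K
    # physically consistent variants of the same trajectory.
    return {c: [edit_model(frame, c) for frame in demo_frames] for c in conditions}

# e.g. conditions = ["night", "rain", "cluttered shelf", ...] turns a single
# warehouse run into len(conditions) distinct training episodes.
```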
INTEGRATED · Q1 2026
NVIDIA Isaac Lab
Robotics training platform
Decart inside the production platform where commercial robotics companies train policies for real warehouses,
factories, and logistics. Adoption friction removed at point of highest purchase intent.
$2.1B → $17.2B BY 2030 · 42% CAGR
Manufacturing
Industrial physical-AI platform
Generating high-diversity training scenarios for assembly-line robots in hours instead of months. VLA investment
exceeded $3.8B in 2025 — nearly triple the 2023 level.
~$3/HR FOR 1080P SIMULATION
Defense Simulation
Autonomous systems training
Contested, degraded, adversarial conditions — generated from a single recording. Institutional access through
Yossi Sariel. NVIDIA AI Grid is the same hardware layer as JADC2, ABMS, Project Convergence.
//15 · APPLICATIONS
Five markets. One stack.
The same real-time inference stack powers every vertical below.
SAME MODEL · DIFFERENT PROMPT
101 FPS · 10 MS · $0.25/HR
//16 · IN CONCLUSION
Three moats, one company.
01 · TALENT
The densest systems bench in generative video.
Ex-Meta, ex-Google Brain, ex-NVIDIA kernel authors. The people who can write Trainium and CUDA at the
register level number in the low hundreds globally — Decart employs a disproportionate slice.
02 · LATENCY
10 ms per frame. Nobody else is close.
Real-time interactive video at 101 fps is the first primitive of a new computing surface — games,
telepresence, XR, robotics all collapse into the same API once frames become interactive.
03 · OPTIMIZATION
A below-CUDA stack that compounds.
Custom kernels on Trainium3 deliver 4× faster inference at half the GPU cost — AWS-confirmed. Every
efficiency gain lowers the floor on what real-time video can cost, widening the category.
The research output is the model. The commercial output is the stack. Decart has productized both — and the
same engineering substrate
extends cleanly into transformer inference, robotics, and simulation.
READ THIS FIRST
Every pixel you're about to see is generated fresh, live, at 101 fps —
your webcam is the input, a foundation model is the output.
This is not a Snapchat filter.