Self-hosting guide and hardware requirements — for when weights drop.
Weights Not Yet Released
HappyHorse-1.0 claimed #1 on the Artificial Analysis leaderboard in April 2026. The team has announced plans to open-source the full model, but weights and inference code have not been published as of April 8, 2026.
Released by GAIR Lab and Sand.ai, daVinci-MagiHuman is the open-source model most closely linked to HappyHorse-1.0's architecture. It is fully available on GitHub and HuggingFace, so you can run it today while waiting for the HappyHorse-1.0 weights.
Model Size: ~30 GB (FP16); ~15 GB (FP8 quantized)
Min VRAM: 48 GB (full precision); 40 GB (FP8)
Recommended GPU: H100 80 GB or A100 80 GB
Inference Time: ~38 s for a 5 s 1080p clip on an H100
Framework: PyTorch (expected)
Distillation: DMD-2 (8-step inference)
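The weight sizes above imply a parameter count, which makes a back-of-envelope VRAM check possible. This is a rough sketch, not official tooling: the ~15B parameter figure is inferred from the ~30 GB FP16 size (2 bytes per parameter), and the flat activation overhead budget is an assumption.

```python
# Back-of-envelope VRAM check for the spec figures above.
# N_PARAMS is an assumption: ~30 GB FP16 / 2 bytes per parameter.
N_PARAMS = 15e9
BYTES_PER_PARAM = {"fp16": 2, "fp8": 1}

def weights_gb(precision: str) -> float:
    """Approximate in-VRAM weight size in GB for a given precision."""
    return N_PARAMS * BYTES_PER_PARAM[precision] / 1e9

def fits(precision: str, vram_gb: float, overhead_gb: float = 16.0) -> bool:
    """Crude fit check: weights plus a flat (assumed) activation budget."""
    return weights_gb(precision) + overhead_gb <= vram_gb

print(weights_gb("fp16"))   # ~30 GB, matching the spec above
print(fits("fp16", 80))     # H100/A100 80 GB
print(fits("fp16", 24))     # consumer 24 GB card
```

Real activation and KV-cache overhead depends on resolution and clip length, so treat the 16 GB budget as a placeholder, not a measurement.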
These providers offer H100 / A100 80 GB instances, the minimum for running HappyHorse-1.0.
RunPod
H100 / A100 available, pay-per-minute pricing.
Vast.ai
Cheapest H100 rates, large community GPU marketplace.
Lambda Labs
Dedicated GPU instances, enterprise-grade reliability.
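With pay-per-minute pricing and the ~38 s inference time from the spec above, per-clip cost is easy to estimate. The hourly rates below are hypothetical placeholders, not quoted prices from RunPod, Vast.ai, or Lambda Labs; check each provider for current H100 rates.

```python
# Rough cost-per-clip estimate from the ~38 s inference time above.
# The rates here are HYPOTHETICAL $/hour figures for an H100 80 GB,
# not quotes from any provider.
INFERENCE_SECONDS = 38  # ~38 s per 5 s 1080p clip (from the spec)
HYPOTHETICAL_RATES = {
    "provider_a": 2.50,
    "provider_b": 1.80,
}

def cost_per_clip(rate_per_hour: float, seconds: float = INFERENCE_SECONDS) -> float:
    """Dollar cost of one clip at a given hourly GPU rate."""
    return rate_per_hour / 3600 * seconds

for name, rate in HYPOTHETICAL_RATES.items():
    print(f"{name}: ${cost_per_clip(rate):.3f} per 5 s clip")
```

At a few dollars per GPU-hour, a single 5 s clip costs a few cents; the main cost driver on rented hardware is idle time, not inference itself.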