On March 24, 2026, OpenAI announced it is closing the Sora product. The app — web and mobile — shuts on April 26, 2026. The API follows on September 24, 2026. There is no successor. Sora will continue inside OpenAI as an internal research effort on world models, but the thing you use to make videos is going away.

At the time of writing, that is a little over two weeks away.

If you have Sora clips you care about, this article is for you. If you were thinking about using Sora for a project this month, this article is also for you. And if you are wondering which of the other video models can actually take its place — same.

First: get your work out

Before anything else, export your existing generations. You do not want to rush this on April 25.

OpenAI’s official guidance is straightforward. In the Sora app, hover over any video or image, click the three-dot menu, and select Download. You can also go to sora.chatgpt.com/exports/me to bulk-export everything you have made.

Do it this week. Downloading your work is the one thing you can only do while the app is still open to you.
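If your bulk export is large, it is worth building a quick manifest so you can confirm later that nothing came down empty or truncated. Here is a minimal Python sketch; the folder name is a placeholder for wherever your export landed:

```python
# build_manifest.py - checksum every exported Sora clip so the archive
# can be verified later. EXPORT_DIR is a placeholder; point it at the
# folder your bulk export downloaded into.
import csv
import hashlib
from pathlib import Path

EXPORT_DIR = Path("sora_exports")      # assumption: your local download folder
MANIFEST = EXPORT_DIR / "manifest.csv"

def sha256(path: Path) -> str:
    """Stream the file through SHA-256 so large videos don't fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

rows = []
for clip in sorted(EXPORT_DIR.glob("*")):
    if clip.suffix.lower() not in {".mp4", ".mov", ".webm", ".png", ".jpg"}:
        continue
    size = clip.stat().st_size
    if size == 0:
        print(f"WARNING: {clip.name} is empty, re-download it")
    rows.append({"file": clip.name, "bytes": size, "sha256": sha256(clip)})

with MANIFEST.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "bytes", "sha256"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote {len(rows)} entries to {MANIFEST}")
```

Keep the manifest somewhere separate from the clips themselves; it is the cheapest insurance that a re-download years from now is the same file you archived.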

Credits are, per OpenAI’s standard terms, non-refundable, and no special shutdown refund has been announced at the time of writing. If you are on a subscription that includes Sora access, the non-Sora parts of that subscription continue. Assume any Sora credits you hold are use-them-or-lose-them.

What Sora was actually good at

To choose a replacement well, it helps to be honest about what Sora did that made it feel different.

From OpenAI’s own language in the Sora 2 announcement, three things stood out:

Physics. Sora was unusually good at things like rebounds, splashes, cloth movement, weight. If a basketball missed a shot, it bounced off the backboard instead of clipping through it. Most early video models got this wrong. Sora got it mostly right.

Controllability across multiple shots. You could give Sora intricate instructions spanning several shots and it would persist the world state — the same room, the same light, the same objects — across them. That is the thing that made it feel more like film than flipbook.

“Realistic, cinematic, and anime styles.” OpenAI’s own words. Sora was not a specialist — it held its own across three very different visual languages.

These are the things you want to preserve when you move. Not every replacement does all three equally well.

Where to go

Five models are genuinely worth your attention. Each one has a different strength.

Kling 3.0 — the closest like-for-like

Kuaishou released Kling 3.0 on February 5, 2026. Fifteen seconds per generation. Native 4K at 60 frames a second. Integrated audio — dialogue, lipsync, and sound design generated alongside the video rather than added afterwards.

Kling is the replacement most Sora creators are actually picking. For multi-shot sequences and the physical-feeling motion Sora was known for, it is the cleanest one-to-one move.

Access it at klingai.com or through a multi-model platform like Flora Fauna, which carries Kling 3.0 Pro and Standard alongside Kling 2.6 and 2.1.

Veo 3.1 — the audio specialist

Google’s Veo 3.1 has been out since October 2025 and is the best video model for anything where audio matters. Dialogue and sound effects are generated at the same time as the image, synchronised from the start. No post-production layering. We have a full Veo workflow guide here.

Clips are four, six, or eight seconds per generation. You can extend a scene iteratively to build longer sequences, and 4K output became available in January 2026. Access it through the Gemini app, Flow (flow.google), or Flora Fauna, which carries Veo 3.1, Veo 3.1 Fast, and the cost-reduced Veo 3.1 Lite that Google released on March 31, 2026.

If your Sora work leaned on dialogue or character voices, Veo is your move.
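If you would rather script Veo than click through the Gemini app or Flow, Google exposes video generation through the google-genai Python SDK. The sketch below shows the asynchronous polling pattern as a rough guide, not a definitive recipe; the model id string is a placeholder, so check Google's current documentation for the exact name:

```python
# veo_clip.py - generate a short Veo clip via the Gemini API.
# Assumes `pip install google-genai` and a GEMINI_API_KEY in the
# environment. The model id below is a placeholder.
import time
from google import genai

client = genai.Client()  # reads the API key from the environment

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # placeholder: verify the current model id
    prompt=(
        "Wide tracking shot. A woman in a red wool coat walks through a "
        "misty pine forest at dawn. She whispers to herself, 'I remember "
        "this place.' Ambient forest soundscape."
    ),
)

# Video generation is asynchronous: poll the operation until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("forest_shot.mp4")
print("Saved forest_shot.mp4")
```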

Runway Gen-4.5 — the film-look one

Runway released Gen-4.5 on December 1, 2025. It is the model most closely associated with narrative, directed work — the look and feel of a short film you sat down to plan. Image-to-video, keyframes, video-to-video. A mature web interface at runwayml.com with monthly plans that start around $15.

Gen-4.5 added native audio in December 2025 — dialogue, sound effects, and ambient — though Veo 3.1 and Kling 3.0 still lead on lip-sync precision. If you are working on a short film, a music video, or anything where the visual language is the point, Runway is the model that most looks like it was directed.

Seedance 2.0 — via Flora, for now

For UK, European, and US creators, the practical route to Seedance 2.0 right now is through Flora Fauna, which carries both Seedance 2.0 and Seedance 2.0 Fast. Here is why that detour exists.

ByteDance released Seedance 2.0 in February 2026. On paper it is exceptional: multi-shot consistency and physically accurate human motion that put it near the top of current video benchmarks. But in mid-March 2026, ByteDance paused the global launch. CapCut integration rolled out only to select non-Western markets (Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam). Direct API access is limited to fal.ai; a sketch of that route follows below.

Flora is, at the time of writing, the cleanest way to use this model if you are outside the live regions.
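If you are a developer and want the direct API route instead, fal.ai's Python client uses a submit-and-wait queue pattern. A minimal sketch, assuming the fal-client package; the model slug and the response shape are assumptions, so take the exact id and output schema from fal.ai's model listing:

```python
# seedance_via_fal.py - call a Seedance model through fal.ai's queue API.
# Assumes `pip install fal-client` and FAL_KEY set in the environment.
# The model slug below is a placeholder; check fal.ai for the real id.
import fal_client

result = fal_client.subscribe(
    "fal-ai/bytedance/seedance/pro/text-to-video",  # placeholder slug
    arguments={
        "prompt": (
            "Wide tracking shot. A woman in a red wool coat walks through "
            "a misty pine forest at dawn, mist curling around her ankles."
        ),
    },
)

# Assumption: the finished clip comes back as a hosted URL in the result.
print(result["video"]["url"])
```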

MiniMax Hailuo 2.3 — the micro-expression specialist

Hailuo 2.3 is the outlier. Clips are capped at six seconds, and the team has been explicit that the cap is intentional: short, dense, beautifully animated moments rather than long generations. Best for close-ups, character emotion, subtle motion.

Not a Sora replacement for long-form work. But if you want the kind of tiny performative beats that a six-second clip can carry — a laugh, a glance, a breath — Hailuo 2.3 is often the best choice.

Quick recommendation table

What you were using Sora for → Move to
Multi-shot sequences → Kling 3.0
Dialogue and lipsync → Veo 3.1
Narrative, directed, film-look → Runway Gen-4.5
Character emotion, close-ups → MiniMax Hailuo 2.3
Physical realism, human motion → Seedance 2.0 (via Flora)
One subscription for all of the above → Flora Fauna

One observation before you pick: most working video creators are not using a single model. They use two or three, each for what it is best at. Kling for the establishing shots, Veo for the dialogue scene, Hailuo for the reaction. That takes longer to arrive at than “just replace Sora”, but it is where almost everyone ends up eventually.

Translating a Sora prompt

Here is a Sora-style prompt and the two small changes you make to move it to Kling or Veo.

Original Sora prompt:

Wide shot of a woman in a red coat walking through a misty pine forest at dawn. Soft mist curls around her ankles. Light filters through the canopy in long beams. She pauses, looks up, and smiles faintly.

For Kling 3.0 — keep the camera language, keep the action, add an audio line at the end:

Wide tracking shot. A woman in a red wool coat walks through a misty pine forest at dawn. Soft mist curls around her ankles. Shafts of warm light filter through the canopy. She pauses, looks up, and smiles faintly. Ambient forest sounds — distant birdsong, soft wind through the pines, her footsteps on damp earth.

For Veo 3.1 — same shape, but add a line of dialogue, because that is what Veo rewards:

Wide tracking shot. A woman in a red wool coat walks through a misty pine forest at dawn. She pauses and looks up at shafts of warm light filtering through the canopy, then smiles faintly. She whispers to herself, “I remember this place.” Ambient forest soundscape — birdsong, soft wind, her footsteps muffled on moss.

Notice the change: Sora did not need the whispered line; Veo rewards it. It is the one place where the target model changes what you give it, because audio generation is where Veo actually gets to show off. What Sora taught you about directing a shot still applies. You are just adding a voice to it.
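If you have a backlog of Sora prompts to port rather than just one, the two edits above are mechanical enough to script. A small illustrative sketch; the helper function and its default suffixes are mine, not a syntax either model requires:

```python
# port_prompt.py - apply the two small edits described above to a batch
# of Sora-era prompts. The suffixes are illustrative defaults, not a
# required syntax for either model.
def port_prompt(sora_prompt: str, target: str,
                ambient: str = "Ambient sound matched to the scene.",
                dialogue: str | None = None) -> str:
    """Return a Kling- or Veo-flavoured version of a Sora prompt."""
    prompt = sora_prompt.strip().rstrip(".") + "."
    if target == "kling":
        # Kling: keep camera language and action, append an audio line.
        return f"{prompt} {ambient}"
    if target == "veo":
        # Veo: same shape, plus a spoken line, because Veo rewards dialogue.
        spoken = dialogue or "She murmurs something under her breath."
        return f"{prompt} {spoken} {ambient}"
    raise ValueError(f"Unknown target: {target!r}")


base = ("Wide shot of a woman in a red coat walking through a misty pine "
        "forest at dawn. She pauses, looks up, and smiles faintly")

print(port_prompt(base, "kling",
                  ambient="Distant birdsong, soft wind through the pines."))
print(port_prompt(base, "veo",
                  dialogue='She whispers, "I remember this place."',
                  ambient="Birdsong, soft wind, footsteps muffled on moss."))
```

Treat the output as a starting point: the camera and action language still needs a human pass per model, exactly as the hand-written examples above show.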

A note on timing

Most of the short-term pain of this shutdown is avoidable if you move this week. Export your Sora work now. Pick one replacement model — Kling 3.0 if you want one answer — and spend an afternoon prompting it until you have a feel for how it differs from Sora. By the time April 26 arrives, the migration will already be behind you.

If you subscribe to a multi-model platform instead, most of the transition happens without you having to think about it. Flora Fauna already carries every model mentioned in this article — Sora included, though that listing will disappear on or around April 26.

Sora was a good piece of software. It is a reasonable thing to be a little sad about. But the video model landscape is, right now, the strongest it has ever been — and the things Sora was best at are things several other models have quietly caught up on. You will be fine.


Art & Algorithms publishes guides, tutorials, and prompt packs at the intersection of art and code. Subscribe for the full archive.