The Design Loop

OpenAI Launches Sora 2 Video & Audio Model

Sora 2, OpenAI’s next-gen generative video + audio model, brings more physical accuracy, controllability, synchronized dialogue, and a new iOS app.

#Tools #Motion Design

OpenAI has released Sora 2, the next generation of its video + audio generation system, aiming to elevate realism, control, and expressiveness. The original Sora (from Feb 2024) proved that basic video generation could work; Sora 2 pushes boundaries in simulating physics, world continuity, and narrative fidelity.

Highlights & Capabilities:

  • More physically realistic behavior, e.g. bouncing balls with proper momentum and object permanence.
  • Enhanced controllability: it can follow multi-shot instructions and maintain consistent world state.
  • Supports mixing real and generative content: you can inject yourself or objects into scenes with accurate appearance & voice via “cameos.”
  • Generates synchronized dialogue, sound effects, and background audio, not just visuals.

To showcase Sora 2, OpenAI launched a new Sora app for iOS:

  • You can create and remix videos.
  • The rollout is invite-based, and the app includes parental controls and consent for cameos (you control who can use your likeness).
  • The app’s feed emphasizes creation over consumption, offers safety tools, and isn’t optimized for “time spent.”

Sora 2 is launching for free initially (with usage limits), with plans to expand access via web, API, and a “Pro” version for higher quality output.

SOURCE: https://openai.com/index/sora-2/
