Sora 2: OpenAI's bet on AI short video

  • Sora 2 launches alongside a social app with a vertical feed and 10-second clips
  • Improvements to physics, shot consistency, and native audio generation
  • Likeness check, image usage notifications, and parental controls
  • Initial rollout on iOS for the US and Canada by invitation, free with limits and Pro option

Sora 2

OpenAI's next-generation AI video arrives with a social twist: Sora 2 not only delivers a more capable model, it also debuts an app that makes clip creation instant and, if desired, instantly shareable. The promise is simple: you record a short take to capture your voice and face, choose a scene, and the system places you inside it without losing the thread between shots.

The move continues a journey that began with ChatGPT and ran through image models: the first Sora already hinted at object permanence and a degree of visual coherence, but this leap focuses on physical fidelity and integrated sound. The goal is for mistakes to feel human rather than algorithmic, and for the clip's world to retain its state when shots are linked together.

What's new in the Sora 2 model?

OpenAI describes Sora 2 as a major advance, with more believable behavior in everyday actions and complex scenes: if a ball misses its target, it bounces where it lands rather than disappearing, and across multiple shots the state of the environment is maintained. The system also accepts richer instructions and supports short narratives with greater control over style, duration, and transitions.

Another notable addition is audio: the model generates voices, effects, and soundscapes synchronized with the image, so that dialogue fits its environment (from a busy street to a fireworks display). The result is a more complete audiovisual package that raises the bar over the previous version.

The social app: vertical feed, remixes, and cameos

Along with the model, OpenAI is launching an app designed for social creation and sharing. At its heart is a mobile-style vertical feed with clips of up to 10 seconds, like buttons, comments, and a remix option. The For You page can be tuned through natural language and prioritizes interaction with people close to you, to steer users away from endless passive consumption.

To appear in a clip, the app requires image and voice verification; from there, it allows cameos in scenes generated by you or by others. There is no uploading from your camera roll: capture happens within the app itself, making it easier to control consent and track permissions at all times.

Identity, well-being and security

Likeness verification is accompanied by notifications every time someone uses your image, even if the video remains in draft form. You can also revoke permissions and request the removal of clips in which you appear, with human moderation to prevent harassment and impersonation.

OpenAI says the experience aims to encourage creation over doomscrolling. For teenagers there are viewing and generation limits, additional controls for cameos, and parental tools via ChatGPT. The company indicates that synthetic content is marked with provenance signals and digital credentials to facilitate traceability.

Sora 2 availability, access, and pricing

The rollout starts on iOS, by invitation, in the United States and Canada, with plans to expand access to more countries later; for now, it is not available in Spain. The service launches free with limits subject to compute capacity, and there will be a higher-quality Sora 2 Pro variant for ChatGPT subscribers, plus web access and a future API.

What Sora 2 does well and what it still lacks

The examples show artistic gymnastics, somersaults, and action scenes with more consistent physics, and in animation or stop-motion the results look especially solid. Even so, flaws persist: realistic sequences can show unnatural trajectories or bounces (for example, in a beach volleyball game), and coherence may slip across consecutive shots.

Competition and market context

Sora 2 lands in an increasingly competitive landscape: Runway with Gen-4 on the creative front, Google integrating Veo 3 into YouTube, and Meta testing a feed of AI-generated short videos. TikTok, meanwhile, is tightening its rules on sensitive synthetic content. OpenAI's pivot is to bet on its own social experience built around its video engine.

Copyright and Safeguards

The company operates filters that block requests deemed potentially problematic on copyright grounds, which sometimes prevents certain clips from being generated. In parallel, OpenAI faces copyright lawsuits, with the New York Times case among the most high-profile, and defends its content-exclusion and removal mechanisms for rights holders.

What the user can do

For creators, small agencies, and individual profiles, Sora 2 opens the door to producing clips with audio and effects quickly and at low cost. Cameos and state persistence between shots make it easier to build short pieces with continuity, while remix options and the social feed invite back-and-forth collaborative formats.

With its combination of an audiovisual model and a social app, Sora 2 aims to make video generation an everyday activity, but with clear boundaries around safety, identity, and rights. The offering kicks off with 10-second clips, mandatory likeness verification, and copyright filters, with limited availability and a Pro and API path that will shape how it evolves.
