According to OpenAI, Sora is capable of creating:
Complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt but also how those things exist in the physical world.
The model can even produce a video from a single still image, and it can fill in missing frames in an existing video or extend its duration. OpenAI acknowledges that the model "may struggle with accurately simulating the physics of a complex scene," but the showcased results are quite impressive. You can check out the shared videos to see for yourself.
Prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.” pic.twitter.com/0JzpwPUGPB
— OpenAI (@OpenAI) February 15, 2024
Earlier this month, OpenAI revealed plans to add watermarks to its text-to-image tool DALL-E 3, while noting that these watermarks can easily be removed. As with its other AI products, OpenAI will have to address the potential repercussions of photorealistic AI-generated videos being mistaken for genuine content.
The company claims that it is developing tools to identify misleading content, including a detection classifier capable of recognizing videos generated by Sora.
Not long ago, the social media giant Meta enhanced its image generation model Emu with two AI-based features that can edit and create videos from text prompts. It looks like the future of AI-generated Reels and short videos is closer than we expected.