Sora, text-to-video model, OpenAI Inc., San Francisco, California, USA


Beyond Our Reality · Made by Don Allen Stevenson with Sora

Apr 3, 2024

"I’m so excited to unveil something truly groundbreaking in collaboration with OpenAI: a glimpse into the future of storytelling with Sora technology.

In this trailer, we explore a parallel world Beyond our Reality, where the boundaries of imagination are expanded, bringing to life a few creatures I have dreamed up. What you’re seeing is not traditional footage but the result of cutting-edge AI-generated video technology that blurs the lines between reality & fantasy. I tried to ground my outputs in something kind of familiar like animals, but also something that was currently impossible in biology, these hybrid creatures.

As we step into this new era, I understand the apprehension surrounding the rapid evolution of our creative industries. I really think Sora offers a different kind of visual canvas, expanding my creative possibilities and complementing my different creative crafts. I have always been a one-person creative studio, so there were inherent limits to what I could create alone. With Sora I feel I can tell stories at a scale I didn’t think was possible before.

As I continue to be an early artist, working with Sora, I promise to be mindful of its profound impact. I will continue to share knowledge about it in an educational, creative capacity.

I feel like we are unlocking a new era of creative storytelling that we have never been able to imagine collectively before! Stay curious and creative!!!" -@DonAllenIII www.instagram.com/donalleniii


No Priors Ep.61 | OpenAI's Sora Leaders Aditya Ramesh, Tim Brooks and Bill Peebles

Apr 25, 2024

AI video models are not just leveled-up image generators; they could be a big step forward on the path to AGI. This week on No Priors, the team behind Sora discusses OpenAI's recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips up to a minute long.

Sora team leads Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn't yet available for public use, but the examples of its work are very impressive. The team believes we're still in the GPT-1 era of AI video models and is focused on a slow rollout: to ensure the model is in the best place possible to offer value to users and, more importantly, that all possible safety measures are in place to guard against deepfakes and misinformation. They also discuss what they're learning from implementing diffusion transformers, why they believe video generation takes us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future. Show Notes:

0:00 Sora team Introduction
1:05 Simulating the world with Sora
2:25 Building the most valuable consumer product
5:50 Alternative use cases and simulation capabilities
8:41 Diffusion transformers explanation
10:15 Scaling laws for video
13:08 Applying end-to-end deep learning to video
15:30 Tuning the visual aesthetic of Sora
17:08 The road to “desktop Pixar” for everyone
20:12 Safety for visual models
22:34 Limitations of Sora
25:04 Learning from how Sora is learning
29:32 The biggest misconceptions about video models

Baby Alpaca · Sora Showcase


Jul 18, 2024

We’re continuing to slowly expand Sora testing to more creatives (digital VFX pioneers, architects, choreographers, engineering artists, and creative entrepreneurs) to help us understand the model's capabilities and limitations, shaping the next phase of research to create increasingly safe AI systems over time. While access remains extremely limited as safety testing is underway, we wanted to celebrate these artists’ work with their reflections on how they use Sora in their process. (The underlying visuals were generated solely with Sora without any VFX, but artists added sound and edited using traditional filmmaking software.)

Chris Kittrell, known by his stage name @babyalpaca, is a Los Angeles/Colorado-based musician who crafts AI-generated visual art and immersive environments for his music. In his music video for "Shadows," Kittrell uses Sora to create a surreal journey through a brutalist-style castle, exploring different realms of a dream world across 113 AI-prompted clips and 20 filmed overlays. “Sora allowed me to create locations and character actions to tell my narrative in ways that would have been impossible with a crew of one and a limited budget.”

Charlotte Triebus · Sora Showcase


Jul 17, 2024

@charlottetriebus2800 is a performance artist using art and technology to influence her dance and movement practice. In this project, she worked with Sora to create unexpected forms inspiring the choreography and movement for her dance collective. “I bring Sora's understanding of movement into the rehearsal, alluding to a dimension that comes from agents that have no human form. My dancers then either study and elaborate on Sora’s proposals or respond to them to create a communication between dimensions.”
 

Tim Fu · Sora Showcase


Jul 18, 2024

Tim Fu is a designer and founder of @StudioTimFu, a high-tech architectural practice pioneering computational design and AI; he was formerly at Zaha Hadid Architects. “Sora revolutionizes architecture by allowing us to vividly explore concepts, while we can build these ideas to life.” “Beyond images and videos, generative visualization serves as a design process. Spatial quality and materiality can be readily explored in unprecedented speeds, allowing architects and designers to focus on the core values of design instead of the production of visuals.”
 

Getting started with Sora

Dec 9, 2024

Sora is here at sora.com. Now you can generate entirely new videos from text, bring images to life, or extend, remix, or blend videos you already have. We’ve developed new interfaces to allow easier prompting, creative controls, and community sharing. We hope that this early version of Sora will help people explore new forms of creativity. We can’t wait to see what you create.