New York-based Runway ML, also known as Runway, was one of the first startups to focus on high-quality, realistic generative AI video models.
But following the debut of its Gen-1 model in February 2023 and Gen-2 in June 2023, the company has seen its star dimmed by other highly realistic AI video generators, namely OpenAI's still-unreleased Sora model and Luma AI's Dream Machine, released last week.
That changes today, however, as Runway hits back in a big way in the generative AI video wars: it has announced Gen-3 Alpha, which a company blog post describes as the "first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training," and a "step towards building General World Models," or AI models capable of "representing and simulating a wide range of situations and interactions, like those encountered in the real world." See examples of videos made with Gen-3 Alpha by Runway throughout this article:
Gen-3 Alpha lets users generate detailed, highly realistic video clips up to 10 seconds long, with high prompt fidelity and a range of emotional expressions and camera movements.
No specific release date has been given for the model; so far, Runway has only shown demo videos on its website and social accounts. Nor has the company said whether the model will be free or require a paid Runway subscription (which starts at $15 per month or $144 per year).
After this article was published, VentureBeat interviewed Anastasis Germanidis, co-founder and chief technology officer of Runway, who confirmed that the new Gen-3 Alpha model would be available within "days" for paid Runway subscribers, and that the free tier was on deck to receive the model at a yet-to-be-announced time in the future.
A Runway spokesperson echoed this statement in an email to VentureBeat: "Gen-3 Alpha will be available in the coming days, available to paid Runway subscribers, our Creative Partner Program, and enterprise users."
On LinkedIn, Runway user Gabe Michael said he expected to receive access later this week:
Germanidis added that Gen-3 Alpha will power Runway's existing modes (such as text-to-video), "and some new ones which are now only possible with a more capable base model."
Germanidis also wrote that since the release of Gen-2 in 2023, Runway has learned that "video diffusion models are far from saturating the performance gains from scaling, and that these models, by learning the task of predicting video, build truly powerful representations of the visual world."
Diffusion is the process by which an AI model is trained to reconstruct visuals (still or moving) of concepts from pixelated "noise," based on learning those concepts from pairs of images or videos and their text annotations.
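To make the idea concrete, here is a minimal, illustrative sketch of the "forward" half of diffusion training, where a clean image is progressively blended into Gaussian noise so the model can learn to predict (and later remove) that noise. This is a toy example with a made-up linear schedule, not Runway's actual training code.

```python
import numpy as np

def add_noise(x0, t, num_steps=1000):
    """Forward diffusion step: blend a clean image toward pure noise.

    Uses a simple linear schedule as a stand-in for the variance
    schedules used in real diffusion models. At t=0 the image is
    untouched; at t=num_steps it is pure Gaussian noise.
    """
    alpha = 1.0 - t / num_steps          # fraction of original signal kept
    noise = np.random.randn(*x0.shape)   # the "noise" the model learns to predict
    xt = np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * noise
    return xt, noise

# During training, the model sees (noisy image, timestep, text caption)
# and is optimized to predict the noise that was mixed in.
clean = np.ones((8, 8, 3))               # toy stand-in for an image/video frame
noisy, target_noise = add_noise(clean, t=500)
```

Generation then runs this in reverse: starting from pure noise, the trained model repeatedly subtracts its predicted noise, guided by the text prompt, until a coherent image or video frame emerges.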
Runway states in its blog post that Gen-3 Alpha is "trained jointly on videos and images" and "is the result of a collaborative effort by an interdisciplinary team of researchers, engineers and artists," though the specific datasets have not been disclosed. That follows the trend of most other major AI media generators, which also decline to say precisely what data their models were trained on, and whether any of it was obtained through paid licensing deals or simply scraped from the web.
Critics argue that AI model makers should pay the original creators of their training data through licensing deals, and some have even filed copyright infringement lawsuits to that end. AI model companies, by and large, take the position that they are legally allowed to train on any publicly posted data.
Asked what training data was used for Gen-3 Alpha, the Runway spokesperson emailed VentureBeat the following response: "We have an in-house research team that oversees all of our training, and we use curated internal datasets to train our models."
Interestingly, Runway also notes that it has already "collaborated and partnered with leading entertainment and media organizations to create custom versions of Gen-3," which "allow for more stylistically controlled and consistent characters, and target specific artistic and narrative requirements, among other features."
No specific organizations are named, but the filmmakers behind acclaimed, award-winning films such as Everything Everywhere All at Once and The People's Joker have previously revealed that they used Runway to create effects for portions of their films.
Runway's Gen-3 Alpha announcement includes a form that interested organizations can fill out to apply for access to custom versions of the new model. No pricing has been released for training a custom model.
Meanwhile, it’s clear that Runway isn’t giving up the fight to become a dominant player or leader in the rapidly evolving field of AI video creation.