You are viewing a single comment's thread from:

RE: LeoThread 2024-10-29 05:12

in LeoFinance · 3 months ago

Runway's Act-One, new AI animation generator

Runway, a US-based AI research company specialising in creative software, has launched Act-One, a tool for generating character animations based on simple video and voice inputs. According to Runway, Act-One is designed to streamline animation production, offering an alternative to the typically complex and resource-intensive pipelines used in facial animation.

#runway #hollywood #ai #animation #software

Traditional animation workflows for realistic facial expressions require motion capture equipment, multiple video references, and detailed face rigging—steps that can be costly and time-consuming. Act-One bypasses these requirements by allowing users to create animated characters directly from a video and voice recording, making it feasible to produce animations with a simple camera setup, Runway says in an official blog post.

The tool supports a range of character styles, from realistic portrayals to stylised designs. Act-One translates facial expressions and subtle movements—such as micro-expressions and eye-line adjustments—from actors onto different character designs, even if the character's proportions differ from the source footage. This capability enables new options in character design without the need for motion capture, as per the company.

Introducing Act-One
A new way to generate expressive character performances using simple video inputs.

At Runway, our mission is to build expressive and controllable tools for artists that can open new avenues for creative expression. Today, we're excited to release Act-One, a new state-of-the-art tool for generating expressive character performances inside Gen-3 Alpha.
Act-One can create compelling animations using video and voice performances as inputs. It represents a significant step forward in using generative models for expressive live action and animated content.

Capturing the Essence of a Performance
Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion capture equipment, multiple footage references, manual face rigging, among other techniques. The goal is to transpose an actor's performance into a 3D model suitable for an animation pipeline. The key challenge with traditional approaches lies in preserving emotion and nuance from the reference footage into the digital character.

Our approach uses a completely different pipeline, driven directly and only by a performance of an actor and requiring no extra equipment.

Animation Mocap
Act-One can be applied to a wide variety of reference images. The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video. This versatility opens up new possibilities for inventive character design and animation.

Live Action
The model also excels in producing cinematic and realistic outputs, and is remarkably robust across camera angles while maintaining high-fidelity face animations. This capability allows creators to develop believable characters that deliver genuine emotion and expression, enhancing the viewer's connection to the content.

New Creative Avenues
We've been exploring how Act-One can allow the generation of multi-turn, expressive dialogue scenes, which were previously challenging to create with generative video models. You can now create narrative content using nothing more than a consumer-grade camera and one actor reading and performing different characters from a script.

Safety
As with all our releases, we're committed to responsible development and deployment. We're releasing this new tool with a comprehensive suite of content moderation and safety precautions, including:

Even more robust capabilities to detect and block attempts to generate content containing public figures;
Technical measures to verify that an end user has the right to use the voice that they create using Custom Voices; and
Continuous monitoring to detect and mitigate other potential misuses of our tools and platform.

Looking Ahead
We're excited to see what forms of creative storytelling Act-One brings to animation and character performance. Act-One is another step forward in our goal of bringing previously sophisticated techniques to a broader range of creators and artists.

We look forward to seeing how artists and storytellers will use Act-One to bring their visions to life in new and exciting ways.

Access to Act-One will begin gradually rolling out to users today and will soon be available to everyone.

Creating with Act-One on Gen-3 Alpha

Introduction
Gen-3 Alpha is the first of an upcoming series of models that offer improvements in fidelity, consistency, motion, and speed over previous generations of models.

Act-One allows you to bring a character image to life by uploading a driving performance to precisely influence expressions, mouth movements, and more.

In this article, driving performance refers to the video that will influence an image. Character image refers to the image that will be animated by the driving performance.

This article outlines how to use Act-One on Gen-3 Alpha, input best practices, the available settings, and more.
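
To make the pairing of the two inputs concrete, the sketch below packages a driving performance and a character image into a single request. Act-One is used through Runway's web interface, so the endpoint URL, payload field names, and auth header here are purely hypothetical assumptions for illustration, not Runway's actual API.

```python
import requests

# Hypothetical sketch only: Act-One is accessed through Runway's web interface,
# so this endpoint, the payload field names, and the auth scheme are assumptions
# made purely to illustrate how a driving performance is paired with a character
# image. Consult Runway's official documentation for any real integration.
ACT_ONE_ENDPOINT = "https://api.example-runway.dev/v1/act-one"  # not a real URL

def animate_character(driving_performance_url: str,
                      character_image_url: str,
                      api_key: str) -> dict:
    payload = {
        # Video whose facial expressions and mouth movements drive the animation.
        "driving_performance": driving_performance_url,
        # Still image of the character that will be animated.
        "character_image": character_image_url,
    }
    response = requests.post(
        ACT_ONE_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()  # e.g. a task id to poll for the rendered clip
```

The point is simply that one video and one still image are the only required creative inputs; the model handles the rest of the animation.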

Best Practices for Act-One Input
Before diving in, review these best practices to ensure that your input selections will set your generation up for success. Most output issues can be addressed by using inputs that follow these recommendations; a rough automated pre-flight check for the driving performance is sketched after the lists below.

Driving Performance:

Well-lit with defined facial features
Single face framed from around the shoulders and up
Forward-facing in the direction of the camera
Face is in frame for the entire video
Ensure the face doesn't move in and out of the frame
Clear mouth movement and expressions
Certain expressions, such as sticking out a tongue, are not supported
Minimal body movement
No face occlusions in frame
No cuts that interrupt the shot
Follows our Trust & Safety standards

Character Images:

Well-lit with defined facial features
A single face framed from around the shoulders and up
Forward-facing in the direction of the camera
Follows our Trust & Safety standards
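
As a way to apply the driving-performance checklist above before uploading, here is a minimal pre-flight check, assuming OpenCV is available. The brightness threshold and the use of a frontal-face Haar cascade are illustrative assumptions, not Runway's own validation logic; the sketch only flags clips that obviously break the single, well-lit, forward-facing face guidelines.

```python
import cv2

# Minimal pre-flight check for a driving-performance clip, based on the input
# guidelines above. The thresholds are illustrative assumptions, not Runway's
# own validation rules.
def check_driving_performance(path: str, min_brightness: float = 60.0) -> list:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(path)
    issues, frame_idx = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            issues.append(f"frame {frame_idx}: no forward-facing face detected")
        elif len(faces) > 1:
            issues.append(f"frame {frame_idx}: more than one face in frame")
        if gray.mean() < min_brightness:
            issues.append(f"frame {frame_idx}: frame may be under-lit")
        frame_idx += 1
    capture.release()
    return issues

# Example: list any frames that break the guidelines before uploading the clip.
for issue in check_driving_performance("performance.mp4"):
    print(issue)
```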

Runway Changes Animation Forever with its New Model

“I don't think text prompts are here to stay for a long time,” said Runway CEO Cristóbal Valenzuela.

Runway, an NYC-based AI video startup, announced Act-One, a new state-of-the-art tool for generating expressive character performances inside Gen-3 Alpha. Access to Act-One is currently limited. Act-One can generate compelling animations using just video and voice performances as inputs. The tool reduces the reliance on traditional motion capture systems, making it simpler to bring characters to life in production workflows. On its blog, Runway posted several videos showcasing the different styles and ways in which the tool can be used.

#generativeai #runway #technology #actone #video #ai

Simplifying Animation for Creators

Act-One simplifies animation by using a single-camera setup to capture actor performances, eliminating the need for motion capture or complex rigging. The tool preserves realistic facial expressions and adapts performances to characters of different proportions. The model delivers high-fidelity animations across various camera angles and supports both live-action and animated content. It expands creative boundaries for professionals, who need only consumer-grade equipment to produce expressive, multi-turn dialogue scenes with a single actor.

“Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion capture equipment, multiple footage references, manual face rigging, among other techniques. Our approach uses a completely different pipeline, driven directly and only by the performance of an actor and requiring no extra equipment,” per a statement on their blog.

Runway Continues its Reign in the GenAI Video Space

Last month, Runway partnered with Lionsgate to introduce AI into filmmaking. Runway aims to bring these tools to artists and, by extension, bring their stories to life. The deal could eventually open the door for many of these stories to appear on the big screen. Runway’s tools have also been employed in Hollywood before.

“I don’t think text prompts are here to stay for a long time. So a lot of our innovation has been on creating control tools,” said Runway CEO Cristóbal Valenzuela in an interview about how AI is coming to Hollywood and the need to give creators more access to and freedom over video generation.

Runway also runs an AI Film Festival dedicated to celebrating artists who incorporate emerging AI techniques in their short films. Launched two years ago, the festival aims to spark conversation about the growing influence of AI tools in the film industry and to engage with creators from diverse backgrounds, exploring their insights and perspectives.

Others in the Race

OpenAI’s flagship video model, Sora, is not publicly available yet, and the company has not shared any update on its release, though it may launch after the US elections. Genmo also unveiled a research preview of Mochi 1 – an open-source model designed to generate high-quality videos from text prompts.

Earlier this month, Meta also entered the GenAI video space with its Movie Gen. Adobe also brought generative AI to video with its Adobe Firefly. Luma’s Dream Machine was made freely available for experimentation on its website. In terms of competition from China, MiniMax officially launched its Image-to-Video feature, and Kling introduced new capabilities to its model, including a lip-sync feature.