Gen1 Runway Experiments

I gave Runway's video-to-video AI tool a try.
The tool can take an image or a text prompt and use it to change the aesthetic of an input video.
I took a short film I made many years ago for my design bachelor's degree. The piece was created with 3D software (3ds Max and After Effects), and it was purposely given a minimalistic look and feel (simple colors, no reflections, no refractions, no textures, and a low number of light bounces) because we didn't want to spend a lot of time on rendering. Ultimately, it still took around two weeks to render the whole thing.

Now that these kinds of AI tools are available to us, I wanted to see how much quick post-processing could be added without investing a ton of time re-rendering each shot.
We gave it a try with four different prompts:

  1. Make all the textures metallic with a purple tint

  2. Make this video as if it was drawn with a pencil. Only maximum 3 gray colors, and include black.

  3. Make all the materials as if they were clay

  4. Make the aesthetic like if it was made of clay
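
For reference, here is a rough sketch of how these four runs could be batched if the stylization step were scriptable. I actually ran everything through the Runway web UI, so the `stylize_video` function below is purely a hypothetical placeholder, not a real API call.

```python
# Rough batch sketch (hypothetical): the actual experiments were run through
# the Runway web UI, so stylize_video() is only a stand-in for whatever
# video-to-video call you have available.
from pathlib import Path

PROMPTS = {
    "metallic_purple": "Make all the textures metallic with a purple tint",
    "pencil_3_grays": "Make this video as if it was drawn with a pencil. "
                      "Only maximum 3 gray colors, and include black.",
    "clay_materials": "Make all the materials as if they were clay",
    "clay_aesthetic": "Make the aesthetic like if it was made of clay",
}


def stylize_video(input_path: Path, prompt: str, output_path: Path) -> None:
    """Placeholder for a real video-to-video call; here it only logs the request."""
    print(f"{input_path} + '{prompt}' -> {output_path}")


def run_experiments(input_clip: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, prompt in PROMPTS.items():
        stylize_video(input_clip, prompt, out_dir / f"{name}.mp4")


if __name__ == "__main__":
    run_experiments(Path("short_film.mp4"), Path("outputs"))
```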

The results were delivered extremely fast, in around one minute, especially compared with how long this would have taken me with the old method of post-processing in After Effects.

You can clearly distinguish each video's aesthetic. Of course, there are some artifacts here and there, but it is still an impressive result given the time spent on these little experiments.

Something that comes to mind is how to keep the aesthetic of the characters consistent across shots.
In theory, adding a description of the composition and a physical description of the character should help the model represent the characters consistently from shot to shot; however, it also makes me want to play with a character that changes throughout the film.
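
As a quick sketch of that consistency idea, the snippet below prepends one shared character and composition description to every per-shot style prompt so the model always sees the same character text; the `CHARACTER` and shot descriptions are made up for illustration.

```python
# Minimal sketch of the consistency idea: prepend one shared character and
# composition description to every per-shot style prompt, so the model sees
# the same character text in each shot. All descriptions are invented.
CHARACTER = (
    "A slender robot with a matte white shell, a round head, "
    "and a single blue eye, centered in a wide empty room"
)

SHOT_STYLES = {
    "shot_01": "drawn with a pencil, maximum 3 gray colors plus black",
    "shot_02": "all materials made of clay",
    "shot_03": "metallic textures with a purple tint",
}


def build_prompt(shot_name: str) -> str:
    """Combine the shared character description with this shot's style."""
    return f"{CHARACTER}. {SHOT_STYLES[shot_name]}"


for shot in SHOT_STYLES:
    print(shot, "->", build_prompt(shot))
```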


Input

Output