
How to make an entire SHORT FILM with AI

I created a nostalgic-looking short film in under 16 hours using nothing but my voice, video editing software, and six different AI tools.

But why?

The video AI company Runway ML held a contest and opened up their suite of tools to anyone who registered. I thought, hey, I'm anyone, so I signed up.

In the competition, we were given 60,000 AI minutes (300,000 credits) to create a one-to-four-minute film in 48 hours. We also had to include three elements in the film, one from each of the following three categories:

  1. Character Archetype: creature from outer space, mysterious time traveler, a solitary warrior, a glamorous performer, evil politician, eccentric artist, wronged lover, fugitive, crime boss, young rebel
  2. Scene: empty theater, subway stop, abandoned pool, empty streets of a big city at night, abandoned building, grocery store, office space filled with cubicles, highway at night, public bathroom, foggy beach
  3. Object: reflecting mirror, old photograph, burning car, old corded phone, traffic light, exit door
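For the curious: that's 10 archetypes × 10 scenes × 6 objects, or 600 possible combinations. If you ever want to enumerate a brief like this yourself (say, to feed batches of combos into ChatGPT), a quick Python sketch does it:

```python
from itertools import product

# The three contest categories, copied from the brief above.
archetypes = ["creature from outer space", "mysterious time traveler",
              "a solitary warrior", "a glamorous performer", "evil politician",
              "eccentric artist", "wronged lover", "fugitive", "crime boss",
              "young rebel"]
scenes = ["empty theater", "subway stop", "abandoned pool",
          "empty streets of a big city at night", "abandoned building",
          "grocery store", "office space filled with cubicles",
          "highway at night", "public bathroom", "foggy beach"]
objects = ["reflecting mirror", "old photograph", "burning car",
           "old corded phone", "traffic light", "exit door"]

# 10 x 10 x 6 = 600 ways to satisfy the brief.
combos = list(product(archetypes, scenes, objects))
print(len(combos))            # 600
print(" / ".join(combos[0]))  # first combination
```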

But I didn't have 48 hours like everyone else; I only had 16, so I needed to use my time wisely.

I jumped into ChatGPT and started with this prompt: “I am a filmmaker using the AI tool Runway ML to create a film for a 48-hour film festival. I am going to add in the contest requirements and you will be my writer’s assistant, helping me shape the story based on the contest prompt. Say ready when you understand and are awaiting input.”

ChatGPT came up with 15 different story combinations using the elements given by Runway, but I wasn’t sold on any of them, so I asked again. Still, nothing felt like me…

As I looked at the generations over and over again, an idea for a story popped into my head. So I fed my own idea to ChatGPT and let it do its magic, lightly structuring a story. This loose story direction would be good enough for now.
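As an aside: if you’d rather script this brainstorming step than work in the chat UI, here’s a minimal sketch using OpenAI’s Python SDK. The model name and the helper function are my own placeholders, not part of my actual contest workflow:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("You are my writer's assistant for a 48-hour AI film festival. "
          "I will give you the contest requirements; help me shape a story.")

def pitch_stories(elements: str, n: int = 15) -> str:
    # Ask for n one-line story ideas built from the required elements.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Contest elements: {elements}. "
                                        f"Pitch {n} one-line story ideas."},
        ],
    )
    return resp.choices[0].message.content

print(pitch_stories("a creature from space, a grocery store, an exit door"))
```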

The elements I chose for the story were:

1. A creature from space

2. A grocery store

3. An exit door

I then used a cinematic prompt in Midjourney to generate hundreds of stills that I could use as starting points for Runway ML. Thanks to @cyberjungle for the amazing prompt structure. Make sure to watch their video, linked in the description, if you want to learn how to generate cinematic images.

CyberJungle’s prompt structure looks like this: 

Cinematic Scene, [Subject and Action], [Year Story takes place], [Shot], [Film stock or Camera], [Director], [Lighting style] --style raw --ar 16:9

My prompt looks like this:

Cinematic scene, dark aisle of a grocery store at night, 1984, wide shot, Kodak Portra 400, Steven Spielberg, motivated lighting --style raw --ar 16:9
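If you plan to churn out a lot of variations, it can help to fill the template programmatically. A tiny sketch (the field values are just examples, not a fixed vocabulary):

```python
def mj_prompt(subject, year, shot, stock, director, lighting):
    # CyberJungle's cinematic template, filled field by field.
    return (f"Cinematic scene, {subject}, {year}, {shot}, {stock}, "
            f"{director}, {lighting} --style raw --ar 16:9")

print(mj_prompt("dark aisle of a grocery store at night", 1984,
                "wide shot", "Kodak Portra 400", "Steven Spielberg",
                "motivated lighting"))
```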

Using my adjusted prompt, I kept generating and re-prompting until I came up with a look and feel for the grocery store that I liked.

After I finalized the look and feel for the setting of the 1980s grocery store, I used Midjourney to come up with the creature design. After a few failed attempts at generating character sheets, I realized keeping a consistent creature would be very difficult, so I decided to change the story from one creature from outer space to a race of jellyfish-like blob creatures. This gave me leeway in the creature design: not every character shot would have to match perfectly, but they would still be somewhat consistent.

Once I found an image of a creature I liked, I used it as a reference in Midjourney, pasting the image before my prompts.

I kept prompting and re-prompting, using Vary (Subtle) and Vary (Strong), until I had enough images to bring the story to life in Runway Gen-2’s Image to Video.

Runway is amazing, and I didn’t have to do much except drop in the source image, play with the motion controls, and hit generate. The key to making good shots with AI is small tweaks to the settings, consistent rerolling and regeneration, and most of all patience.

After about six hours of generating hundreds of videos, I finally downloaded 47 of the most usable shots. They had pretty small file sizes, and the resolution wasn’t quite what I was used to with professional video. So I used Topaz AI to upscale the video files to 4K ProRes .mov and had the software add some film grain to take away the AI smoothness that Runway’s interpolation produces.
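I did this step in Topaz’s UI, but if you wanted to script a rough stand-in for the upscale-plus-grain pass, here’s a sketch that shells out to ffmpeg. Note this is a plain Lanczos upscale with synthetic noise, not Topaz’s ML upscaler, and the filenames are placeholders:

```python
import subprocess

def upscale_with_grain(src: str, dst: str) -> None:
    # Lanczos upscale to UHD, add temporal noise as fake film grain,
    # and encode to ProRes 422 HQ. A crude stand-in for Topaz AI.
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=3840:2160:flags=lanczos,noise=alls=12:allf=t",
        "-c:v", "prores_ks", "-profile:v", "3",  # 3 = ProRes 422 HQ
        dst,
    ], check=True)

upscale_with_grain("runway_shot_01.mp4", "runway_shot_01_4k.mov")
```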

Finally it was time for the edit, which took the least amount of time. I used Adobe Premiere Pro and my normal editing workflow, and I had the first rough cut in about an hour.

I then added some additional grain filters and film scratches to give it more of an ’80s vibe and, again, to hide some of the smoothness that current AI tools produce.

I nested my completed timeline, duplicated it, and added a fast blur effect to give the film the bloom-like feel we’re used to from cinematic lenses of the ’80s.
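For anyone curious what that bloom trick amounts to, here’s the same idea sketched on a single frame in Python with Pillow. The blur radius and blend amount are my rough approximations of Premiere’s fast blur on a duplicated layer, and the filenames are placeholders:

```python
from PIL import Image, ImageChops, ImageFilter

def bloom(src: str, dst: str, radius: int = 12, mix: float = 0.5) -> None:
    # Duplicate the frame, blur the copy, and screen it back over the
    # original: highlights glow while shadows stay put, roughly what a
    # blurred duplicate layer does in an NLE.
    frame = Image.open(src).convert("RGB")
    glow = ImageChops.screen(
        frame, frame.filter(ImageFilter.GaussianBlur(radius)))
    Image.blend(frame, glow, mix).save(dst)

bloom("frame_0001.png", "frame_0001_bloom.png")
```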

Once the visual story was locked into place, I wrote some voice-over based on the light outline I had created with ChatGPT and recorded the performance on my phone. I then used Altered.Ai to morph my voice a bit, as it makes a much better VO actor than I do. Just listen for yourself:

  1. Original VO
  2. Altered.Ai morph
  3. Final with filter

Finally, I used Adobe’s speech-to-text AI to capture the VO in text form and create the subtitles for me.
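Premiere handled this for me, but the same step is easy to script. Here’s a sketch using the open-source Whisper model as a stand-in, writing standard SRT cues (the filenames are placeholders):

```python
import whisper

def ts(seconds: float) -> str:
    # SRT timestamp format: HH:MM:SS,mmm
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def vo_to_srt(audio_path: str, srt_path: str) -> None:
    # Transcribe the voice-over and write numbered SRT subtitle cues.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    with open(srt_path, "w") as srt:
        for i, seg in enumerate(result["segments"], start=1):
            srt.write(f"{i}\n{ts(seg['start'])} --> {ts(seg['end'])}\n"
                      f"{seg['text'].strip()}\n\n")

vo_to_srt("final_vo.wav", "subtitles.srt")
```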

Sixteen hours and six AI tools later, I was finished and called it a day.

Out of hundreds of submitted films, Let Us Explore was one of eight to win an award, taking Best Character. With all the Runway credits I received as a prize, get ready for more fun films to come!

And if you haven’t watched the full film yet, click below to do so.