Introduction to AI Filmmaking
I love films. I love Old Hollywood. I love animation — and stories with heroes. I randomly decided to enter a film festival after animating a few of my AI-generated images. With no story or concept in mind, I sat down at my laptop with three weeks to create something. The brief was simple: produce a film that was one to ten minutes in length and incorporated AI in a meaningful way.
The Creation Process
Initially, I felt like a one-minute film was all I could manage. To my surprise, I ended up with a 6-minute, 11-second film, complete with a full soundtrack. It’s a testament to how far AI, and access to AI, have come. Keep in mind, while I call myself an AI storyteller, I have no formal training in the art of film. I’m actually a scientist and an AI professional.
Setting the Scene
The story takes place entirely outdoors and is centered on a river. Water is the recurring visual anchor — from the calm surface of the river to stormy ocean waves. I aimed to capture both fantasy and realism, using AI-generated images to set the visual tone and animation to bring the elements to life.
Image Creation
I used Google ImageFX for image generation. I created thousands of images by repeating, refining, and adjusting prompts until I found the ones that felt right. It was a process of iteration and intuition. In the end, I selected 31 final images to build the film. Every image was custom prompted with no presets or shortcuts. I focused on composition, wardrobe, and tone. I wanted scenes that looked lived in, not overly stylized.
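To give a sense of what that iteration looked like, here is a minimal sketch of how prompt variants could be organized in Python. ImageFX is a browser tool, so each variant still gets pasted in by hand; the subject, lighting, and style fragments below are illustrative stand-ins, not my actual prompts.

```python
from itertools import product

# Illustrative fragments only; not the actual prompts used for the film.
subjects = ["a woman standing at the edge of a river"]
lighting = ["soft overcast light", "golden-hour light", "storm light"]
styles = [
    "photorealistic, lived-in, muted palette",
    "cinematic wide shot, natural fabric textures",
]

# Enumerate every combination so each variant can be pasted into ImageFX,
# reviewed, and kept or discarded.
for i, (subject, light, style) in enumerate(product(subjects, lighting, styles), start=1):
    print(f"v{i:03d}: {subject}, {light}, {style}")
```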
Character Design
I designed characters by prompting specific details related to wardrobe, fabric type, color palette, and environmental consistency. Each final character was the result of multiple iterations. I revised prompts until textures (like folds in fabric), skin tone realism, and lighting felt accurate and believable within the context of the environment. One of the characters emerged as the visual anchor of the story — a woman featured repeatedly throughout the film.
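To make that concrete, here is a hypothetical template in the same spirit. The wording is illustrative rather than lifted from my actual prompts, but it shows how wardrobe, fabric, palette, and environment can be held fixed while only the action changes per scene.

```python
# Hypothetical template, not the film's actual prompts: lock wardrobe,
# fabric, palette, and environment so the character stays consistent.
CHARACTER = (
    "a woman in her thirties, long dark braided hair, "
    "ankle-length linen dress in faded indigo with visible folds, "
    "coarse woven fabric, no jewelry"
)
ENVIRONMENT = "beside a calm river at dusk, soft diffuse light, muted earth tones"

def scene_prompt(action: str) -> str:
    """Combine the fixed character and setting with a per-scene action."""
    return f"{CHARACTER}, {action}, {ENVIRONMENT}"

print(scene_prompt("kneeling to touch the water"))
print(scene_prompt("watching storm clouds gather over the water"))
```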
Animation
Once I had the images, I animated them using Runway Gen-3 Alpha. My options were to generate either 5- or 10-second videos. I chose 10-second videos to give myself flexibility in editing. I trimmed some of them to fit pacing and transitions. I kept motion prompts simple and clear: water, wind, waves, storms, and cloth movement. The goal was fluidity, not chaos.
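If you wanted to script that trimming step rather than do it in an editor, a minimal sketch with ffmpeg called from Python looks like this (it assumes ffmpeg is installed; the filenames and timings are placeholders):

```python
import subprocess

def trim(src: str, dst: str, start: float, duration: float) -> None:
    """Cut a segment from a clip without re-encoding (-c copy).

    Seeking before -i snaps to the nearest keyframe, which is close
    enough for rough pacing cuts.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", src,
         "-t", str(duration), "-c", "copy", dst],
        check=True,
    )

# e.g. keep seconds 2-8 of a 10-second Runway render
trim("river_shot_10s.mp4", "river_shot_trimmed.mp4", start=2.0, duration=6.0)
```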
Editing and Sound
I used CapCut Pro to arrange and finalize the video. I added the 31 final animations from Runway and ordered them according to my storyboard. To emphasize certain scenes, I slowed the video so the character had more screen time. I layered ambient sound (rain, rivers, ocean) with instrumental tracks and natural environmental sounds, like birdsong. There was no dialogue. That was an intentional choice: I wanted the river to “sing” as the main supporting character of the film.
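The slow-motion emphasis is also easy to reproduce outside an editor. This minimal sketch halves a clip’s speed by doubling its frame timestamps with ffmpeg’s setpts filter; the filename is a placeholder.

```python
import subprocess

# Double every frame's presentation timestamp: half speed, twice the
# screen time. Audio is dropped (-an) because the ambient sound was
# layered separately in the edit.
subprocess.run(
    ["ffmpeg", "-y", "-i", "hero_shot.mp4",
     "-filter:v", "setpts=2.0*PTS", "-an", "hero_shot_slow.mp4"],
    check=True,
)
```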
Tools and Challenges
The tools I used included Google ImageFX, Runway Gen-3 Alpha, CapCut Pro, and royalty-free audio libraries. The challenges I faced included time constraints, image selection, character consistency, and prompt sensitivity. I had just three weeks to go from idea to execution, and narrowing down to 31 usable images took constant testing and refinement.
Final Output
The final output was a 6-minute, 11-second film at 1080p resolution, with a file size of over 500 MB. I submitted it to Runway’s 2025 AI Film Festival.
What I Learned
This project taught me how powerful AI can be when paired with clear creative direction. I learned to prompt with intention, think visually and structurally, work within limitations, build consistency across AI-generated outputs, and trust my instincts.
Authorship and AI
The question of authorship lingers over every project created with generative AI. I used AI to generate images and animation, but I also spent hours crafting prompts, refining visual details, building continuity, and shaping the emotional tone of the film. So, what does authorship mean in this context? Maybe the better question is: If I stole something, why does it look so much like me?
Conclusion
I was already dazzled by the progress and capabilities of AI image generation tools, but the animation tools have deepened my appreciation and hope for the future. What I experienced making this film — and what we are now experiencing culturally — is the beginning of a new era in film and media. A new generation of storytellers, many without formal artistic training, will be able to tell stories we haven’t heard before, or reimagine familiar ones with fresh perspectives.
FAQs
- Q: What tools were used to create the film?
  A: Google ImageFX, Runway Gen-3 Alpha, CapCut Pro, and royalty-free audio libraries.
- Q: How long did it take to create the film?
  A: Three weeks.
- Q: What were the biggest challenges faced during the creation process?
  A: Time constraints, image selection, character consistency, and prompt sensitivity.
- Q: What did the author learn from the project?
  A: The author learned to prompt with intention, think visually and structurally, work within limitations, build consistency across AI-generated outputs, and trust their instincts.
- Q: What is the author’s background?
  A: The author, Sophia Banton, is an Associate Director and AI Solution Lead in biopharma, specializing in Responsible AI governance, workplace AI adoption, and strategic integration across IT and business functions.