Cinematic Prompt Library

Professional AI video generation prompts. Curated for filmmakers, creators, and visionaries.

How to Use AI Video Generation Prompts

Crafting the perfect text-to-video prompt is an art. Here's a step-by-step guide to get stunning results from Runway ML, Pika Labs, Kling AI, and Stable Video Diffusion.

01

Be Specific About Visuals

Describe exact colours, lighting, textures, and composition. Instead of "a forest", write "an ancient redwood forest at golden hour, shafts of amber light filtering through dense canopy, ground fog at knee level".

02

Describe Camera Movement

AI video models respond to cinematographic language. Use terms like "slow dolly forward", "orbit shot", "handheld tracking", or "top-down crane view" to define motion. This transforms static images into cinematic scenes.

03

Set the Mood & Style

Reference cinematic benchmarks: "film noir shadows", "Studio Ghibli warmth", "hyper-realistic 8K", or "dreamlike watercolour animation". Mood keywords dramatically influence the AI's output style.

04

Specify Duration & Pacing

Guide the tempo with phrases like "slow-motion water droplets", "time-lapse sunrise over 10 seconds", or "rapid montage of urban life". Pacing affects how the AI renders motion between frames.

05

Layer Atmospheric Details

Add environmental richness with weather and atmospheric detail: "rain-soaked cobblestones reflecting neon signs", "dry heat shimmer over the Sahara", or "aurora borealis dancing above frozen tundra".

06

Iterate and Refine

The best prompts are refined through iteration. Copy a prompt from our gallery, test it in your AI tool, then adjust specific elements. Small changes in wording can produce dramatically different results.
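The six steps above can be treated as layers of a single prompt, which makes iteration systematic: keep most layers fixed and vary one at a time. A minimal sketch of this idea (the function name, layer names, and ordering are illustrative assumptions, not any tool's API):

```python
def build_prompt(subject, camera="", mood="", pacing="", atmosphere=""):
    """Join the prompt layers into one comma-separated prompt,
    skipping any layer left empty. Illustrative helper only."""
    layers = [subject, camera, mood, pacing, atmosphere]
    return ", ".join(layer for layer in layers if layer)

# Base prompt built from the layers described in steps 1-5.
base = build_prompt(
    subject="an ancient redwood forest at golden hour, ground fog at knee level",
    camera="slow dolly forward",
    mood="dreamlike watercolour animation",
    pacing="slow-motion drifting light",
    atmosphere="shafts of amber light filtering through dense canopy",
)

# Step 6 (iterate): change exactly one layer and compare the results.
variant = build_prompt(
    subject="an ancient redwood forest at golden hour, ground fog at knee level",
    camera="orbit shot",  # only the camera layer changed
    mood="dreamlike watercolour animation",
    pacing="slow-motion drifting light",
    atmosphere="shafts of amber light filtering through dense canopy",
)

print(base)
print(variant)
```

Varying a single layer per test run makes it clear which wording change caused a difference in the generated video.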

Compatible AI Video Tools

  • Runway ML Gen-3 Alpha — Industry-leading photorealistic video generation with advanced motion control.
  • Pika Labs — Fast generation with excellent style control; supports reference images.
  • Kling AI — Exceptional at long-form video (up to 2 minutes) with coherent motion.
  • Stable Video Diffusion — Open-source model ideal for fine-tuning and custom workflows.
  • Sora (OpenAI) — Advanced spatial reasoning and multi-scene storytelling capabilities.