Runway Development History: From Research Lab to Gen-3 Alpha and AI Video

Runway has grown from a New York–based research lab into one of the most recognized names in AI video generation. Co-founded by Cristóbal Valenzuela, Alejandro Matamala, and Anastasis Germanidis in 2018, the company co-released Stable Diffusion, then launched a series of generative video models—Gen-1, Gen-2, and Gen-3 Alpha—that helped define text-to-video and image-to-video workflows for creatives and studios. This article traces Runway’s development from founding through Gen-3 Alpha and its ecosystem of tools.

Founding and Early Years

Runway was founded on December 1, 2018, in New York City. The founders met at NYU’s Interactive Telecommunications Program (ITP) and set out to put machine learning into the hands of creatives. The company started with a model directory and tools to deploy ML at scale in multimedia apps, then raised seed funding ($2M in 2018), Series A ($8.5M in December 2020), and Series B ($35M in December 2021).

In August 2022, Runway and Stability AI co-released Stable Diffusion, an open-source text-to-image model built on the latent diffusion research Runway had co-authored with the CompVis group at LMU Munich. That work laid the groundwork for Runway’s own generative video models and cemented its role in the broader generative AI ecosystem.

| Milestone | Date (approx.) | Description |
| --- | --- | --- |
| Founding | December 2018 | Runway founded in NYC; focus on ML for creatives |
| Stable Diffusion | August 2022 | Co-release of the open-source text-to-image model |
| Gen-1 | February 2023 | First major video model; input video + prompt → new video |
| Gen-2 | June 2023 | Text/image to video without structure conditioning; Extend up to 18 s |
| Series C / extension | Dec 2022 / June 2023 | Series C $50M; extension $141M (Google, Nvidia, Salesforce); $1.5B valuation |
| Gen-3 Alpha | June 2024 | Text/Image to Video; 5–10 s extensions; Turbo; camera controls |

Gen-1 to Gen-2: From Conditioning to Open-Ended Video

[Figure: Runway Gen-1 and Gen-2 AI video generation workflow, from structured conditioning to open-ended generation]

Gen-1 required an input video to guide structure and motion; users applied text prompts to transform or stylize that footage. It proved that diffusion-based video generation could be productized. Gen-2, announced in March 2023 and broadly released that June, was a leap: it could generate video from text or a single image, with no structure-conditioning clip at all. Development had started in September 2022, building on the success of latent diffusion in images. Gen-2 also introduced Extend, letting users lengthen a generation to as much as 18 seconds, with each extension producing new variations.
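
To make the Gen-1 versus Gen-2 contrast concrete, here is a minimal request sketch. This article does not document Runway’s actual API, so the host, endpoint, and field names below (API_BASE, /generate, input_video, prompt_text, prompt_image) are illustrative assumptions, not the real interface.

```python
# Illustrative only: hypothetical endpoint and payload fields, not Runway's real API.
import requests

API_BASE = "https://api.example-video-host.dev/v1"  # hypothetical host
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

# Gen-1 style: structure conditioning. An input video supplies motion and
# layout; the text prompt restyles that footage.
gen1_payload = {
    "model": "gen1",
    "input_video": "https://example.com/source-clip.mp4",  # required conditioning clip
    "prompt_text": "claymation style, warm studio lighting",
}

# Gen-2 style: open-ended generation. No conditioning clip; a text prompt
# (or a single reference image) is enough.
gen2_payload = {
    "model": "gen2",
    "prompt_text": "a slow dolly shot through a neon-lit alley in the rain",
    # "prompt_image": "https://example.com/reference.png",  # image-to-video variant
}

for payload in (gen1_payload, gen2_payload):
    resp = requests.post(f"{API_BASE}/generate", json=payload, headers=HEADERS)
    print(payload["model"], resp.status_code)
```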

Gen-3 Alpha and the Current Stack

Gen-3 Alpha launched in June 2024 with higher fidelity, more coherent motion, and stronger prompt adherence. Trained on both video and image data, it powers Text to Video, Image to Video, and Text to Image, plus control features like Motion Brush and Advanced Camera Controls. Gen-3 Alpha Turbo offers a faster variant for quick iteration. Extend lengthens clips by 5 or 10 seconds per step (typically up to three extensions, for roughly 40 seconds total). Expand Video reframes footage into different aspect ratios (e.g. landscape to vertical) by generating content beyond the original frame instead of cropping.
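
The duration arithmetic is easy to check. A minimal sketch, assuming a 10-second base generation (the base length is an assumption; the article only states the 5/10-second step sizes, the three-step cap, and the ~40-second ceiling):

```python
# Sketch of the Gen-3 Alpha Extend math described above. The 10 s base length
# is an assumption; step sizes and the 3-step cap come from the text.
BASE_SECONDS = 10
MAX_EXTENSIONS = 3

def total_duration(extensions: list[int]) -> int:
    """Clip length in seconds after applying 5 s / 10 s Extend steps."""
    assert len(extensions) <= MAX_EXTENSIONS, "typically capped at three extensions"
    assert all(e in (5, 10) for e in extensions), "each step adds 5 s or 10 s"
    return BASE_SECONDS + sum(extensions)

print(total_duration([10, 10, 10]))  # 40 -> the ~40 s ceiling mentioned above
```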

Product and Ecosystem

Beyond core models, Runway has built a full creative suite:

  • Text to Video / Image to Video: Generate or extend clips from prompts or reference images
  • Extend: Add 5–10s (Gen-3) or up to 18s (Gen-2) with optional new prompts
  • Expand Video: Change aspect ratio and add generated content at the edges (see the reframing sketch after this list)
  • Motion Brush & Camera Controls: Direct motion and camera behavior in Gen-3 Alpha
  • Runway Research: Papers and releases (e.g. “Scale, Speed and Stepping Stones: The path to Gen-2”) that document the science behind the product
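
As a rough illustration of the Expand Video idea from the list above: reframing by generation means finding the smallest frame at the target aspect ratio that fully contains the original, then filling the uncovered region with new content rather than cropping. The helper and resolutions below are illustrative, not Runway specifications.

```python
# Illustrative sketch of reframing-by-outpainting; example numbers, not Runway specs.
import math

def expand_target(width: int, height: int, ratio_w: int, ratio_h: int) -> tuple[int, int]:
    """Smallest frame at the target aspect ratio (rounded up) that fully
    contains the original, so the missing area is generated, never cropped."""
    needed_height = math.ceil(width * ratio_h / ratio_w)
    if needed_height >= height:
        return width, needed_height                   # keep width, grow height
    return math.ceil(height * ratio_w / ratio_h), height  # keep height, grow width

# Landscape 1920x1080 (16:9) reframed to vertical 9:16:
w, h = expand_target(1920, 1080, 9, 16)
print(w, h)                 # 1920 3414 -> rows above/below get generated
print(w * h - 1920 * 1080)  # count of newly generated pixels
```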

Runway is used in advertising, film, and social content; the company has raised hundreds of millions in funding and reached multi-billion-dollar valuations, with Gen-3 Alpha and later models at the center of its roadmap.

Summary

Runway’s path from a 2018 research-oriented startup to a leader in AI video mirrors the industry’s shift from image to video generation. Co-releasing Stable Diffusion, then shipping Gen-1, Gen-2, and Gen-3 Alpha in quick succession, Runway has made text-to-video and image-to-video practical for professionals. Extend, Expand, and camera controls continue to close the gap between AI output and traditional production workflows.

Key Takeaways

  • Runway was founded in December 2018 in NYC; co-released Stable Diffusion in August 2022.
  • Gen-1 used input video as structure conditioning; Gen-2 (June 2023) enabled text/image-to-video and Extend (up to 18s).
  • Gen-3 Alpha (June 2024) added higher fidelity, Motion Brush, camera controls, and 5–10s extensions.
  • Expand Video (Gen-3 Alpha Turbo) reframes clips to new aspect ratios by generating beyond the frame.
  • Runway Research publishes technical work; the company is a major beneficiary of AI video funding and adoption.

Try Runway on FuseAITools for text-to-video, extend, and more in one place.

Disclaimer: Release dates and features are based on public information and may change. Check Runway’s official site and FuseAITools for current capabilities.