
FireRed-OpenStoryline

by @FireRedTeamv

FireRed-OpenStoryline is an AI video editing agent that transforms traditional manual editing into an intent-driven directing process through natural language interaction and LLM-driven planning. It supports transparent human-computer collaboration and provides reusable Style Skills, helping users efficiently create professional and consistent video stories.

Tags: AI · video-editing · LLM · natural-language-processing · storytelling
Installation
git clone https://github.com/FireRedTeam/FireRed-OpenStoryline.git

Before / After Comparison

Before

Traditional video editing is manual, time-consuming, and labor-intensive; it depends on professional skills and struggles to maintain a consistent style.

After

An AI agent enables intent-driven editing through natural-language interaction, efficiently producing professional, stylistically consistent videos.

SKILL.md

🤗 HuggingFace Demo | 🌐 Homepage

FireRed-OpenStoryline turns complex video creation into natural, intuitive conversations. Designed with both accessibility and enterprise-grade reliability in mind, FireRed-OpenStoryline makes video creation approachable for beginners and creative enthusiasts alike.

Derived from the saying "A single spark can start a prairie fire", the name FireRed represents our vision: to spread our SOTA capabilities—honed in real-world scenarios—like sparks across the wilderness, igniting the imagination of developers worldwide to reshape the future of AI together.

✨ Key Features

  • 🌐 Smart Media Search & Organization: Automatically searches online and downloads images and video clips that match your requirements. Performs clip segmentation and content understanding based on your thematic media.
  • ✍️ Intelligent Script Generation: Combines user themes, visual understanding, and emotion recognition to automatically construct storylines and context-aware narration. Features built-in Few-shot style transfer capabilities, allowing users to define specific copy styles (e.g., product reviews, casual vlogs) via reference text, achieving precise replication of tone, rhythm, and sentence structure.
  • 🎵 Intelligent Music, Voiceover & Font Recommendations: Supports personal playlist imports and auto-recommends BGM based on content and mood, featuring smart beat-syncing. Simply describe the desired tone—e.g., "Restrained," "Emotional," or "Documentary-style"—and the system matches suitable voiceovers and fonts to ensure a cohesive aesthetic.
  • 💬 Conversational Refinement: Rapidly cut, swap, or resequence clips. Edit scripts and fine-tune visual details—including color, font, stroke, and position. All edits are performed exclusively via natural language prompts with immediate results.
  • 📁 Editing Skill Archiving: Save your complete editing workflow as a custom Skill. Simply swap the media and apply the corresponding Skill to instantly replicate the style, enabling efficient batch creation.

NEWS

  • 🎬 2026-04-02: Added the AI Transition Generation feature, which automatically creates transition shots based on the ending frame of one clip, the opening frame of the next, and a natural-language description, making scene transitions smoother and the narrative more coherent.
  • 🚀 2026-03-22: Introduced an ASR-based rough cut skill for speech videos, enabling automatic removal of filler words, disfluencies, and repeated sentences, with timestamp-aligned segmentation for cleaner and more efficient speech editing workflows.
  • 🔥 2026-03-12: Integrated with OpenClaw, adding two OpenClaw Skills — openstoryline-install and openstoryline-use — covering the initial installation/first-run workflow and the actual usage workflow, respectively. Also added Skill usage instructions for Claude Code, making it easier for Claude Code to install and invoke the project in accordance with the repository guidelines.
  • 2026-02-10: FireRed-OpenStoryline was officially open-sourced.

🏗️ Architecture

✨ Demo

🤖 Use Through an Agent

FireRed-OpenStoryline supports usage through Agent Skills. We provide two Skills:

  • openstoryline-install: for installation, configuration, and first-run verification.
  • openstoryline-use: for starting the service and running the actual video editing workflow.

OpenClaw

Just tell OpenClaw: “I want to try OpenStoryline. Help me install the required Skills,” and it will automatically trigger the installation. If the installation runs into problems, use the following commands to install them manually:

openclaw skills install openstoryline-install
openclaw skills install openstoryline-use

If your current OpenClaw version does not support openclaw skills install, or if installation still fails, you can use ClawHub instead:

npx clawhub install openstoryline-install
npx clawhub install openstoryline-use

Once installed, you only need to send your media assets to OpenClaw, and it can help you complete the entire process from installing FireRed-OpenStoryline to generating the final video.

Claude Code

This repository comes with built-in Claude Code Skills. If you start Claude Code from the root directory of this repository, you can use the project-level Skills included in the repo directly. Claude Code can then help you install and use FireRed-OpenStoryline.

/openstoryline-install
/openstoryline-use

If you want to install these two Skills into your own global Claude Code configuration, run:

mkdir -p ~/.claude/skills
cp -R .claude/skills/openstoryline-install ~/.claude/skills/
cp -R .claude/skills/openstoryline-use ~/.claude/skills/
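If you want to confirm the copy succeeded, a quick shell check (standard POSIX commands only; it prints one status line per Skill) is:

```shell
# Check that each copied Skill directory contains its SKILL.md
for s in openstoryline-install openstoryline-use; do
  if [ -f ~/.claude/skills/"$s"/SKILL.md ]; then
    echo "$s: OK"
  else
    echo "$s: missing"
  fi
done
```

Each line should read `OK` once the corresponding Skill directory is in place.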

Other Compatible Agents (Experimental)

These Skills are based on an open Agent Skills format, so in theory they can also be installed into other compatible agents. For example, you can install them into Codex via the Skills CLI:

npx skills add FireRedTeam/FireRed-OpenStoryline --skill openstoryline-install --agent codex
npx skills add FireRedTeam/FireRed-OpenStoryline --skill openstoryline-use --agent codex

Or use the commands below with the --global flag to install these Skills into the user-level directory so they are available across projects:

npx skills add FireRedTeam/FireRed-OpenStoryline --skill openstoryline-install --global
npx skills add FireRedTeam/FireRed-OpenStoryline --skill openstoryline-use --global

📦 Install

1. Clone repository

# If git is not installed, refer to the official website for installation: https://git-scm.com/install/
# Or manually download the code
git clone https://github.com/FireRedTeam/FireRed-OpenStoryline.git
cd FireRed-OpenStoryline

2. Create a virtual environment

Install Conda following the official guide (Miniforge is recommended; during installation, it is suggested to enable the option that automatically configures environment variables): https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html

# Recommended python>=3.11
conda create -n storyline python=3.11
conda activate storyline

3. 📦 Resource Download & Installation

3.1 Automatic Installation (Linux and macOS only)

sh build_env.sh

3.2 Manual Installation

A. macOS or Linux
  • Step 1: Install wget (if not already installed)

    # MacOS: If you haven't installed Homebrew yet, please install it first: https://brew.sh/
    brew install wget
    
    # Ubuntu/Debian
    sudo apt-get install wget
    
    # CentOS
    sudo yum install wget
    
  • Step 2: Download Resources

    chmod +x download.sh
    ./download.sh
    
  • Step 3: Install Dependencies

    pip install -r requirements.txt
    
B. Windows
  • Step 1: Prepare Directory: Create a new directory named resource in the project root directory.

  • Step 2: Download and Extract:

  • Step 3: Install Dependencies:

    pip install -r requirements.txt
    

🚀 Quick Start

Note: Before starting, configure your API key in config.toml. For details, see the API-Key Configuration documentation.
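The API-Key Configuration doc is the authoritative source for the actual key names. As an illustration only, a minimal config.toml entry might look like the sketch below (the section and key names here are hypothetical placeholders, not the project's real schema):

```toml
# Hypothetical sketch — see the API-Key Configuration documentation
# for the project's actual section and key names.
[llm]
api_key  = "sk-..."                      # your model provider API key
base_url = "https://api.example.com/v1"  # hypothetical provider endpoint
model    = "your-model-name"
```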

1. Start the MCP Server

macOS or Linux

PYTHONPATH=src python -m open_storyline.mcp.server

Windows

$env:PYTHONPATH="src"; python -m open_storyline.mcp.server

2. Start the conversation interface

  • Method 1: Command Line Interface

    python cli.py
    
  • Method 2: Web Interface

    uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005
    

🐳 Docker

Pull the Image

# Pull image from Docker Hub official repository
# Recommended for users outside China
docker pull openstoryline/openstoryline:v1.0.1

# Pull image from Alibaba Cloud Container Registry
# Recommended for users in China (faster and more stable)
docker pull crpi-6knxem4w8ggpdnsn.cn-shanghai.personal.cr.aliyuncs.com/openstoryline/openstoryline:v1.0.1

Start the Container

docker run \
  -v $(pwd)/config.toml:/app/config.toml \
  -v $(pwd)/outputs:/app/outputs \
  -v $(pwd)/run.sh:/app/run.sh \
  -p 7860:7860 \
  openstoryline/openstoryline:v1.0.1

After starting, open the web interface in your browser at http://127.0.0.1:7860

📁 Project Structure

FireRed-OpenStoryline/
├── 🎯 src/open_storyline/           Core application
│   ├── mcp/                         🔌 Model Context Protocol
│   ├── nodes/                       🎬 Video processing nodes
│   ├── skills/                      🛠️ Agent skills library
│   ├── storage/                     💾 Agent Memory
│   ├── utils/                       🧰 Helper utilities
│   ├── agent.py                     🤖 Build Agent
│   └── config.py                    ⚙️ Configuration management
├── 📚 docs/                         Documentation
├── 🐳 Dockerfile                    Docker Configuration
├── 💬 prompts/                      LLM prompt templates
├── 🎨 resource/                     Static resources
│   ├── bgms/                        Background music library
│   ├── fonts/                       Font files
│   ├── script_templates/            Video script templates
│   └── unicode_emojis.json          Emoji list
├── 🔧 scripts/                      Utility scripts
├── 🌐 web/                          Web interface
├── 🚀 agent_fastapi.py              FastAPI server
├── 🖥️ cli.py                        Command-line interface
├── ⚙️ config.toml                   Main configuration file
├── 🚀 build_env.sh                  Environment Build Script
├── 📥 download.sh                   Resource downloader
├── 📦 requirements.txt              Runtime dependencies
└── ▶️ run.sh                        Launch script

📚 Documentation

📖 Tutorial Index

TODO

  • Add support for voiceover-style video editing.
  • Add support for voice cloning.
  • Add more transition, filter, and effect functions.
  • Add image/video generation and editing capabilities.
  • GPU-accelerated rendering and highlight selection.

Acknowledgements

This project is built upon the following excellent open-source projects:

Core Dependencies

  • MoviePy - Video editing library
  • FFmpeg - Multimedia framework
  • LangChain - A framework that provides pre-built Agents

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

⭐ Star History


Statistics

  • Installs: 1.7K
  • Rating: 3.5 / 5.0
  • Updated: April 7, 2026
  • Comparisons: 1


Compatible Platforms: 🔧 Manual

Timeline

  • Created: April 7, 2026
  • Last Updated: April 7, 2026