
openai-whisper

by @steipete
4.8(21)

Uses Whisper CLI for local speech-to-text, enabling offline speech recognition without API keys.

Tags: OpenAI Whisper, Speech-to-Text (STT), Audio Transcription, Natural Language Processing, AI Models, GitHub
Installation
npx skills add steipete/clawdis --skill openai-whisper

Before / After Comparison

Before

Relying on online APIs for speech-to-text introduces issues such as network latency, privacy concerns, and costs.

After

Using the local Whisper CLI for speech-to-text eliminates the need for API keys, keeps audio on-device, and avoids network latency and per-request costs.

SKILL.md


```yaml
---
name: openai-whisper
description: Local speech-to-text with the Whisper CLI (no API key).
homepage: https://openai.com/research/whisper
metadata:
  openclaw:
    emoji: "🎤"
    requires:
      bins: ["whisper"]
    install:
      - id: brew
        kind: brew
        formula: openai-whisper
        bins: ["whisper"]
        label: Install OpenAI Whisper (brew)
---
```

Whisper (CLI)

Use whisper to transcribe audio locally.

Quick start

  • whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
  • whisper /path/audio.m4a --task translate --output_format srt
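When scripting transcription, it can help to build the CLI invocation programmatically. The sketch below is a hypothetical Python helper (not part of this skill) that assembles a `whisper` argv list for `subprocess.run()`; only the flags shown in the examples above (`--model`, `--task`, `--output_format`, `--output_dir`) are assumed.

```python
import shlex

def whisper_cmd(audio_path, model=None, task=None,
                output_format="txt", output_dir="."):
    """Build a whisper CLI invocation as an argv list for subprocess.run()."""
    cmd = ["whisper", audio_path]
    if model:
        cmd += ["--model", model]        # e.g. "medium"
    if task:
        cmd += ["--task", task]          # e.g. "translate"
    cmd += ["--output_format", output_format, "--output_dir", output_dir]
    return cmd

print(shlex.join(whisper_cmd("/path/audio.mp3", model="medium")))
# → whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
```

Passing an argv list (rather than a shell string) avoids quoting bugs with spaces in audio file paths.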

Notes

  • Models download to ~/.cache/whisper on first run.
  • --model defaults to turbo on this install.
  • Smaller models transcribe faster; larger models are more accurate.
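The speed/accuracy trade-off can be made concrete. The parameter counts below are the approximate figures listed in the Whisper README (treat them as a rough guide), and `pick_model` is a hypothetical helper, not part of the CLI:

```python
# Approximate parameter counts (millions) per the Whisper README.
MODEL_PARAMS_M = {
    "tiny": 39, "base": 74, "small": 244,
    "medium": 769, "turbo": 809, "large": 1550,
}

def pick_model(max_params_m):
    """Return the largest model that fits a parameter budget (in millions)."""
    fitting = [m for m, p in MODEL_PARAMS_M.items() if p <= max_params_m]
    return max(fitting, key=MODEL_PARAMS_M.get) if fitting else None

print(pick_model(800))  # → medium
```

For example, a budget of 800M parameters selects `medium`, since `turbo` (≈809M) just exceeds it.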


Statistics

Installs: 646
Rating: 4.8 / 5.0
Version:
Updated: March 16, 2026
Comparisons: 1


Compatible Platforms

  • Claude Code
  • OpenClaw
  • OpenCode
  • Codex
  • Gemini CLI
  • GitHub Copilot
  • Amp
  • Kimi CLI

Timeline

Created: March 16, 2026
Last Updated: March 16, 2026