
podcast-generation

by @sickn33v
4.7(22)

Generate AI-driven podcast-style audio narratives using Azure OpenAI's GPT technology.

AI Audio Generation · Podcast Production · Azure AI · Text-to-Speech · Audio Narration · GitHub
Installation
npx skills add sickn33/antigravity-awesome-skills --skill podcast-generation

Before / After Comparison

Before

Before using AI podcast generation skills, producing podcasts required a significant amount of time for scriptwriting, manual recording, post-production editing, and sound effect processing. The entire process was time-consuming and costly, limiting the frequency of content updates.

After

After using AI podcast generation skills, we only need to provide text content, and the system can quickly generate high-quality, natural, and fluent podcast-style audio narratives. This greatly shortens the production cycle, reduces costs, enables us to publish new content at a higher frequency, and enhances user engagement.

SKILL.md


name: podcast-generation
description: "Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, audio narrative generation, podcast creatio..."
risk: unknown
source: community
date_added: "2026-02-27"

Podcast Generation with GPT Realtime Mini

Generate real audio narratives from text content using Azure OpenAI's Realtime API.

Quick Start

  1. Configure environment variables for Realtime API
  2. Connect via WebSocket to Azure OpenAI Realtime endpoint
  3. Send text prompt, collect PCM audio chunks + transcript
  4. Convert PCM to WAV format
  5. Return base64-encoded audio to frontend for playback

Environment Configuration

AZURE_OPENAI_AUDIO_API_KEY=your_realtime_api_key
AZURE_OPENAI_AUDIO_ENDPOINT=https://your-resource.cognitiveservices.azure.com
AZURE_OPENAI_AUDIO_DEPLOYMENT=gpt-realtime-mini

Note: Endpoint should NOT include /openai/v1/ - just the base URL.
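Reading and normalizing these variables can be sketched as below. The helper name `load_realtime_config` is illustrative, not part of the skill; the stripping of a trailing `/openai/v1` mirrors the note above.

```python
import os

def load_realtime_config():
    """Read Realtime API settings from the environment and normalize
    the endpoint: the SDK appends /openai/v1 itself, so a pasted-in
    suffix is stripped here."""
    endpoint = os.environ["AZURE_OPENAI_AUDIO_ENDPOINT"].rstrip("/")
    if endpoint.endswith("/openai/v1"):
        endpoint = endpoint[: -len("/openai/v1")]
    return {
        "api_key": os.environ["AZURE_OPENAI_AUDIO_API_KEY"],
        "endpoint": endpoint,
        "deployment": os.environ.get(
            "AZURE_OPENAI_AUDIO_DEPLOYMENT", "gpt-realtime-mini"
        ),
    }
```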

Core Workflow

Backend Audio Generation

from openai import AsyncOpenAI
import asyncio
import base64
import os

api_key = os.environ["AZURE_OPENAI_AUDIO_API_KEY"]
endpoint = os.environ["AZURE_OPENAI_AUDIO_ENDPOINT"]

# Convert HTTPS endpoint to WebSocket URL
ws_url = endpoint.replace("https://", "wss://") + "/openai/v1"

client = AsyncOpenAI(
    websocket_base_url=ws_url,
    api_key=api_key
)

async def generate_narration(prompt: str) -> tuple[bytes, str]:
    audio_chunks = []
    transcript_parts = []

    async with client.realtime.connect(model="gpt-realtime-mini") as conn:
        # Configure for audio-only output
        await conn.session.update(session={
            "output_modalities": ["audio"],
            "instructions": "You are a narrator. Speak naturally."
        })

        # Send text to narrate
        await conn.conversation.item.create(item={
            "type": "message",
            "role": "user",
            "content": [{"type": "input_text", "text": prompt}]
        })

        await conn.response.create()

        # Collect streaming events until generation completes
        async for event in conn:
            if event.type == "response.output_audio.delta":
                audio_chunks.append(base64.b64decode(event.delta))
            elif event.type == "response.output_audio_transcript.delta":
                transcript_parts.append(event.delta)
            elif event.type == "response.done":
                break

    return b''.join(audio_chunks), ''.join(transcript_parts)

pcm_audio, transcript = asyncio.run(generate_narration("Your text here"))

# Convert PCM to WAV (see scripts/pcm_to_wav.py)
wav_audio = pcm_to_wav(pcm_audio, sample_rate=24000)
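The bundled `scripts/pcm_to_wav.py` is not reproduced on this page; a minimal sketch of what such a conversion looks like, assuming the stated format (24 kHz, 16-bit, mono), is below. It uses only the standard-library `wave` module.

```python
import io
import wave

def pcm_to_wav(pcm_data: bytes, sample_rate: int = 24000,
               channels: int = 1, sample_width: int = 2) -> bytes:
    """Wrap raw PCM samples in a WAV container.

    sample_width is in bytes: 2 bytes = 16-bit samples.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(channels)
        wav.setsampwidth(sample_width)
        wav.setframerate(sample_rate)
        wav.writeframes(pcm_data)
    return buf.getvalue()
```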

Frontend Audio Playback

// Convert base64 WAV to playable blob
const base64ToBlob = (base64, mimeType) => {
  const bytes = atob(base64);
  const arr = new Uint8Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) arr[i] = bytes.charCodeAt(i);
  return new Blob([arr], { type: mimeType });
};

const audioBlob = base64ToBlob(response.audio_data, 'audio/wav');
const audioUrl = URL.createObjectURL(audioBlob);
new Audio(audioUrl).play();
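The frontend above expects a response carrying `audio_data` as base64. The backend side of that handoff can be sketched as follows; the function name `build_audio_response` and the JSON shape are assumptions for illustration, not a fixed API.

```python
import base64
import json

def build_audio_response(wav_audio: bytes, transcript: str) -> str:
    """Package WAV bytes and the transcript as a JSON payload
    the frontend player can consume."""
    payload = {
        "audio_data": base64.b64encode(wav_audio).decode("ascii"),
        "transcript": transcript,
    }
    return json.dumps(payload)
```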

Voice Options

| Voice   | Character  |
|---------|------------|
| alloy   | Neutral    |
| echo    | Warm       |
| fable   | Expressive |
| onyx    | Deep       |
| nova    | Friendly   |
| shimmer | Clear      |
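The Realtime API accepts a `voice` field in the session configuration; a sketch of selecting one of the voices above when building the `session.update` payload (the dict shape mirrors the backend example, the chosen voice is arbitrary):

```python
# Session configuration with an explicit voice choice.
# Any value from the voice table above can be substituted.
session_config = {
    "output_modalities": ["audio"],
    "voice": "nova",
    "instructions": "You are a narrator. Speak naturally.",
}
```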

Realtime API Events

  • response.output_audio.delta - Base64 audio chunk
  • response.output_audio_transcript.delta - Transcript text
  • response.done - Generation complete
  • error - Handle with event.error.message

Audio Format

  • Input: Text prompt
  • Output: PCM audio (24kHz, 16-bit, mono)
  • Storage: Base64-encoded WAV
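Given that format, the duration of a generated clip follows directly from the byte count. A small helper (illustrative, not part of the skill's scripts):

```python
def pcm_duration_seconds(pcm: bytes, sample_rate: int = 24000,
                         channels: int = 1, sample_width: int = 2) -> float:
    """Duration of raw PCM audio: frames = bytes / (channels * bytes-per-sample)."""
    frames = len(pcm) // (channels * sample_width)
    return frames / sample_rate
```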

References

  • Full architecture: See references/architecture.md for complete stack design
  • Code examples: See references/code-examples.md for production patterns
  • PCM conversion: Use scripts/pcm_to_wav.py for audio format conversion

When to Use

Use this skill when you need to turn written content into spoken audio: building text-to-speech features, generating audio narratives, or automating podcast-style production on Azure OpenAI's Realtime API.


Statistics

Installs: 1.9K
Rating: 4.7 / 5.0
Updated: March 16, 2026


Compatible Platforms

  • Claude Code
  • OpenClaw
  • OpenCode
  • Codex
  • Gemini CLI
  • GitHub Copilot
  • Amp
  • Kimi CLI

Timeline

Created: March 16, 2026
Last Updated: March 16, 2026