
ai-ad-prompt-guide

by @creatify-ai
Rating: 3.5 (0 reviews)

This is a prompt guide for AI video and image generation, tailored to the advertising industry. It provides battle-tested prompt frameworks that help users write genuinely effective generation instructions. The content covers the universal SLCT framework, hallucination-prevention techniques, and professional photography vocabulary, along with model-specific advice, aiming to significantly improve the quality and efficiency of generated content, reduce trial and error, and let your creative ideas be realized more precisely.

Tags: AI Prompting · Video Generation · Image Generation · Advertising AI · Claude Skill · GitHub
Installation
npx skills add creatify-ai/ai-ad-prompt-guide

Before / After Comparison

Before

Lack of systematic prompt engineering methods leads to poor AI video and image generation, often resulting in 'hallucinations' or deviations from expectations, wasting significant time on trial-and-error and revisions.

After

By adopting the practical framework in this guide, you can efficiently write precise, high-quality AI prompts, effectively avoid common issues, and significantly enhance the visual effects and creative relevance of generated content.

SKILL.md


---
name: ai-ad-prompt-guide
description: |
  Battle-tested prompting guide for AI video and image generation in advertising. Covers universal prompting rules, hallucination prevention, the SLCT framework for structured prompts, camera movement vocabulary, UGC formulas, product shot techniques, and the pass³ quality test. Includes model-specific guidance for Sora 2, Veo 3, Kling, Flux, Nano Banana Pro, Seedance, and more.

  Use when: "write a video prompt", "fix hallucination", "prompt for Sora", "generate B-roll", "UGC prompt", "product shot prompt", "which model should I use", "AI image prompt", "prompt engineering for video", "camera movement", "SLCT", "pass³ test", "AI generation prompt", "text to video prompt", "image to video prompt", "model comparison", or any AI generation prompting question.
---

AI Ad Prompt Guide

Universal prompting framework for AI video and image generation — works across all major models.


Part 1: Universal Prompting Rules (Standalone)

1.1 The SLCT Framework

Every effective AI generation prompt has four components. Use SLCT as a checklist:

S — Subject: What is the main focus?
L — Lighting/Look: What's the visual mood?
C — Camera: What angle, movement, and framing?
T — Technical: Resolution, aspect ratio, duration, style keywords?

SLCT Examples

Product B-roll:

S: A glass bottle of amber serum on a marble bathroom counter
L: Soft golden morning light streaming from the left, creating gentle highlights on the glass
C: Slow push-in from medium to close-up, shallow depth of field
T: 4K, 5 seconds, photorealistic, product photography style

UGC-style:

S: A woman in her late 20s opening a package on her couch, looking excited
L: Natural indoor lighting, warm tones, slightly imperfect like a phone camera
C: Medium shot, handheld slight shake, selfie-style front camera angle
T: 9:16 vertical, 5 seconds, realistic, casual home setting

Cinematic hero shot:

S: A sleek electric car driving along a coastal highway at sunset
L: Dramatic golden hour, long shadows, warm highlights on the car body
C: Low-angle tracking shot from the front quarter, smooth dolly movement
T: 16:9, 8 seconds, cinematic, film grain, anamorphic lens flare
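If you build prompts at scale, the checklist maps cleanly onto a small helper. A minimal sketch in Python (the function name and joining format are illustrative, not part of the guide):

```python
def build_slct_prompt(subject, lighting, camera, technical):
    """Assemble a prompt from the four SLCT components.

    Raises ValueError on a missing component, so incomplete prompts
    are caught before they are sent to a generation API.
    """
    parts = {
        "Subject": subject,
        "Lighting/Look": lighting,
        "Camera": camera,
        "Technical": technical,
    }
    for name, value in parts.items():
        if not value or not value.strip():
            raise ValueError(f"missing SLCT component: {name}")
    # Join into one sentence per component.
    return ". ".join(v.strip().rstrip(".") for v in parts.values()) + "."

prompt = build_slct_prompt(
    subject="A glass bottle of amber serum on a marble bathroom counter",
    lighting="Soft golden morning light streaming from the left",
    camera="Slow push-in from medium to close-up, shallow depth of field",
    technical="4K, 5 seconds, photorealistic, product photography style",
)
```

Missing components fail fast, which keeps half-finished prompts out of your generation queue.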

1.2 Hallucination Prevention

AI models hallucinate when prompts are ambiguous, contradictory, or physically impossible. These rules minimize bad outputs:

The 5 Rules of Hallucination Prevention

  1. Be spatially explicit: "A bottle on the LEFT side of a marble counter, a plant on the RIGHT" — not "a bottle near a plant on a counter"

  2. Limit entities: Maximum 3 main subjects per scene. More = more chance of merging/distortion

  3. Avoid negatives: Don't say "no people in the background" — instead describe what IS there: "empty cafe with wooden chairs"

  4. Use real-world references: "lighting like a Vogue cover shoot" anchors the model better than "beautiful professional lighting"

  5. Specify quantities: "Two coffee cups" not "coffee cups". "A single person" not "a person" (which might generate multiple)
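Rules 3 and 5 can be spot-checked automatically before a prompt is submitted. A rough linting sketch (the word lists are illustrative heuristics we chose, not an exhaustive check):

```python
def lint_prompt(prompt):
    """Flag phrasing that commonly triggers hallucinations (rules 3 and 5)."""
    warnings = []
    lowered = prompt.lower()
    # Rule 3: negatives tend to be ignored or inverted by the model.
    for word in ("no ", "not ", "without ", "don't "):
        if word in lowered:
            warnings.append(
                f"negative phrasing '{word.strip()}': describe what IS there instead"
            )
    # Rule 5: unquantified subjects invite duplicates.
    for vague in ("a person", "some ", "people"):
        if vague in lowered:
            warnings.append(
                f"unquantified subject '{vague.strip()}': specify an exact count"
            )
    return warnings
```

A human still needs to review spatial layout and entity count; this only catches the mechanical cases.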

Common Hallucination Triggers & Fixes

| Trigger | Problem | Fix |
| --- | --- | --- |
| "A person holding a product" | Distorted hands/fingers | "Close-up of product on table, hands NOT in frame" or use image-to-video with a real photo |
| "Text on the product label" | Garbled text | Generate image without text, add text in post-production |
| "Multiple people talking" | Face merging | One person per scene, composite in editing |
| "Brand logo visible" | Distorted logo | Add logo as overlay in post, not in prompt |
| "Complex physical interaction" | Physics breaks | Break into simpler shots, edit together |
| "Specific celebrity resemblance" | Legal/ethical issues + poor results | Use descriptive attributes instead |

1.3 Camera Movement Vocabulary

Use these precise terms — AI models understand film terminology better than casual descriptions.

| Movement | Description | Best For |
| --- | --- | --- |
| Push-in | Camera moves toward subject | Building tension, revealing detail |
| Pull-back / Dolly out | Camera moves away from subject | Reveal shots, establishing context |
| Tracking shot | Camera follows subject laterally | Movement, energy, following action |
| Pan (left/right) | Camera rotates on axis | Scanning a scene, transitions |
| Tilt (up/down) | Camera angles up or down on axis | Revealing height, drama |
| Crane up / Crane down | Camera rises or descends vertically | Establishing shots, reveals |
| Orbit / Arc | Camera circles the subject | 360° product views, drama |
| Dolly zoom / Vertigo | Zoom + dolly create a disorienting effect | Dramatic moments (use sparingly) |
| Handheld / Steadicam | Slight natural movement | UGC feel, documentary style |
| Static / Locked-off | No movement | Product shots, clean compositions |
| Slow-motion | Reduced playback speed | Emphasizing action, luxury feel |
| Timelapse | Sped-up footage | Process shots, before/after over time |

Camera Angle Vocabulary

| Angle | Effect | Use Case |
| --- | --- | --- |
| Eye-level | Neutral, relatable | Talking heads, product demos |
| Low angle | Powerful, aspirational | Luxury products, hero shots |
| High angle | Overview, diminishing | Establishing, flat-lay product |
| Bird's eye / Top-down | Geometric, clean | Flat-lay, food, organized layouts |
| Dutch angle | Tension, unease | Dramatic ads (rare, use carefully) |
| Over-the-shoulder | Intimate, POV | UGC, unboxing, first-person |

1.4 UGC Prompt Formulas

These templates generate authentic-feeling content that doesn't look "AI generated."

Unboxing Formula

A [age] [gender] sitting [location], opening a [color] package.
[Lighting]: Natural [time of day] light from a nearby window, warm tones.
[Camera]: Medium close-up, slightly shaky handheld, phone camera quality.
[Expression]: Genuine surprise and excitement.
[Duration]: 5 seconds.
[Style]: Realistic, casual, user-generated content aesthetic.

Product Review Formula

A [age] [gender] looking directly at camera, holding up a [product].
[Setting]: [Casual home location — kitchen, bathroom, living room].
[Lighting]: Natural indoor light, not studio-perfect.
[Camera]: Front-facing selfie angle, slight phone tilt, 9:16 vertical.
[Expression]: Enthusiastic, conversational, making eye contact.
[Duration]: 5 seconds.
[Style]: Authentic UGC, not polished commercial.

Lifestyle / Day-in-the-Life Formula

A [age] [gender] using [product] during their [morning/evening] routine.
[Setting]: [Realistic home environment].
[Lighting]: Soft natural light, golden hour warmth through windows.
[Camera]: Medium shot following the action, gentle handheld movement.
[Action]: [Specific natural action — applying, pouring, wearing].
[Duration]: 5 seconds.
[Style]: Lifestyle photography, editorial casual.
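The bracketed placeholders map directly onto Python format strings, which makes batch generation straightforward. A sketch for the unboxing formula (the variable names and example values are ours):

```python
# The unboxing formula as a format string; each {field} is a slot
# from the bracketed template above.
UNBOXING_TEMPLATE = (
    "A {age} {gender} sitting {location}, opening a {color} package. "
    "Lighting: natural {time_of_day} light from a nearby window, warm tones. "
    "Camera: medium close-up, slightly shaky handheld, phone camera quality. "
    "Expression: genuine surprise and excitement. "
    "Duration: 5 seconds. "
    "Style: realistic, casual, user-generated content aesthetic."
)

prompt = UNBOXING_TEMPLATE.format(
    age="late-20s",
    gender="woman",
    location="on her living-room couch",
    color="kraft-brown",
    time_of_day="morning",
)
```

Swapping the field values across a product list gives you a batch of varied but structurally identical UGC prompts.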

1.5 Product Shot Techniques

Hero Product Shot

A [product] centered on a [surface], [environment context].
[Lighting]: [Dramatic/soft/natural] studio lighting with [specific direction].
[Background]: [Clean/textured/contextual] — [specific description].
[Camera]: [Static macro / slow orbit / push-in] with shallow depth of field.
[Props]: [1-2 complementary items that add context without competing].
[Duration]: 5 seconds.
[Style]: High-end product photography, [brand mood — luxurious/minimal/vibrant].

Before/After Product Shot

Split composition: LEFT side shows [before state], RIGHT side shows [after state].
[Transition]: Smooth wipe or morph from left to right over 3 seconds.
[Lighting]: Even, clean lighting to show detail in both states.
[Camera]: Static, locked-off shot. Centered framing.
[Duration]: 5 seconds.
[Style]: Clean, medical/scientific feel OR dramatic transformation.

1.6 The Pass³ Quality Test

Before using any AI-generated asset in an ad, run it through this 3-pass test:

Pass 1 — Physics Check (2 seconds)

  • Do objects obey gravity?
  • Are reflections correct?
  • Do shadows match light sources?
  • Are proportions realistic?

Pass 2 — Detail Check (5 seconds)

  • Hands: correct number of fingers, natural poses?
  • Text: readable or garbled? (If garbled, plan to overlay in post)
  • Faces: symmetrical, natural expressions?
  • Edges: clean boundaries between objects?

Pass 3 — Brand Check (3 seconds)

  • Does the lighting match your brand mood?
  • Is the color palette on-brand?
  • Would this fit on your website/social feed without looking out of place?
  • Could a viewer tell this is AI? (For UGC, "slightly imperfect" is fine)

Decision: If it fails any pass, regenerate with an adjusted prompt. Don't fix bad generations in post — it's faster to regenerate.
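If you log QA decisions, the three passes can be recorded with a small helper that enforces the regenerate-don't-fix rule. A minimal sketch (the field names are illustrative):

```python
def pass3_review(physics_ok, detail_ok, brand_ok):
    """Apply the pass³ rule: any failed pass means regenerate, not fix in post."""
    checks = {"physics": physics_ok, "detail": detail_ok, "brand": brand_ok}
    failed = [name for name, ok in checks.items() if not ok]
    return {
        "approved": not failed,
        "failed_passes": failed,
        "action": "use asset" if not failed else "regenerate with adjusted prompt",
    }
```

Logging which pass failed over time shows you which prompt adjustments (spatial wording, entity count, brand keywords) pay off most.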


Part 2: Model-Specific Guidance

2.1 Model Selection Decision Matrix

| Use Case | Best Model | Why |
| --- | --- | --- |
| Product B-roll (from image) | Kling 2.1 Master | Best motion quality from product photos |
| Cinematic establishing shots | Veo 3.1 | Best cinematic quality and coherence |
| Quick B-roll (budget) | Sora 2 (standard) | Good quality at lowest cost |
| UGC-style content | Seedance v1 Pro | Natural human motion |
| Text-heavy images | Nano Banana | Best text rendering in images |
| Product photography | Nano Banana | Best product fidelity |
| Image editing/compositing | Flux Pro Kontext | Best for editing existing images |
| Long-form video (10s) | Wan 2.5 Preview (1080p) | Best quality/cost for longer clips |
| Fast turnaround | Veo 3.1 Fast or Sora 2 standard | Fastest generation times |
| Maximum quality (no budget limit) | Kling 2.1 Master or Veo 3.1 | Highest fidelity |
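For automated pipelines, the matrix can be encoded as a plain lookup. The dictionary below simply mirrors the table; the use-case keys and function name are our own:

```python
# Mirrors the model selection decision matrix above.
MODEL_MATRIX = {
    "product_broll_from_image": "Kling 2.1 Master",
    "cinematic_establishing": "Veo 3.1",
    "quick_broll_budget": "Sora 2 (standard)",
    "ugc_style": "Seedance v1 Pro",
    "text_heavy_images": "Nano Banana",
    "product_photography": "Nano Banana",
    "image_editing": "Flux Pro Kontext",
    "long_form_video": "Wan 2.5 Preview (1080p)",
    "fast_turnaround": "Veo 3.1 Fast",
    "max_quality": "Kling 2.1 Master",
}

def pick_model(use_case):
    """Return the recommended model for a use case, per the matrix."""
    if use_case not in MODEL_MATRIX:
        raise ValueError(
            f"unknown use case {use_case!r}; options: {sorted(MODEL_MATRIX)}"
        )
    return MODEL_MATRIX[use_case]
```

For the rows with two recommendations (fast turnaround, maximum quality) the lookup keeps only the first option; adjust to taste.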

2.2 Model-Specific Prompting Tips

Sora 2

  • Excels at: Smooth camera movements, consistent lighting, coherent scenes
  • Struggles with: Fine text, complex multi-person interactions
  • Tip: Use descriptive scene-setting language. Sora responds well to cinematic terminology.
  • Duration options: 4s, 8s, 12s
  • Cost-effective for rapid iteration at standard quality

Veo 3.1

  • Excels at: Cinematic quality, coherent long sequences, good physics
  • Struggles with: Sometimes overly "cinematic" when you want casual
  • Tip: For UGC, explicitly state "phone camera quality, not cinematic"
  • Duration options: 4s, 6s, 8s
  • Best-in-class for hero content and brand videos

Kling 2.1 Master

  • Excels at: Image-to-video with motion, product animations, face consistency
  • Struggles with: Can be slower, higher cost
  • Tip: Provide a high-quality reference image for best results. Use cfg_scale 0.3-0.5 for creative freedom, 0.7-0.9 for prompt adherence.
  • Duration options: 5s, 10s

Nano Banana

  • Excels at: Text rendering in images, product photography, logo fidelity
  • Struggles with: Only generates images (not video)
  • Tip: Best for generating product shots that will be used as reference images for image-to-video models.
  • Use nano-banana/edit for image-to-image modifications

Flux Pro

  • Excels at: Image editing, style transfer, multi-image compositing
  • Struggles with: Less creative freedom than pure generation models
  • Tip: Use kontext/text-to-image for generation, kontext/max/multi for editing existing images with new elements.

Seedance v1 Pro

  • Excels at: Human motion, dance movements, natural body language
  • Struggles with: Non-human subjects
  • Tip: Best for UGC and avatar-style content where natural movement matters.
  • Duration options: 5s, 10s at 480p/720p/1080p

Part 3: API Automation

These models are available through a unified Asset Generator API, which provides a single endpoint for all models.

3.1 Asset Generator — Unified Access

```python
import requests

HEADERS = {
    "Content-Type": "application/json",
    "X-API-ID": "your-api-id",
    "X-API-KEY": "your-api-key",
}
BASE_URL = "https://api.creatify.ai/api"

def get_model_schemas(model_name=None):
    """Discover available models and their input parameters."""
    url = f"{BASE_URL}/asset_generator/schemas/"
    if model_name:
        url += f"?model_name={model_name}"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def generate_asset(model_name, input_params, webhook_url=None):
    """Generate an image or video using any available model."""
    payload = {
        "model_name": model_name,
        "input_params": input_params,
    }
    if webhook_url:
        payload["webhook_url"] = webhook_url

    resp = requests.post(f"{BASE_URL}/asset_generator/", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

def check_generation_status(generation_id):
    """Check status of an asset generation job."""
    resp = requests.get(f"{BASE_URL}/asset_generator/{generation_id}/", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```

Don't have an API key yet? No problem — grab one in under 2 minutes:

  1. Sign up free at creatify.ai
  2. Go to Settings → API
  3. Copy your API ID and API Key — that's it. New accounts get free credits to start.
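Generation is asynchronous, so most workflows wrap check_generation_status in a polling loop, as the recipe in section 3.4 does inline. A reusable sketch (the status values and assets shape are taken from that recipe; the timeout default and the fetch_status parameter are our own):

```python
import time

def wait_for_asset(fetch_status, generation_id, poll_seconds=5, timeout_seconds=600):
    """Poll a generation job until it finishes; return the first asset URL.

    `fetch_status` is a callable like check_generation_status, so the
    loop can be reused (and tested) independently of the API client.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status(generation_id)
        if status["status"] == "done":
            return status["assets"][0]["url"]
        if status["status"] in ("failed", "error"):
            raise RuntimeError(f"generation failed: {status.get('failed_reason')}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"generation {generation_id} not finished after {timeout_seconds}s")
```

Typical call: `video_url = wait_for_asset(check_generation_status, job["id"])`.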

3.2 Quick Examples

Generate a product image (Nano Banana)

```python
result = generate_asset(
    model_name="nano-banana",
    input_params={
        "prompt": "A glass bottle of amber face serum on a white marble counter, soft studio lighting, product photography, clean background, 4K detail",
    }
)
```

Generate B-roll video (Sora 2)

```python
result = generate_asset(
    model_name="sora-2/text-to-video",
    input_params={
        "prompt": "Slow push-in on a coffee cup on a wooden table in a cozy cafe, morning sunlight streaming through the window, steam rising from the cup, shallow depth of field, cinematic",
        "duration": "4",
    }
)
```

Image-to-video product animation (Kling)

```python
result = generate_asset(
    model_name="kling-video/v2.1/master/image-to-video",
    input_params={
        "prompt": "Slow orbit around the product, dramatic lighting, product showcase",
        "image_url": "https://example.com/product-photo.jpg",
        "duration": "5",
        "aspect_ratio": "9:16",
        "cfg_scale": 0.5,
    }
)
```

3.3 Credit Costs Reference

| Model | Type | Cost |
| --- | --- | --- |
| Sora 2 (standard) | text/image-to-video | 8-24 credits (4-12s) |
| Sora 2 (pro) | text/image-to-video | 24-120 credits (4-12s) |
| Veo 3.1 | text-to-video | 32-64 credits (4-8s) |
| Veo 3.1 Fast | text-to-video | 16-32 credits (4-8s) |
| Kling 1.6 Pro | image/text-to-video | 12-24 credits (5-10s) |
| Kling 2.1 Master | image/text-to-video | 40-80 credits (5-10s) |
| Seedance v1 Pro | image/text-to-video | 8-32 credits (5-10s) |
| Wan 2.5 | image/text-to-video | 8-32 credits (5-10s) |
| Minimax Hailuo 02 | image/text-to-video | 12-24 credits (standard), 20 (pro) |
| Nano Banana | text-to-image | 4 credits |
| Flux Pro | text/image-to-image | 4 credits |
| Seedream v4 | text/image-to-image | 4 credits |

3.4 Recipe: Image → Video B-Roll Pipeline

Generate a product photo first, then animate it.

```python
import time

def image_to_video_broll(image_prompt, video_prompt, video_model="kling-video/v1.6/pro/image-to-video"):
    """Pipeline: generate image → animate into video B-roll."""

    # Step 1: Generate product image
    image_job = generate_asset(
        model_name="nano-banana",
        input_params={"prompt": image_prompt}
    )

    # Step 2: Poll for image
    while True:
        status = check_generation_status(image_job["id"])
        if status["status"] == "done":
            image_url = status["assets"][0]["url"]
            break
        elif status["status"] in ("failed", "error"):
            raise Exception(f"Image gen failed: {status.get('failed_reason')}")
        time.sleep(5)

    # Step 3: Animate image into video
    video_job = generate_asset(
        model_name=video_model,
        input_params={
            "prompt": video_prompt,
            "image_url": image_url,
            "duration": "5",
            "aspect_ratio": "9:16",
        }
    )

    # Step 4: Poll for video
    while True:
        status = check_generation_status(video_job["id"])
        if status["status"] == "done":
            return status["assets"][0]["url"]
        elif status["status"] in ("failed", "error"):
            raise Exception(f"Video gen failed: {status.get('failed_reason')}")
        time.sleep(10)
```


Statistics

Installs: 21
Rating: 3.5 / 5.0
Updated: April 6, 2026
Comparisons: 1


Compatible Platforms

🔧 Claude Code

Timeline

Created: April 6, 2026
Last Updated: April 6, 2026