
firecrawl-agent

by @firecrawl · v1.0.0


AI Agents · Firecrawl API · Autonomous Data Collection · Web Automation · LLM Tooling · GitHub
Installation
npx skills add firecrawl/cli --skill firecrawl-agent
Documentation


name: firecrawl-agent
description: |
  AI-powered autonomous data extraction that navigates complex sites and returns
  structured JSON. Use this skill when the user wants structured data from websites,
  needs to extract pricing tiers, product listings, directory entries, or any data
  as JSON with a schema. Triggers on "extract structured data", "get all the products",
  "pull pricing info", "extract as JSON", or when the user provides a JSON schema for
  website data. More powerful than simple scraping for multi-page structured extraction.
allowed-tools:
  - Bash(firecrawl *)
  - Bash(npx firecrawl *)

firecrawl agent

AI-powered autonomous extraction. The agent navigates sites and extracts structured data (takes 2-5 minutes).

When to use

  • You need structured data from complex multi-page sites
  • Manual scraping would require navigating many pages
  • You want the AI to figure out where the data lives

Quick start

# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
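The inline `--schema` form above gets unwieldy for anything non-trivial; `--schema-file` keeps the command line readable. A minimal sketch (the schema shape and the `.firecrawl/product-schema.json` filename are illustrative assumptions, not part of the tool's defaults):

```shell
# Write the schema to a file once; any path works
mkdir -p .firecrawl
cat > .firecrawl/product-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "name":  { "type": "string" },
    "price": { "type": "number" }
  }
}
EOF
```

Then reference it from the agent run: `firecrawl agent "extract products" --schema-file .firecrawl/product-schema.json --wait -o .firecrawl/products.json`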

Options

| Option                 | Description                               |
| ---------------------- | ----------------------------------------- |
| `--urls <urls>`        | Starting URLs for the agent               |
| `--model <model>`      | Model to use: spark-1-mini or spark-1-pro |
| `--schema <json>`      | JSON schema for structured output         |
| `--schema-file <path>` | Path to JSON schema file                  |
| `--max-credits <n>`    | Credit limit for this agent run           |
| `--wait`               | Wait for agent to complete                |
| `--pretty`             | Pretty print JSON output                  |
| `-o, --output <path>`  | Output file path                          |

Tips

  • Always use --wait to get results inline. Without it, the command returns a job ID.
  • Use --schema for predictable, structured output — otherwise the agent returns freeform data.
  • Agent runs consume more credits than simple scrapes. Use --max-credits to cap spending.
  • For simple single-page extraction, prefer scrape — it's faster and cheaper.
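Because a `--schema` run writes plain JSON, downstream tooling can consume the output file directly. A sketch of reading it back (the file contents here are a stand-in for a real agent result matching the quick-start schema, since producing one requires live credits):

```shell
# Stand-in result; a real run with --wait -o would write this file itself
mkdir -p .firecrawl
echo '{"name": "Pro", "price": 49}' > .firecrawl/products.json

# Pull fields out of the structured result
python3 - <<'EOF'
import json
with open(".firecrawl/products.json") as f:
    data = json.load(f)
print(f'{data["name"]}: ${data["price"]}')
EOF
```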



Stats

Installs: 2.7K
Rating: 0.0 / 5.0
Version: 1.0.0
Updated: March 16, 2026
Before/After cases: 0


Compatible platforms

🔧 Claude Code

Timeline

Created: March 16, 2026
Last updated: March 16, 2026