
firecrawl-crawl

by @firecrawl · v1.0.0

Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.

Web Crawling · Data Indexing · Firecrawl API · Site Mapping · Content Discovery · GitHub
Installation
npx skills add firecrawl/cli --skill firecrawl-crawl

Documentation


name: firecrawl-crawl
description: |
  Bulk extract content from an entire website or site section. Use this skill
  when the user wants to crawl a site, extract all pages from a docs section,
  bulk-scrape multiple pages following links, or says "crawl", "get all the
  pages", "extract everything under /docs", "bulk extract", or needs content
  from many pages on the same site. Handles depth limits, path filtering, and
  concurrent extraction.
allowed-tools:

  • Bash(firecrawl *)
  • Bash(npx firecrawl *)

firecrawl crawl

Bulk extract content from a website. Crawls pages following links up to a depth/limit.

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to extract an entire site section
  • Step 4 in the workflow escalation pattern: search → scrape → map → crawl → browser

Quick start

# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
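Without `--wait`, the crawl command returns a job ID that you poll until the job completes. A minimal polling sketch (hedged: the `firecrawl` function below is a stub standing in for the real CLI so the example is self-contained, and the JSON shape with a top-level `status` field is an assumption based on Firecrawl's v1 API):

```shell
# Stub standing in for the real CLI so this sketch runs anywhere;
# delete this function to poll an actual Firecrawl job.
firecrawl() {
  echo '{"status":"completed","completed":50,"total":50}'
}

# Normally captured from the output of `firecrawl crawl "<url>"` run without --wait.
job_id="abc123"

# Poll the job status until it reports completion.
while :; do
  status=$(firecrawl crawl "$job_id" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])')
  [ "$status" = "completed" ] && break
  sleep 5
done
echo "crawl $job_id finished"
```

The `sleep 5` keeps polling polite; for large crawls a longer interval (or `--progress` with `--wait`) is usually the better choice.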

Options

| Option | Description |
| ------------------------- | ------------------------------------------- |
| --wait | Wait for crawl to complete before returning |
| --progress | Show progress while waiting |
| --limit <n> | Max pages to crawl |
| --max-depth <n> | Max link depth to follow |
| --include-paths <paths> | Only crawl URLs matching these paths |
| --exclude-paths <paths> | Skip URLs matching these paths |
| --delay <ms> | Delay between requests |
| --max-concurrency <n> | Max parallel crawl workers |
| --pretty | Pretty print JSON output |
| -o, --output <path> | Output file path |

Tips

  • Always use --wait when you need the results immediately. Without it, crawl returns a job ID for async polling.
  • Use --include-paths to scope the crawl — don't crawl an entire site when you only need one section.
  • Crawl consumes credits per page. Check firecrawl credit-usage before large crawls.
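Once a crawl finishes, the output file holds one entry per crawled page. A sketch for listing the crawled URLs (hedged: the exact shape written by `-o` is an assumption based on Firecrawl's v1 API, where each item in `data` carries `markdown` plus `metadata.sourceURL`; the sample file below is fabricated for illustration):

```shell
# Fabricated sample mimicking the assumed shape of .firecrawl/crawl.json.
cat > /tmp/crawl-sample.json <<'EOF'
{"status": "completed",
 "data": [
   {"markdown": "# Intro", "metadata": {"sourceURL": "https://example.com/docs/intro"}},
   {"markdown": "# Setup", "metadata": {"sourceURL": "https://example.com/docs/setup"}}
 ]}
EOF

# List every crawled URL (jq works equally well if installed).
python3 -c '
import json
with open("/tmp/crawl-sample.json") as f:
    doc = json.load(f)
for page in doc.get("data", []):
    print(page["metadata"]["sourceURL"])
'
```

Inspect the real output file the same way before post-processing, since field names may differ between CLI versions.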

See also


Stats

Installs: 2.7K
Rating: 0.0 / 5.0
Version: 1.0.0
Updated: March 16, 2026
Before/After cases: 0

Compatible platforms

Claude Code

Timeline

Created: March 16, 2026
Last updated: March 16, 2026