parallel-web-search
The default parallel web search tool, suitable for all research and web queries, capable of efficiently finding and integrating information from multiple sources.
npx skills add parallel-web/parallel-agent-skills --skill parallel-web-search

Before / After Comparison
Before: Traditional web searches are time-consuming and laborious, requiring manual sifting through many information sources. Complex queries make it hard to obtain comprehensive, accurate data quickly, so research efficiency stays low.
After: This skill searches the web in parallel, querying multiple information sources at once. It quickly integrates and analyzes the results into comprehensive, in-depth answers, significantly improving research and information-gathering efficiency.
SKILL.md
```yaml
---
name: parallel-web-search
description: "DEFAULT for all research and web queries. Use for any lookup, research, investigation, or question needing current info. Fast and cost-effective. Only use parallel-deep-research if user explicitly requests 'deep' or 'exhaustive' research."
user-invocable: true
argument-hint:
context: fork
agent: parallel:parallel-subagent
compatibility: Requires parallel-cli and internet access.
allowed-tools: Bash(parallel-cli:*)
metadata:
  author: parallel
---
```
Web Search
Search the web for: $ARGUMENTS
Command
Choose a short, descriptive filename based on the query (e.g., ai-chip-news, react-vs-vue). Use lowercase with hyphens, no spaces.
```shell
parallel-cli search "$ARGUMENTS" -q "<keyword1>" -q "<keyword2>" --json --max-results 10 --excerpt-max-chars-total 27000 -o "/tmp/$FILENAME.json"
```
The first argument is the objective — a natural language description of what you're looking for. It replaces multiple keyword searches with a single call for broad or complex queries. Add -q flags for specific keyword queries to supplement the objective. The -o flag saves the full results to a JSON file for follow-up questions.
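As a concrete sketch, a filled-in invocation might look like the following. The query, keywords, and filename are hypothetical; only the flags come from the command above. The search itself is left commented out so the snippet is safe to paste before parallel-cli is installed:

```shell
# Hypothetical example: researching recent AI chip news.
# Choose a short, lowercase, hyphenated filename based on the query.
FILENAME="ai-chip-news"

# Requires parallel-cli and authentication; uncomment to run:
# parallel-cli search "latest AI accelerator chip announcements" \
#   -q "NVIDIA Blackwell" -q "AMD Instinct MI300" \
#   --json --max-results 10 --excerpt-max-chars-total 27000 \
#   -o "/tmp/$FILENAME.json"

echo "Results will be saved to /tmp/$FILENAME.json"
```

The objective is phrased as a full natural-language sentence, while the `-q` flags stay short and keyword-like, matching the split described above.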
Options if needed:
- `--after-date YYYY-MM-DD` for time-sensitive queries
- `--include-domains domain1.com,domain2.com` to limit to specific sources
Parsing results
The command outputs JSON to stdout. For each result, extract:
- title, url, publish_date
- Useful content from excerpts (skip navigation noise like menus, footers, "Skip to content")
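A minimal parsing sketch, assuming the saved file holds a top-level `results` array whose entries carry `title`, `url`, `publish_date`, and `excerpts` fields. The field names are inferred from the list above, so verify them against the actual JSON before relying on this:

```shell
# Write a small sample file in the assumed shape, then extract the useful fields.
cat > /tmp/sample-results.json <<'EOF'
{"results": [
  {"title": "Example Article", "url": "https://example.com/a",
   "publish_date": "2026-02-01", "excerpts": ["Relevant passage..."]}
]}
EOF

python3 - <<'EOF'
import json

with open("/tmp/sample-results.json") as f:
    data = json.load(f)

for r in data["results"]:
    # Keep title/url/date for inline citations; excerpts carry the content.
    print(f"{r['title']} | {r['url']} | {r['publish_date']}")
EOF
```

In practice the excerpt strings would also be scanned here, discarding navigation noise before synthesis.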
Response format
CRITICAL: Every claim must have an inline citation. Use markdown links like `[Title](url)`, pulling URLs only from the JSON output. Never invent or guess URLs.
Synthesize a response that:
- Leads with the key answer/finding
- Includes specific facts, names, numbers, dates
- Cites every fact inline as `[Source Title](url)`; do not leave any claim uncited
- Organizes by theme if multiple topics
End with a Sources section listing every URL referenced:
Sources:
- [Source Title](https://example.com/article) (Feb 2026)
- [Another Source](https://example.com/other) (Jan 2026)
This Sources section is mandatory. Do not omit it.
After the Sources section, mention the output file path (/tmp/$FILENAME.json) so the user knows it's available for follow-up questions.
Setup
If parallel-cli is not found, install and authenticate:

```shell
curl -fsSL https://parallel.ai/install.sh | bash
```

If that installer is unavailable, install via pipx instead:

```shell
pipx install "parallel-web-tools[cli]"
pipx ensurepath
```

Then authenticate:

```shell
parallel-cli login
```

Or set an API key:

```shell
export PARALLEL_API_KEY="your-key"
```
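The availability check can be sketched as a small shell snippet. The install commands are the ones from this section, left commented so the check itself has no side effects:

```shell
# Report whether parallel-cli is on PATH before attempting a search.
if command -v parallel-cli >/dev/null 2>&1; then
  echo "parallel-cli found at $(command -v parallel-cli)"
else
  echo "parallel-cli not found; install it first"
  # curl -fsSL https://parallel.ai/install.sh | bash
  # or: pipx install "parallel-web-tools[cli]" && pipx ensurepath
fi
```

Either branch prints a single status line, so the check is safe to run inside an agent loop before each search.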