
web-research

by @langchain-ai
4.5 (38 reviews)

Create and save research plans, and organize all research files in a dedicated folder. As an AI agent skill, it improves your working efficiency and automation capability.

Web Scraping · Data Collection · Information Retrieval · OSINT · Data Analysis · GitHub

Installation
npx skills add langchain-ai/deepagents --skill web-research

Before / After Comparison

Before

When doing web research without a clear plan and organization, it is easy to end up with information overload, duplicated searches, and difficulty filtering out the useful information, which ultimately produces low-quality research results and wastes time.

After

With the web-research skill, you can systematically create a research plan, organize all research files in a dedicated folder, and break the research question down into subtopics. This improves research efficiency and reliably yields high-quality, well-organized research results.

SKILL.md

web-research

Web Research Skill

Research Process

Step 1: Create and Save Research Plan

Before delegating to subagents, you MUST:

Create a research folder - Organize all research files in a dedicated folder relative to the current working directory:

mkdir research_[topic_name]

This keeps files organized and prevents clutter in the working directory.

Analyze the research question - Break it down into distinct, non-overlapping subtopics

Write a research plan file - Use the write_file tool to create research_[topic_name]/research_plan.md containing:

  • The main research question

  • 2-5 specific subtopics to investigate

  • Expected information from each subtopic

  • How results will be synthesized

Planning Guidelines:

  • Simple fact-finding: 1-2 subtopics

  • Comparative analysis: 1 subtopic per comparison element (max 3)

  • Complex investigations: 3-5 subtopics
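
To make the plan file concrete, here is a hypothetical research_plan.md for a comparative analysis (the topic, subtopics, and question are all invented for illustration and are not part of the skill):

```markdown
# Research Plan: research_gpu_pricing

## Main Research Question
How do cloud GPU rental prices compare across major providers?

## Subtopics
1. On-demand pricing — expected: per-hour rates for common GPU types
2. Spot/preemptible pricing — expected: discount ranges and interruption terms
3. Reserved/committed pricing — expected: discount tiers and lock-in periods

## Synthesis
Merge the per-provider rates into a single comparison, citing source URLs.
```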

Step 2: Delegate to Research Subagents

For each subtopic in your plan:

Use the task tool to spawn a research subagent with:

  • Clear, specific research question (no acronyms)

  • Instructions to write findings to a file: research_[topic_name]/findings_[subtopic].md

  • Budget: 3-5 web searches maximum

Run up to 3 subagents in parallel for efficient research

Subagent Instructions Template:

Research [SPECIFIC TOPIC]. Use the web_search tool to gather information.
After completing your research, use write_file to save your findings to research_[topic_name]/findings_[subtopic].md.
Include key facts, relevant quotes, and source URLs.
Use 3-5 web searches maximum.
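
The template above can be filled in programmatically before each task call. A minimal sketch (the helper name and parameters are illustrative, not part of the skill or the deepagents API):

```python
def subagent_prompt(topic_name: str, subtopic: str, question: str) -> str:
    """Fill the subagent instruction template for one subtopic.

    Illustrative helper only; the skill itself just describes the prompt text.
    """
    findings_path = f"research_{topic_name}/findings_{subtopic}.md"
    return (
        f"Research {question}. Use the web_search tool to gather information.\n"
        f"After completing your research, use write_file to save your findings "
        f"to {findings_path}.\n"
        "Include key facts, relevant quotes, and source URLs.\n"
        "Use 3-5 web searches maximum."
    )
```

Each generated prompt would then be passed to one task call, with at most three such subagents running in parallel.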

Step 3: Synthesize Findings

After all subagents complete:

Review the findings files that were saved locally:

  • First run list_files research_[topic_name] to see what files were created

  • Then use read_file with the file paths (e.g., research_[topic_name]/findings_*.md)

  • Important: Use read_file for LOCAL files only, not URLs

Synthesize the information - Create a comprehensive response that:

  • Directly answers the original question

  • Integrates insights from all subtopics

  • Cites specific sources with URLs (from the findings files)

  • Identifies any gaps or limitations

Write final report (optional) - Use write_file to create research_[topic_name]/research_report.md if requested

Note: If you need to fetch additional information from URLs, use the fetch_url tool, not read_file.
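
The list-then-read step can be sketched as a small helper that mirrors the list_files + read_file sequence using only local files (function and parameter names are illustrative, not part of the skill):

```python
from pathlib import Path


def collect_findings(topic_name: str, base: str = ".") -> dict[str, str]:
    """Gather all locally saved findings files for synthesis.

    Only local findings_*.md files are read; fetching a URL would
    instead go through something like fetch_url, never read_file.
    """
    folder = Path(base) / f"research_{topic_name}"
    return {p.name: p.read_text() for p in sorted(folder.glob("findings_*.md"))}
```

The returned mapping of filename to contents is then the raw material for the synthesized answer.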

Best Practices

  • Plan before delegating - Always write research_plan.md first

  • Clear subtopics - Ensure each subagent has distinct, non-overlapping scope

  • File-based communication - Have subagents save findings to files, not return them directly

  • Systematic synthesis - Read all findings files before creating final response

  • Stop appropriately - Don't over-research; 3-5 searches per subtopic is usually sufficient
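
Following these conventions, a finished research folder (topic and subtopic names hypothetical) would look something like:

```
research_gpu_pricing/
├── research_plan.md
├── findings_on_demand.md
├── findings_spot.md
├── findings_reserved.md
└── research_report.md        (optional, only if requested)
```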

Weekly Installs: 881
Repository: langchain-ai/deepagents
GitHub Stars: 14.3K
First Seen: Jan 22, 2026
Security Audits: Gen Agent Trust Hub — Pass · Socket — Pass · Snyk — Warn
Installed on: opencode 792 · codex 769 · gemini-cli 764 · github-copilot 719 · cursor 711 · kimi-cli 664

User Reviews (0)

No reviews yet

Statistics

Installs: 1.1K
Rating: 4.5 / 5.0
Version:
Updated: March 17, 2026
Comparison examples: 1


Supported Platforms

  • Claude Code

  • OpenClaw

  • OpenCode

  • Codex

  • Gemini CLI

  • GitHub Copilot

  • Amp

  • Kimi CLI

Timeline

Created: March 17, 2026
Last updated: March 17, 2026