
semgrep

by @trailofbits · v1.0.0

Run Semgrep security scans with automatic language detection, parallel execution, and merged SARIF output.

Tags: Semgrep · Static Analysis · SAST · Code Security · Vulnerability Scanning · GitHub
Installation

npx skills add trailofbits/skills --skill semgrep

Before / After Comparison

Before

Traditional code security review relies on manual effort or static analysis tools that are complex to configure. It is time-consuming, prone to large numbers of false positives, slow to surface vulnerabilities, and hard to integrate into CI/CD pipelines.

After

The Semgrep security-scan skill scans a codebase quickly via automatic language detection and parallel execution, producing merged SARIF output. This improves both the speed and accuracy of vulnerability discovery while avoiding data leakage (telemetry is disabled).


SKILL.md

# Semgrep Security Scan

Run a Semgrep scan with automatic language detection, parallel execution via Task subagents, and merged SARIF output.

## Essential Principles

- **Always use `--metrics=off`** — Semgrep sends telemetry by default; `--config auto` also phones home. Every semgrep command must include `--metrics=off` to prevent data leakage during security audits.
- **User must approve the scan plan (Step 3 is a hard gate)** — The original "scan this codebase" request is NOT approval. Present the exact rulesets, target, engine, and mode; wait for an explicit "yes"/"proceed" before spawning scanners.
- **Third-party rulesets are required, not optional** — Trail of Bits, 0xdea, and Decurity rules catch vulnerabilities absent from the official registry. Include them whenever the detected language matches.
- **Spawn all scan Tasks in a single message** — Parallel execution is the core performance advantage. Never spawn Tasks sequentially; always emit all Task tool calls in one response.
- **Always check for Semgrep Pro before scanning** — Pro enables cross-file taint tracking and catches ~250% more true positives. Skipping the check means silently missing critical inter-file vulnerabilities.

## When to Use

- Security audit of a codebase
- Finding vulnerabilities before code review
- Scanning for known bug patterns
- First-pass static analysis

## When NOT to Use

- Binary analysis → use binary analysis tools
- Already have Semgrep CI configured → use the existing pipeline
- Need cross-file analysis but have no Pro license → consider CodeQL as an alternative
- Creating custom Semgrep rules → use the semgrep-rule-creator skill
- Porting existing rules to other languages → use the semgrep-rule-variant-creator skill

## Output Directory

All scan results, SARIF files, and temporary data are stored in a single output directory.

- If the user specifies an output directory in their prompt, use it as OUTPUT_DIR.
- If not specified, default to `./static_analysis_semgrep_1`. If that already exists, increment to `_2`, `_3`, etc.

In both cases, always create the directory with `mkdir -p` before writing any files.

```sh
# Resolve output directory
if [ -n "$USER_SPECIFIED_DIR" ]; then
  OUTPUT_DIR="$USER_SPECIFIED_DIR"
else
  BASE="static_analysis_semgrep"
  N=1
  # Underscore added so the result matches the documented
  # default name, static_analysis_semgrep_1
  while [ -e "${BASE}_${N}" ]; do
    N=$((N + 1))
  done
  OUTPUT_DIR="${BASE}_${N}"
fi
mkdir -p "$OUTPUT_DIR/raw" "$OUTPUT_DIR/results"
```

The output directory is resolved once at the start of Step 1 and used throughout all subsequent steps.

```
$OUTPUT_DIR/
├── rulesets.txt            # Approved rulesets (logged after Step 3)
├── raw/                    # Per-scan raw output (unfiltered)
│   ├── python-python.json
│   ├── python-python.sarif
│   ├── python-django.json
│   ├── python-django.sarif
│   └── ...
└── results/                # Final merged output
    └── results.sarif
```

## Prerequisites

- **Required:** Semgrep CLI (`semgrep --version`). If not installed, see the Semgrep installation docs.
- **Optional:** Semgrep Pro — enables cross-file taint tracking, inter-procedural analysis, and additional languages (Apex, C#, Elixir). Check with:

```sh
semgrep --pro --validate --config p/default 2>/dev/null \
  && echo "Pro available" || echo "OSS only"
```

- **Limitations:** OSS mode cannot track data flow across files. Pro mode uses `-j 1` for cross-file analysis (slower per ruleset, but parallel rulesets compensate).

## Scan Modes

Select the mode in Step 2 of the workflow. Mode affects both scanner flags and post-processing.

| Mode | Coverage | Findings Reported |
|------|----------|-------------------|
| Run all | All rulesets, all severity levels | Everything |
| Important only | All rulesets, pre- and post-filtered | Security vulns only, medium-high confidence/impact |

"Important only" applies two filter layers:

- Pre-filter: `--severity MEDIUM --severity HIGH --severity CRITICAL` (CLI flag)
- Post-filter: JSON metadata — keeps only category=security, confidence ∈ {MEDIUM, HIGH}, impact ∈ {MEDIUM, HIGH}

See scan-modes.md for metadata criteria and jq filter commands.
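The skill applies the post-filter with jq, but the logic can be sketched in Python for clarity. The field paths under `extra.metadata` follow Semgrep's JSON result schema; treat the exact keys as an assumption here, not a guarantee:

```python
KEEP = {"MEDIUM", "HIGH"}

def important_only(findings):
    """Keep only security findings with medium/high confidence and impact.

    Mirrors the "important only" post-filter criteria:
    category == "security" and both confidence and impact in {MEDIUM, HIGH}.
    Assumes Semgrep's standard JSON layout (results[].extra.metadata).
    """
    kept = []
    for f in findings:
        meta = f.get("extra", {}).get("metadata", {})
        if (meta.get("category") == "security"
                and str(meta.get("confidence", "")).upper() in KEEP
                and str(meta.get("impact", "")).upper() in KEEP):
            kept.append(f)
    return kept

# Fabricated findings for illustration:
sample = [
    {"check_id": "a", "extra": {"metadata": {
        "category": "security", "confidence": "HIGH", "impact": "MEDIUM"}}},
    {"check_id": "b", "extra": {"metadata": {
        "category": "correctness", "confidence": "HIGH", "impact": "HIGH"}}},
    {"check_id": "c", "extra": {"metadata": {
        "category": "security", "confidence": "LOW", "impact": "HIGH"}}},
]
print([f["check_id"] for f in important_only(sample)])  # → ['a']
```

Note that the unfiltered JSON in raw/ is kept regardless, so the filter can always be re-run with different criteria.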
## Orchestration Architecture

```
┌──────────────────────────────────────────────────────────────────┐
│ MAIN AGENT (this skill)                                          │
│ Step 1: Detect languages + check Pro availability                │
│ Step 2: Select scan mode + rulesets (ref: rulesets.md)           │
│ Step 3: Present plan + rulesets, get approval [⛔ HARD GATE]     │
│ Step 4: Spawn parallel scan Tasks (approved rulesets + mode)     │
│ Step 5: Merge results and report                                 │
└──────────────────────────────────────────────────────────────────┘
                      │ Step 4
                      ▼
              ┌─────────────────┐
              │   Scan Tasks    │
              │   (parallel)    │
              ├─────────────────┤
              │ Python scanner  │
              │ JS/TS scanner   │
              │ Go scanner      │
              │ Docker scanner  │
              └─────────────────┘
```

## Workflow

Follow the detailed workflow in scan-workflow.md. Summary:

| Step | Action | Gate | Key Reference |
|------|--------|------|---------------|
| 1 | Resolve output dir, detect languages + Pro availability | — | Use Glob, not Bash |
| 2 | Select scan mode + rulesets | — | rulesets.md |
| 3 | Present plan, get explicit approval | ⛔ HARD | AskUserQuestion |
| 4 | Spawn parallel scan Tasks | — | scanner-task-prompt.md |
| 5 | Merge results and report | — | Merge script (below) |

Task enforcement: on invocation, create 5 tasks with blockedBy dependencies (each step is blocked by the previous one). Step 3 is a HARD GATE — mark it complete ONLY after the user explicitly approves.

Merge command (Step 5):

```sh
uv run {baseDir}/scripts/merge_sarif.py $OUTPUT_DIR/raw $OUTPUT_DIR/results/results.sarif
```

## Agents

| Agent | Tools | Purpose |
|-------|-------|---------|
| static-analysis:semgrep-scanner | Bash | Executes parallel semgrep scans for a language category |

Use `subagent_type: static-analysis:semgrep-scanner` in Step 4 when spawning Task subagents.

## Rationalizations to Reject

| Shortcut | Why It's Wrong |
|----------|----------------|
| "User asked for scan, that's approval" | Original request ≠ plan approval. Present the plan, use AskUserQuestion, await an explicit "yes" |
| "Step 3 task is blocking, just mark complete" | Lying about task status defeats enforcement. Only mark complete after real approval |
| "I already know what they want" | Assumptions cause scanning the wrong directories/rulesets. Present the plan for verification |
| "Just use default rulesets" | User must see and approve the exact rulesets before the scan |
| "Add extra rulesets without asking" | Modifying the approved list without consent breaks trust |
| "Third-party rulesets are optional" | Trail of Bits, 0xdea, and Decurity catch vulnerabilities not in the official registry — REQUIRED |
| "Use --config auto" | Sends metrics; less control over rulesets |
| "One Task at a time" | Defeats parallelism; spawn all Tasks together |
| "Pro is too slow, skip --pro" | Cross-file analysis catches ~250% more true positives; worth the time |
| "Semgrep handles GitHub URLs natively" | URL handling fails on repos with non-standard YAML; always clone first |
| "Cleanup is optional" | Cloned repos pollute the user's workspace and accumulate across runs |
| "Use . or a relative path as target" | Subagents need absolute paths to avoid ambiguity |
| "Let the user pick an output dir later" | The output directory must be resolved at Step 1, before any files are created |

## Reference Index

| File | Content |
|------|---------|
| rulesets.md | Complete ruleset catalog and selection algorithm |
| scan-modes.md | Pre/post-filter criteria and jq commands |
| scanner-task-prompt.md | Template for spawning scanner subagents |

| Workflow | Purpose |
|----------|---------|
| scan-workflow.md | Complete 5-step scan execution process |

## Success Criteria

- Output directory resolved (user-specified or auto-incremented default)
- All generated files stored inside $OUTPUT_DIR
- Languages detected with file counts; Pro status checked
- Scan mode selected by the user (run all / important only)
- Rulesets include third-party rules for all detected languages
- User explicitly approved the scan plan (Step 3 gate passed)
- All scan Tasks spawned in a single message and completed
- Every semgrep command used `--metrics=off`
- Approved rulesets logged to $OUTPUT_DIR/rulesets.txt
- Raw per-scan outputs stored in $OUTPUT_DIR/raw/
- results.sarif exists in $OUTPUT_DIR/results/ and is valid JSON
- Important-only mode: post-filter applied before merge; unfiltered results preserved in raw/
- Results summary reported with severity and category breakdown
- Cloned repos (if any) cleaned up from $OUTPUT_DIR/repos/

Weekly Installs: 1.1K · Repository: trailofbits/skills · GitHub Stars: 3.6K · First Seen: Jan 19, 2026
Security Audits: Gen Agent Trust Hub — Pass · Socket — Warn · Snyk — Warn
Installed on: claude-code 966 · codex 871 · opencode 820 · gemini-cli 786 · cursor 737 · github-copilot 724
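Step 5's merge is performed by the skill's bundled merge_sarif.py (invoked via uv). As a rough sketch of what such a merge amounts to under SARIF 2.1.0 — the real script may normalize tool metadata further — the core operation is concatenating each per-scan file's `runs` array into one document:

```python
import json
from pathlib import Path

def merge_sarif(raw_dir: str, out_file: str) -> dict:
    """Concatenate the `runs` arrays of every *.sarif file under raw_dir
    into a single SARIF 2.1.0 document written to out_file.

    A simplified stand-in for the skill's merge_sarif.py, shown only
    to illustrate the merge idea.
    """
    merged = {
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "version": "2.1.0",
        "runs": [],
    }
    for path in sorted(Path(raw_dir).glob("*.sarif")):
        doc = json.loads(path.read_text())
        merged["runs"].extend(doc.get("runs", []))
    Path(out_file).write_text(json.dumps(merged, indent=2))
    return merged
```

Calling `merge_sarif(f"{OUTPUT_DIR}/raw", f"{OUTPUT_DIR}/results/results.sarif")` would then yield one document whose `runs` list holds one entry per scanner invocation, which SARIF viewers treat as separate tool runs.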

User Reviews (0)

Write a review

Effectiveness · Ease of use · Documentation · Compatibility

No reviews yet — be the first to write one.

Statistics

Installs: 0
Rating: 0.0 / 5.0
Version: 1.0.0
Updated: March 17, 2026
Comparison cases: 1


Compatible Platforms

Claude Code

Timeline

Created: March 17, 2026
Last updated: March 17, 2026