---
name: llm-security
description: "Security guidelines for LLM applications based on OWASP Top 10 for LLM 2025. Use when building LLM apps, reviewing AI security, implementing RAG systems, or asking about LLM vulnerabilities like 'prompt injection' or 'check LLM security'. IMPORTANT: Always consult this skill when building chatbots, AI agents, RAG pipelines, tool-using LLMs, agentic systems, or any application that calls an LLM API (OpenAI, Anthropic, Gemini, etc.) — even if the user doesn't explicitly mention security. Also use when users import 'openai', 'anthropic', 'langchain', 'llamaindex', or similar LLM libraries."
---

Install: `npx skills add semgrep/skills --skill llm-security`
# LLM Security Guidelines (OWASP Top 10 for LLM 2025)

Security rules for building LLM applications, based on the OWASP Top 10 for LLM Applications 2025.
## How to Use This Skill

**Proactive mode** — When building or reviewing LLM applications, automatically check for relevant security risks based on the application pattern. Don't wait for the user to ask about LLM security.

**Reactive mode** — When the user asks about LLM security, use the mapping below to find the relevant rule files with detailed vulnerable/secure code examples.
## Workflow

1. Identify what the user is building (see "What Are You Building?" below)
2. Check the priority rules for that pattern
3. Read the specific rule files from `rules/` for code examples
4. Apply the secure patterns or flag vulnerable ones
## What Are You Building?

Use this to quickly identify which rules matter most for the user's task:
| Building... | Priority Rules |
|-------------|----------------|
| Chatbot / conversational AI | Prompt Injection (LLM01), System Prompt Leakage (LLM07), Output Handling (LLM05), Unbounded Consumption (LLM10) |
| RAG system | Vector/Embedding Weaknesses (LLM08), Prompt Injection (LLM01), Sensitive Disclosure (LLM02), Misinformation (LLM09) |
| AI agent with tools | Excessive Agency (LLM06), Prompt Injection (LLM01), Output Handling (LLM05), Sensitive Disclosure (LLM02) |
| Fine-tuning / training | Data Poisoning (LLM04), Supply Chain (LLM03), Sensitive Disclosure (LLM02) |
| LLM-powered API | Unbounded Consumption (LLM10), Prompt Injection (LLM01), Output Handling (LLM05), Sensitive Disclosure (LLM02) |
| Content generation | Misinformation (LLM09), Output Handling (LLM05), Prompt Injection (LLM01) |
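For the chatbot row, the first-line mitigation for LLM01/LLM07 is privilege separation: user text never gets interpolated into the system prompt. A minimal sketch, assuming an OpenAI-style chat messages list (the billing-assistant prompt is a made-up placeholder):

```python
# Privilege separation sketch (LLM01/LLM07): untrusted user input stays
# in its own "user" message; it never becomes part of the system prompt.
SYSTEM_PROMPT = "You are a support assistant. Answer only questions about billing."

def build_messages(user_input, history=None):
    """Build a chat payload that keeps the trust boundary intact."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    # The raw user text is isolated in a user-role message, so an embedded
    # "ignore previous instructions" arrives as data, not privileged context.
    messages.append({"role": "user", "content": user_input})
    return messages
```

Sending an attack string through `build_messages` leaves the system prompt untouched; the injection attempt is just one more untrusted user turn, which downstream filters can inspect.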
## Categories

### Critical Impact

- LLM01: Prompt Injection (`rules/prompt-injection.md`) - Prevent direct and indirect prompt manipulation
- LLM02: Sensitive Information Disclosure (`rules/sensitive-disclosure.md`) - Protect PII, credentials, and proprietary data
- LLM03: Supply Chain (`rules/supply-chain.md`) - Secure model sources, training data, and dependencies
- LLM04: Data and Model Poisoning (`rules/data-poisoning.md`) - Prevent training data manipulation and backdoors
- LLM05: Improper Output Handling (`rules/output-handling.md`) - Sanitize LLM outputs before downstream use
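To illustrate LLM05, a small stdlib-only sketch that treats model output as untrusted before it reaches HTML or SQL; the `replies` table and helper names are hypothetical, not from the rule files:

```python
import html
import sqlite3

def render_reply(llm_output):
    """Encode model output before it lands in HTML (prevents stored XSS)."""
    return "<p>{}</p>".format(html.escape(llm_output))

def log_reply(conn, llm_output):
    """Bind model output as a query parameter, never via string concatenation."""
    conn.execute("INSERT INTO replies (text) VALUES (?)", (llm_output,))
```

The same rule generalizes: shell commands, file paths, and eval-like sinks all need the model's text passed as data through an escaping or parameterizing layer, never spliced into code.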
### High Impact

- LLM06: Excessive Agency (`rules/excessive-agency.md`) - Limit LLM permissions, functionality, and autonomy
- LLM07: System Prompt Leakage (`rules/system-prompt-leakage.md`) - Protect system prompts from disclosure
- LLM08: Vector and Embedding Weaknesses (`rules/vector-embedding.md`) - Secure RAG systems and embeddings
- LLM09: Misinformation (`rules/misinformation.md`) - Mitigate hallucinations and false outputs
- LLM10: Unbounded Consumption (`rules/unbounded-consumption.md`) - Prevent DoS, cost attacks, and model theft
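For LLM10, a stdlib-only sketch of per-user rate limiting plus an input-size cap; the limits (20 requests per 60-second window, 4,000 characters) are illustrative placeholders, not recommendations:

```python
import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 4_000   # cap prompt size before it reaches the model
MAX_REQUESTS = 20         # per user, per window
WINDOW_SECONDS = 60.0

_requests = defaultdict(deque)  # user_id -> recent request timestamps

def admit(user_id, prompt, now=None):
    """Return True if this request may proceed to the LLM API."""
    if len(prompt) > MAX_INPUT_CHARS:
        return False
    now = time.monotonic() if now is None else now
    window = _requests[user_id]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```

In production you would also cap `max_tokens` on the completion call and track per-user spend, since output tokens, not just request counts, drive cost attacks.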
See `rules/_sections.md` for the full index with OWASP/MITRE references.
## Quick Reference

| Vulnerability | Key Prevention |
|---------------|----------------|
| Prompt Injection | Input validation, output filtering, privilege separation |
| Sensitive Disclosure | Data sanitization, access controls, encryption |
| Supply Chain | Verify models, SBOM, trusted sources only |
| Data Poisoning | Data validation, anomaly detection, sandboxing |
| Output Handling | Treat LLM as untrusted, encode outputs, parameterize queries |
| Excessive Agency | Least privilege, human-in-the-loop, minimize extensions |
| System Prompt Leakage | No secrets in prompts, external guardrails |
| Vector/Embedding | Access controls, data validation, monitoring |
| Misinformation | RAG, fine-tuning, human oversight, cross-verification |
| Unbounded Consumption | Rate limiting, input validation, resource monitoring |
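"Data sanitization" for sensitive disclosure can be sketched as regex-based redaction before text enters a prompt or a log; these patterns are illustrative and far from exhaustive — real deployments need dedicated PII-detection tooling:

```python
import re

# Hypothetical redaction patterns; extend per your data classification policy.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(text):
    """Mask obvious PII/secrets before text reaches a prompt or a log line."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub("[{}]".format(label), text)
    return text
```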
## Key Principles

- **Never trust LLM output** - Validate and sanitize all outputs before use
- **Least privilege** - Grant minimum necessary permissions to LLM systems
- **Defense in depth** - Layer multiple security controls
- **Human oversight** - Require approval for high-impact actions
- **Monitor and log** - Track all LLM interactions for anomaly detection
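Least privilege and human oversight combine naturally in an agent's tool dispatcher: allowlist low-risk tools, gate high-impact ones behind explicit approval, and default-deny everything else. A minimal sketch with hypothetical tool names (LLM06):

```python
# Least privilege + human-in-the-loop for model-requested tool calls.
READ_ONLY_TOOLS = {"search_docs", "get_order_status"}
HIGH_IMPACT_TOOLS = {"refund_order", "delete_account"}

def dispatch(tool_name, approved=False):
    """Decide whether a tool call requested by the model may run."""
    if tool_name in READ_ONLY_TOOLS:
        return ("run", tool_name)           # low-risk: allow automatically
    if tool_name in HIGH_IMPACT_TOOLS:
        if approved:
            return ("run", tool_name)       # a human signed off
        return ("needs_approval", tool_name)
    return ("denied", tool_name)            # default deny: not allowlisted
```

The default-deny branch matters most: a prompt-injected model that invents a tool name gets a denial rather than an execution path.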