agent-evaluation
Behavioral testing and benchmark evaluation for LLM agents: verify stable performance, confirm behavior matches expectations, and drive continuous improvement.
npx skills add sickn33/antigravity-awesome-skills --skill agent-evaluation
Before / After
Before: No systematic method for evaluating LLM agent performance, so latent issues go undetected. Agent behavior is unstable and capability assessments are inaccurate, hurting real-world performance.
After: Behavioral tests and capability assessments measure agent performance comprehensively. Agents stay stable and reliable, capabilities meet expectations, and performance on complex tasks improves markedly.
SKILL.md
Agent Evaluation
You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.
You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate; it's a statistically sound picture of how reliably the agent behaves across repeated runs.
Capabilities
- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing
Requirements
- testing-fundamentals
- llm-fundamentals
Patterns
Statistical Test Evaluation
Run tests multiple times and analyze result distributions
Behavioral Contract Testing
Define and test agent behavioral invariants
Adversarial Testing
Actively try to break agent behavior
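The Statistical Test Evaluation pattern above can be sketched as follows. This is a minimal illustration, not part of the skill itself: `evaluate_statistically` and its parameters (`n_runs`, `min_pass_rate`) are hypothetical names, and the flaky-agent stand-in is a deterministic fake.

```python
from typing import Callable

def evaluate_statistically(run_test: Callable[[], bool], n_runs: int = 20,
                           min_pass_rate: float = 0.9) -> dict:
    """Run a nondeterministic agent test repeatedly and judge the
    distribution of outcomes, not a single run."""
    results = [run_test() for _ in range(n_runs)]
    pass_rate = sum(results) / n_runs
    return {"pass_rate": pass_rate,
            "passed": pass_rate >= min_pass_rate,
            "n_runs": n_runs}

# Deterministic stand-in for a flaky agent call: passes 18 of 20 runs.
outcomes = iter([True] * 18 + [False] * 2)
report = evaluate_statistically(lambda: next(outcomes))
print(report["pass_rate"], report["passed"])  # 0.9 True
```

A real harness would replace the stand-in with an actual agent invocation and record per-run traces for debugging the failures.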
Anti-Patterns
❌ Single-Run Testing
❌ Only Happy Path Tests
❌ Output String Matching
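To make the Output String Matching anti-pattern concrete, here is a hedged sketch contrasting an exact-match check with a behavioral-invariant check; the function names and the JSON shape are illustrative assumptions, not a prescribed format.

```python
import json

def brittle_check(output: str) -> bool:
    # Anti-pattern: exact string match breaks on any rephrasing.
    return output == '{"answer": "Paris", "confidence": 0.9}'

def behavioral_check(output: str) -> bool:
    # Better: assert invariants of structure and content instead.
    data = json.loads(output)
    return ("paris" in str(data.get("answer", "")).lower()
            and 0.0 <= data.get("confidence", -1.0) <= 1.0)

# A semantically correct but reworded response:
reworded = '{"answer": "The capital is Paris.", "confidence": 0.87}'
print(brittle_check(reworded), behavioral_check(reworded))  # False True
```

The invariant check survives rewording because it tests what must be true of any acceptable answer, which is the essence of behavioral contract testing.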
⚠️ Sharp Edges
| Issue | Severity | Solution |
|---|---|---|
| Agent scores well on benchmarks but fails in production | high | Evaluate on production-like tasks and traffic, not only static benchmarks |
| Same test passes sometimes, fails other times | high | Run each test multiple times and assert on the pass-rate distribution, not a single run |
| Agent optimized for metric, not actual task | medium | Score along multiple dimensions so no single metric can be gamed |
| Test data accidentally used in training or prompts | critical | Keep test sets isolated and check prompts and training data for leakage before evaluating |
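For the critical data-leakage row above, one simple guard is to fingerprint test cases and scan the prompt corpus before an evaluation run. This is an assumed, minimal approach (normalized-hash matching only catches near-verbatim leaks); `find_leaks` and `fingerprint` are hypothetical helpers.

```python
import hashlib

def fingerprint(text: str) -> str:
    # Normalize case and whitespace so trivial edits don't hide a leak.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def find_leaks(test_cases: list[str], prompt_corpus: list[str]) -> list[int]:
    """Return indices of test cases whose text also appears in prompts."""
    corpus = {fingerprint(p) for p in prompt_corpus}
    return [i for i, case in enumerate(test_cases)
            if fingerprint(case) in corpus]

tests = ["What is 2+2?", "Summarize this article."]
prompts = ["You are a helpful assistant.", "what is 2+2?"]  # leaked, re-cased
print(find_leaks(tests, prompts))  # [0]
```

Run such a check in CI before each evaluation so a leaked test case fails the pipeline rather than silently inflating scores.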
Related Skills
Works well with: multi-agent-orchestration, agent-communication, autonomous-agents
When to Use
Use this skill when you need to test, benchmark, or continuously evaluate LLM agent behavior as described in the overview.