---
id: imp-agent-evaluation
name: "agent-evaluation"
url: https://skills.yangsir.net/skill/imp-agent-evaluation
author: sickn33
domain: data-ai
tags: ["LLM Agent Evaluation", "Behavioral Testing", "AI Agent Benchmarking", "Performance Metrics", "QA for AI"]
install_count: 10100
rating: 4.50 (404 reviews)
github: https://github.com/sickn33/antigravity-awesome-skills
---

# agent-evaluation

> 对LLM智能体进行行为测试和基准评估，确保其性能稳定、行为符合预期，并持续优化。

**Stats**: 10,100 installs · 4.5/5 (404 reviews)

## Before / After 对比

### 全面评估LLM智能体，确保性能卓越

## Readme

# Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in
production. You've learned that evaluating LLM agents is fundamentally different from
testing traditional software—the same input can produce different outputs, and "correct"
often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression
tests, capability assessments, and reliability metrics. You understand that the goal isn't
100% test pass rate—it

## Capabilities

- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing

## Requirements

- testing-fundamentals
- llm-fundamentals

## Patterns

### Statistical Test Evaluation

Run tests multiple times and analyze result distributions

### Behavioral Contract Testing

Define and test agent behavioral invariants

### Adversarial Testing

Actively try to break agent behavior

## Anti-Patterns

### ❌ Single-Run Testing

### ❌ Only Happy Path Tests

### ❌ Output String Matching

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent scores well on benchmarks but fails in production | high | // Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | high | // Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | medium | // Multi-dimensional evaluation to prevent gaming |
| Test data accidentally used in training or prompts | critical | // Prevent data leakage in agent evaluation |

## Related Skills

Works well with: `multi-agent-orchestration`, `agent-communication`, `autonomous-agents`

## When to Use
This skill is applicable to execute the workflow or actions described in the overview.


---
*Source: https://skills.yangsir.net/skill/imp-agent-evaluation*
*Markdown mirror: https://skills.yangsir.net/api/skill/imp-agent-evaluation/markdown*