首页/数据 & AI/agent-evaluation
A

agent-evaluation

by @sickn33v1.0.0
0.0(0)

"Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring\u2014where even top agents achieve less than 50% on re..."

LLM Agent EvaluationBehavioral TestingAI Agent BenchmarkingPerformance MetricsQA for AIGitHub
安装方式
npx skills add sickn33/antigravity-awesome-skills --skill agent-evaluation
compare_arrows

Before / After 效果对比

0

description 文档


name: agent-evaluation description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring\u2014where even top agents achieve less than 50% on re..." risk: unknown source: "vibeship-spawner-skills (Apache 2.0)" date_added: "2026-02-27"

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing

Requirements

  • testing-fundamentals
  • llm-fundamentals

Patterns

Statistical Test Evaluation

Run tests multiple times and analyze result distributions

Behavioral Contract Testing

Define and test agent behavioral invariants

Adversarial Testing

Actively try to break agent behavior

Anti-Patterns

❌ Single-Run Testing

❌ Only Happy Path Tests

❌ Output String Matching

⚠️ Sharp Edges

| Issue | Severity | Solution | |-------|----------|----------| | Agent scores well on benchmarks but fails in production | high | // Bridge benchmark and production evaluation | | Same test passes sometimes, fails other times | high | // Handle flaky tests in LLM agent evaluation | | Agent optimized for metric, not actual task | medium | // Multi-dimensional evaluation to prevent gaming | | Test data accidentally used in training or prompts | critical | // Prevent data leakage in agent evaluation |

Related Skills

Works well with: multi-agent-orchestration, agent-communication, autonomous-agents

When to Use

This skill is applicable to execute the workflow or actions described in the overview.

forum用户评价 (0)

发表评价

效果
易用性
文档
兼容性

暂无评价,来写第一条吧

统计数据

安装量10.1K
评分0.0 / 5.0
版本1.0.0
更新日期2026年3月16日
对比案例0 组

用户评分

0.0(0)
5
0%
4
0%
3
0%
2
0%
1
0%

为此 Skill 评分

0.0

兼容平台

🔧Claude Code

时间线

创建2026年3月16日
最后更新2026年3月16日