Home/数据 & AI/agent-evaluation
A

agent-evaluation

by @sickn33v
4.5(404)

Conduct behavioral tests and benchmark evaluations for LLM agents to ensure stable performance, expected behavior, and continuous optimization.

LLM Agent EvaluationBehavioral TestingAI Agent BenchmarkingPerformance MetricsQA for AIGitHub
Installation
npx skills add sickn33/antigravity-awesome-skills --skill agent-evaluation
compare_arrows

Before / After Comparison

1
Before

Lack of a systematic method for evaluating LLM agent performance makes it difficult to uncover potential issues. Unstable agent behavior and inaccurate capability assessment affect their performance in real-world applications.

After

Provides behavioral testing and capability assessment to comprehensively measure LLM agent performance. Ensures agents are stable and reliable, their capabilities meet expectations, and significantly improves their performance in complex tasks.

description SKILL.md

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing

Requirements

  • testing-fundamentals
  • llm-fundamentals

Patterns

Statistical Test Evaluation

Run tests multiple times and analyze result distributions

Behavioral Contract Testing

Define and test agent behavioral invariants

Adversarial Testing

Actively try to break agent behavior

Anti-Patterns

❌ Single-Run Testing

❌ Only Happy Path Tests

❌ Output String Matching

⚠️ Sharp Edges

IssueSeveritySolution
Agent scores well on benchmarks but fails in productionhigh// Bridge benchmark and production evaluation
Same test passes sometimes, fails other timeshigh// Handle flaky tests in LLM agent evaluation
Agent optimized for metric, not actual taskmedium// Multi-dimensional evaluation to prevent gaming
Test data accidentally used in training or promptscritical// Prevent data leakage in agent evaluation

Related Skills

Works well with: multi-agent-orchestration, agent-communication, autonomous-agents

When to Use

This skill is applicable to execute the workflow or actions described in the overview.

forumUser Reviews (0)

Write a Review

Effect
Usability
Docs
Compatibility

No reviews yet

Statistics

Installs10.1K
Rating4.5 / 5.0
Version
Updated2026年4月29日
Comparisons1

User Rating

4.5(404)
5
36%
4
49%
3
14%
2
1%
1
0%

Rate this Skill

0.0

Compatible Platforms

🔧Claude Code
🔧OpenClaw
🔧OpenCode
🔧Codex
🔧Gemini CLI
🔧GitHub Copilot
🔧Amp
🔧Kimi CLI

Timeline

Created2026年3月16日
Last Updated2026年4月29日
🎁 Agent Knowledge Cards