
context-window-management

by @sickn33v
4.7(39)

Strategies for managing LLM context windows, including summarization, pruning, etc. An AI Agent Skill to improve work efficiency and automation.

Tags: LLM Context Windows, Context Management, Text Summarization, Information Retrieval
Installation
npx skills add sickn33/antigravity-awesome-skills --skill context-window-management

Before / After Comparison

Before

In long conversations, LLMs often "forget" early information or suffer degraded inference quality as the context window fills up, and carrying redundant history wastes a significant number of tokens.

After

By applying summarization, trimming, and routing strategies, the context window is actively managed so the model always sees the most relevant information, improving inference accuracy and reducing token usage.

SKILL.md


---
name: context-window-management
description: "Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

Context Window Management

You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue.

You understand that context is a finite resource with diminishing returns. More tokens doesn't mean better results—the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve.


Capabilities

  • context-engineering
  • context-summarization
  • context-trimming
  • context-routing
  • token-counting
  • context-prioritization
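Token counting underpins all of the capabilities above. A minimal sketch, assuming the common ~4-characters-per-token heuristic for English text and an assumed ~4-token per-message overhead; real systems should use the model's own tokenizer (e.g. tiktoken for OpenAI models) for exact counts:

```python
def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a rough rule of thumb for English;
    # exact counts require the model's own tokenizer (e.g. tiktoken).
    return max(1, len(text) // 4)

def conversation_tokens(messages: list[dict]) -> int:
    # Add a small assumed per-message overhead (~4 tokens) for role
    # markers and separators in chat-formatted prompts.
    return sum(estimate_tokens(m["content"]) + 4 for m in messages)
```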

Patterns

Tiered Context Strategy

Different strategies based on context size
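One way to sketch this pattern is to escalate intervention as budget pressure grows. The 50%/80% thresholds below are illustrative assumptions, not values prescribed by the skill:

```python
def pick_strategy(used_tokens: int, budget: int) -> str:
    # Thresholds are illustrative, not prescribed by the skill.
    ratio = used_tokens / budget
    if ratio < 0.5:
        return "keep-all"      # plenty of headroom: leave context intact
    if ratio < 0.8:
        return "trim-oldest"   # moderate pressure: drop low-value old turns
    return "summarize"         # heavy pressure: compress history to a summary
```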

Serial Position Optimization

Place important content at start and end
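A minimal sketch of this ordering, assuming each chunk already carries an importance score from elsewhere (retrieval score, recency weight, etc.): the highest-scored chunks are alternated between the front and the back of the context, leaving the least important material in the middle, where lost-in-the-middle effects hurt least.

```python
def order_by_position(chunks: list[tuple[float, str]]) -> list[str]:
    # chunks: (importance, text) pairs; scoring is assumed external.
    ranked = sorted(chunks, key=lambda c: c[0], reverse=True)
    front, back = [], []
    for i, (_, text) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(text)
    # Reverse `back` so the runner-up chunk lands at the very end:
    # the most important items sit at the edges, the least in the middle.
    return front + back[::-1]
```

For example, with scores 0.9, 0.8, 0.7, 0.2, 0.1 the 0.9 chunk is placed first, the 0.8 chunk last, and the 0.1 chunk in the middle.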

Intelligent Summarization

Summarize by importance, not just recency
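A sketch of importance-driven compression, under stated assumptions: `importance` is a hypothetical scoring callback (in practice an LLM judge or embedding similarity; message length is used as a stand-in), recent turns are kept verbatim, and below-median older turns are folded into a single summary stub rather than calling a real summarizer.

```python
def compress_history(messages: list[dict], keep_last: int = 4,
                     importance=None) -> list[dict]:
    # `importance` is a hypothetical scoring callback; length stand-in.
    importance = importance or (lambda m: len(m["content"]))
    recent, older = messages[-keep_last:], messages[:-keep_last]
    if not older:
        return recent
    # Keep older messages at or above the median score verbatim;
    # everything below it gets folded into one summary stub.
    threshold = sorted(importance(m) for m in older)[len(older) // 2]
    kept = [m for m in older if importance(m) >= threshold]
    dropped = len(older) - len(kept)
    summary = {"role": "system",
               "content": f"[summary of {dropped} earlier messages]"}
    return ([summary] if dropped else []) + kept + recent
```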

Anti-Patterns

❌ Naive Truncation

❌ Ignoring Token Costs

❌ One-Size-Fits-All
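As a contrast to naive truncation, a boundary-aware trim drops whole messages (oldest first) instead of cutting text mid-message. A sketch, where `count` is a hypothetical token-counting callback and character length is the stand-in default:

```python
def trim_to_budget(messages: list[dict], budget: int, count=None) -> list[dict]:
    # `count` is a token-counting callback; char length is a stand-in.
    # Real systems would also pin the system prompt so it never drops.
    count = count or len
    msgs = list(messages)
    total = sum(count(m["content"]) for m in msgs)
    while msgs and total > budget:
        total -= count(msgs.pop(0)["content"])  # drop whole oldest message
    return msgs
```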

Related Skills

Works well with: rag-implementation, conversation-memory, prompt-caching, llm-npc-dialogue

When to Use

Use this skill whenever a conversation or prompt is approaching the model's context window limit and you need to summarize, trim, or route context, as described in the overview.

User Reviews (0)

No reviews yet

Statistics

  • Installs: 2.3K
  • Rating: 4.7 / 5.0
  • Updated: March 16, 2026
  • Comparisons: 1


Compatible Platforms

🔧Claude Code
🔧OpenClaw
🔧OpenCode
🔧Codex
🔧Gemini CLI
🔧GitHub Copilot
🔧Amp
🔧Kimi CLI

Timeline

  • Created: March 16, 2026
  • Last Updated: March 16, 2026