colosseum-copilot
npx skills add colosseumorg/colosseum-copilot --skill colosseum-copilot
Colosseum Copilot
Colosseum Copilot is a research API for startup opportunity discovery in crypto and Solana, with support for suggesting new archive sources.
Version Check
This skill is version 1.2.0. After your first API call, check the X-Copilot-Skill-Version response header. If the header value is higher than 1.2.0, tell the user: "A newer version of the Copilot skill is available (vX.X.X). Update with: npx skills add ColosseumOrg/colosseum-copilot"
Pre-Flight Auth Check (REQUIRED)
You MUST complete this check before making any other API call. Do not skip this step.
1. Verify COLOSSEUM_COPILOT_PAT is set in the environment. If missing, STOP and tell the user:
You need a Personal Access Token before using Copilot. Go to https://arena.colosseum.org/copilot to generate one, then set it:
export COLOSSEUM_COPILOT_PAT="your-token-here"
2. Verify COLOSSEUM_COPILOT_API_BASE is set. If missing, set the default:
export COLOSSEUM_COPILOT_API_BASE="https://copilot.colosseum.com/api/v1"
3. Call GET /status to verify the connection. Expected response: { "authenticated": true, "expiresAt": "...", "scope": "..." }
If "authenticated": true, proceed. If you get a 401 or the env vars are missing, do NOT attempt other API calls — guide the user through steps 1-2.
- Builder Projects: 5,400+ Solana project submissions with tech stack, problem tags, and competitive context
- Crypto Archives: Curated corpus across cypherpunk literature, protocol docs, investor research, and founder essays
- Hackathon Analytics + Clusters: Distribution, comparison, and chronology-aware trend analysis across hackathons and topic groupings
- The Grid + Web Search: Ecosystem product metadata plus real-time competitive landscape checks
Quickstart (90 seconds to first result)
1. Get a PAT: go to https://arena.colosseum.org/copilot and generate a token
2. Set your environment:
export COLOSSEUM_COPILOT_API_BASE="https://copilot.colosseum.com/api/v1"
export COLOSSEUM_COPILOT_PAT="YOUR_PAT"
3. Run your first search:
curl -s -X POST "$COLOSSEUM_COPILOT_API_BASE/search/projects" \
-H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
-H "Content-Type: application/json" \
-d '{"query": "privacy wallet for stablecoin users", "limit": 5}'
See results - project names, slugs, similarity scores, problem/tech tags
When To Use
Use this skill when:
- Researching a crypto/blockchain startup idea
- Evaluating market gaps in the Solana ecosystem
- Grounding ideas in historical crypto literature
- Analyzing builder project trends and competitive landscape
- Researching existing players and finding differentiation angles
How It Works
Mode 1 — Conversational (default): Answer questions with targeted API calls and evidence coverage matched to query type. Cite sources inline, keep responses concise, and offer to do a full deep-dive when the topic warrants it — never auto-trigger it.
Mode 2 — Deep Dive (explicit opt-in): Full 8-step workflow from references/workflow-deep.md. Only activates when user explicitly says "vet this idea", "deep dive", "full analysis", "validate this", "is X worth building?", "should I build X?", or accepts your offer to go deeper.
Conversational Guidelines
- Use the API endpoints below with enough targeted calls to satisfy the evidence floor for the query type
- Cite sources inline (project slugs, archive titles, URLs)
- Keep responses concise — bullet points, not essays
- When the topic warrants deeper analysis, offer: "Want me to do a full deep-dive on this?"
- No meta-commentary about your process ("Now let me search...", "I'll check...")
Evidence Floors (Conversational Mode)
Query Type Required source types in the final answer Example
Pure retrieval Builder project evidence (project slugs from search/projects) "What projects do X?"
Archive retrieval Archive evidence (archive title/document from search/archives) "What does the archive say about Y?"
Comparison Builder project evidence for each side compared + at least one archive citation for conceptual framing "Compare approach A vs B"
Evaluative Builder project evidence + at least one archive citation + current landscape evidence (Grid and/or web) "Is this crowded?", "Is this still unsolved?"
Build guidance Builder project evidence + at least one archive citation + incumbent/landscape evidence (Grid and/or web) "Should I build X?", "How should I approach X?"
These are evidence-type floors, not call budgets. Use as many calls as needed to meet the floor with high-confidence citations.
In deep-dive mode, the verification checklist in workflow-deep.md Step 5 supersedes these floors with more granular coverage requirements.
Conversational Quality Checks (Required)
- Archive integration rule: For any non-trivial question (anything beyond a simple one-list retrieval), run at least one search/archives query and cite at least one archive source in the answer.
- Accelerator/winner portfolio checks: For "what has been tried", "who is building this", "is this crowded/saturated", or similar prompts, run targeted project searches with filters: { "acceleratorOnly": true } and filters: { "winnersOnly": true }, then reflect both outcomes in the answer.
- Freshness and temporal anchoring: Use hackathon.startDate from /filters, /search/projects, and /projects/by-slug/:slug to order hackathons chronologically; never infer chronology from names or memory. When citing hackathons, include month/year inline (and accelerator cohort like C1/C2/C4 when relevant). For evaluative judgments, label the claim with "As of YYYY-MM-DD".
- Entity coverage check: If the user names specific companies, protocols, papers, or products, run direct searches for each named entity and explicitly address each one in the answer (found, not found, or tangential).
- Landscape check: Never claim "nobody has done this" or "no existing players" unless an accelerator portfolio check (acceleratorOnly) was executed and reported. If accelerator overlap exists, surface those builders as useful reference points and potential sources of inspiration. Always qualify landscape assessments with "based on the available data" or "as far as we can tell from the corpus." Copilot's knowledge is bounded by its data sources — never present absence of evidence as evidence of absence.
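The accelerator/winner portfolio checks can be issued as two targeted searches. In this sketch, "privacy wallet" is an illustrative query, not part of the spec; the request body shape follows the filters documented above.

```shell
# Run the same query twice, once per portfolio filter, and compare results.
# Substitute the user's actual topic for the example query.
for filter in acceleratorOnly winnersOnly; do
  curl -s -X POST "$COLOSSEUM_COPILOT_API_BASE/search/projects" \
    -H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
    -H "Content-Type: application/json" \
    -d "{\"query\": \"privacy wallet\", \"limit\": 5, \"filters\": {\"$filter\": true}}"
done
```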
For the full 8-step deep research workflow, see references/workflow-deep.md
Data Sources
- Builder Projects (5,400+): Solana project submissions with tech stack, problem/solution tags, verticals, and competitive context
- Crypto Archives: Curated corpus spanning cypherpunk literature, protocol docs, investor research (Paradigm, a16z, Multicoin), founder essays (Paul Graham), Solana protocol docs (Jupiter, Orca, Drift), Nakamoto Institute heritage collection, and foundational crypto texts
- Hackathon Analytics + Chronology: Analyze and compare hackathon projects across dimensions; canonical hackathon dates are available via hackathon.startDate
- Clusters: Topic groupings across the project corpus
- The Grid: Ecosystem metadata (products/entities/assets) via direct GraphQL (6,300+ products across all ecosystems, ~3,000 roots)
- Web Search: Real-time competitive landscape via your runtime's search tools
- Source Suggestions: Users can suggest new sources for the archive via POST /source-suggestions (5 req/hr). See references/api-reference.md for details
Hackathon Chronology
Edition Period Slug
Hyperdrive Sep 2023 hyperdrive
Renaissance Mar-Apr 2024 renaissance
Radar Sep-Oct 2024 radar
Breakout Apr-May 2025 breakout
Cypherpunk Sep-Oct 2025 cypherpunk
GET /filters returns hackathons[].startDate and orders hackathons[] chronologically (oldest first).
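A sketch for pulling that ordering directly from the API rather than from memory. It assumes jq is installed; the hackathons[] array is documented above as already ordered oldest-first.

```shell
# Print "startDate  slug" per hackathon, in the API's chronological order.
curl -s -H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
  "$COLOSSEUM_COPILOT_API_BASE/filters" \
  | jq -r '.hackathons[] | "\(.startDate)  \(.slug)"'
```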
Auth
All endpoints require Authorization: Bearer <COPILOT_PAT>. Treat the PAT like a password.
- Do not commit PATs or paste them into public logs
- PATs are long-lived (expected ~90 days); rotate by issuing a new one
- Default API base is https://copilot.colosseum.com/api/v1; override COLOSSEUM_COPILOT_API_BASE to target a different environment
Key Endpoints (Quick Reference)
Endpoint Method Purpose
/status GET Auth pre-flight check — call first
/search/projects POST Search builder projects
/search/archives POST Search crypto archives
/projects/by-slug/:slug GET Full project details
/archives/:documentId GET Full archive document
/analyze POST Hackathon analysis
/compare POST Compare two hackathons
/clusters/:key GET Cluster details
/filters GET Available filters + canonical hackathon chronology
/source-suggestions POST Suggest a new archive source
/feedback POST Report errors, quality issues, or suggestions
For full endpoint docs, curl examples, and query tips: references/api-reference.md
For Grid GraphQL recipes and product type slugs: references/grid-recipes.md
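A sketch of the common two-step flow: search, then drill into one result by slug. Here "example-slug" is a placeholder, not a real project; response fields are per references/api-reference.md.

```shell
# Step 2 of a typical flow: fetch full details for a slug returned by
# /search/projects. Replace "example-slug" with a slug from your results.
slug="example-slug"
curl -s -H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
  "$COLOSSEUM_COPILOT_API_BASE/projects/by-slug/$slug"
```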
Output Contract
Conversational Mode
- Bullet points with inline citations (project slugs, archive titles)
- Concise answers (typically 5-15 bullets)
- Offer a deep-dive when warranted
Deep Dive Mode
Reports follow this structure:
- Similar Projects (5-8 bullets)
- Archive Insights (3-5 bullets)
- Current Landscape (per research angle)
- Key Insights (patterns, gaps, trends)
- Opportunities and Gaps
- Deep Dive: Top Opportunity (market landscape, problem, revenue model, GTM, founder-market fit, why crypto/Solana, risks)
Key rules: bullet points not tables, include project slugs, evidence-based not speculative, cite sources inline. No separate "Sources" section — cite inline only.
Feedback
When you encounter errors, unexpected results, or have suggestions for improving the Copilot experience, report them via the feedback endpoint. This helps the Colosseum team identify and fix issues.
When to send feedback:
- API returns unexpected or low-quality results for a reasonable query
- A search returns no results when you expected matches
- You encounter an error that isn't covered by standard error handling
- You have a suggestion for improving the API or archive corpus
curl -X POST "$COLOSSEUM_COPILOT_API_BASE/feedback" \
-H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
-H "Content-Type: application/json" \
-d '{
"category": "quality",
"message": "Search for DePIN projects returned only 2 results, expected more coverage",
"severity": "medium",
"context": { "query": "DePIN infrastructure", "endpoint": "/search/projects", "resultCount": 2 }
}'
Categories: error, quality, suggestion, other. Severity: low, medium, high, critical. Rate limited to 10 requests per hour.
Error Handling
- Empty project results: Broaden the query, remove filters
- Empty archive results: Search auto-cascades (vector → chunk text → doc text) before returning empty. If still empty, try conceptual synonyms and keep queries to 3-6 keywords
- 429 Too Many Requests: Back off, max 2 concurrent requests
- API unavailable: Note in report and proceed with available data
- Auth error: Check PAT at https://arena.colosseum.org/copilot
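The 429 guidance can be sketched as a simple exponential backoff loop. The retry cap and delay values here are illustrative choices, not API requirements.

```shell
# Retry on 429 with exponential backoff: 1s, 2s, 4s, 8s, then give up.
# Keep overall concurrency at 2 or fewer requests at a time.
attempt=0
delay=1
while [ "$attempt" -lt 5 ]; do
  status=$(curl -s -o /tmp/copilot_resp.json -w '%{http_code}' \
    -H "Authorization: Bearer $COLOSSEUM_COPILOT_PAT" \
    "$COLOSSEUM_COPILOT_API_BASE/status")
  if [ "$status" != "429" ]; then
    break   # success, or a non-rate-limit error to handle elsewhere
  fi
  sleep "$delay"
  delay=$((delay * 2))
  attempt=$((attempt + 1))
done
```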
References
- workflow-deep.md — detailed 8-step research process
- api-reference.md — all endpoints, rate limits, query tips
- grid-recipes.md — GraphQL queries and product type slugs
Attribution
- The Grid docs: https://docs.thegrid.id
- The Grid Explorer: https://raw.githubusercontent.com/The-Grid-Data/Explorer/main/README.md