AI setup tailored for your codebase.
Audits your AI setup, shows you exactly what to change, and applies changes only with your approval. Full undo included.
The problem
Bad setup = bad agent.
Without setup
No config, stale context
- No project context for agents
- No context retained across sessions
- No learning from past AI sessions
- Missing MCPs that unlock key features
- Stale configs nobody updates
- No way to undo config changes
With setup
Full context, always up to date
- Full project context generated
- Persistent context across sessions
- Session learnings captured automatically
- Right MCPs recommended and installed
- Configs stay fresh as code changes
- Full undo for every change
The challenge
Why is it hard to have a perfect setup?
Invisible gaps in your setup
There's a better MCP for your database, a skill that cuts your deploy time in half — and you have no idea they exist.
Best practices change weekly
New tools drop, guides update, community standards shift. What was best practice last month is outdated today.
Configs rot while code evolves
You refactor daily but CLAUDE.md still references last quarter's architecture. Your agent works off a lie.
Every developer has their own setup
One dev has MCPs configured, another doesn't. Rules differ across machines. There's no single source of truth for the team.
Meet Caliber
The best playbooks and practices, generated for your codebase.
We collected and curated the best skills, configs, and MCP recommendations from research and the community — so your AI agents get the setup they deserve.
Built on research and community-curated sources
Caliber Score
Deterministic, offline, no LLM needed. Rates your setup across 5 categories.
- Config files present, skills, MCPs, cross-platform parity
- Benchmarked via SkillsBench — no bloat, no vague rules
- References match actual project dirs, files, and deps
- Documented commands and paths actually exist
- Config recency, no leaked secrets, permissions set
Commands
Simple CLI. Powerful workflow.
- caliber init: Scan project, generate all config files
- caliber score: Rate your setup (0–100, no LLM needed)
- caliber score --compare: Compare config quality across git branches
- caliber regenerate: Re-analyze and regenerate all configs
- caliber refresh: Update docs based on recent code changes
- caliber learn: Capture patterns from AI coding sessions
- caliber hooks: Manage auto-refresh automation
- caliber undo: Revert all changes (full backup)
Works with
No API key needed with Claude Code or Cursor — use your existing subscription.
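A typical audit-first session might look like the sketch below. It uses only the commands documented above; the comments restate their documented behavior, and no output is shown because actual output will vary by project.

```
caliber score             # rate the current setup (0-100, offline, no LLM)
caliber init              # scan the project and generate config files
caliber score --compare   # compare config quality across git branches
caliber undo              # revert every change if the result doesn't fit
```

Because scoring is deterministic and offline, you can run caliber score before and after caliber init to see exactly what the generated configs changed.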
Why Caliber
Built for how you actually work.
Audits first, writes second
Scores your setup, proposes diffs, lets you review interactively. Never modifies without your approval.
Learns from your sessions
Monitors AI coding sessions via hooks. Captures patterns, corrections, and gotchas into CALIBER_LEARNINGS.md.
Private. Local. Safe.
Runs on your machine with your own API key. Your code never leaves your environment.
Fully reversible
Every change is backed up. Run caliber undo to revert. Compare configs across git branches with caliber score --compare.
Get started
Try it in 30 seconds.
Then run: caliber init