AI setup tailored for your codebase.

Audits your AI setup, shows you exactly what to change, and applies changes only with your approval. Full undo included.

$ npm install -g @rely-ai/caliber

The problem

Bad setup = bad agent.

Without setup

No config, stale context

  • No project context for agents
  • No context retained across sessions
  • No learning from past AI sessions
  • Missing MCPs that unlock key features
  • Stale configs nobody updates
  • No way to undo config changes

With setup

Full context, always up to date

  • Full project context generated
  • Persistent context across sessions
  • Session learnings captured automatically
  • Right MCPs recommended and installed
  • Configs stay fresh as code changes
  • Full undo for every change

The challenge

Why is it hard to have a perfect setup?


Invisible gaps in your setup

There's a better MCP for your database, a skill that cuts your deploy time in half — and you have no idea they exist.

Best practices change weekly

New tools drop, guides update, community standards shift. What was best practice last month is outdated today.

Configs rot while code evolves

You refactor daily but CLAUDE.md still references last quarter's architecture. Your agent works off a lie.

Every developer has their own setup

One dev has MCPs configured, another doesn't. Rules differ across machines. There's no single source of truth for the team.

Meet Caliber

The best playbooks and practices, generated for your codebase.

We collected and curated the best skills, configs, and MCP recommendations from research and the community — so your AI agents get the setup they deserve.

Caliber Score

Deterministic, offline, no LLM needed. Rates your setup across 5 categories.

$ caliber score
Score: 35/100 Grade: D
Files & Setup 8/25 · Quality 10/25 · Grounding 5/20
Accuracy 7/15 · Freshness & Safety 5/10
$ caliber init && caliber score
Score: 94/100 Grade: A (+59 pts)
Files & Setup 24/25 · Quality 23/25 · Grounding 19/20
Accuracy 15/15 · Freshness & Safety 10/10
Quality scored against SkillsBench — the open benchmark for AI coding skills
Files & Setup · 25 pts

Config files present, skills, MCPs, cross-platform parity

Quality · 25 pts

Benchmarked via SkillsBench — no bloat, no vague rules

Grounding · 20 pts

References match actual project dirs, files, and deps

Accuracy · 15 pts

Documented commands and paths actually exist

Freshness & Safety · 10 pts

Config recency, no leaked secrets, permissions set
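The Grounding and Accuracy categories come down to one idea: everything your configs mention should actually exist on disk. A rough sketch of that check in shell (an illustration of the concept, not Caliber's actual implementation; the sample project and paths are made up):

```shell
# Illustrative only: flag path-like references in CLAUDE.md that
# don't exist on disk. Caliber's real checks are more thorough.
cd "$(mktemp -d)"

# Hypothetical sample project: one real file, one stale reference.
printf 'See src/api/routes.ts and docs/old/plan.md\n' > CLAUDE.md
mkdir -p src/api && touch src/api/routes.ts

# Extract path-like tokens and report each as OK or MISSING.
report=$(grep -oE '[A-Za-z0-9_./-]+\.(ts|md)' CLAUDE.md | sort -u |
  while read -r path; do
    if [ -e "$path" ]; then echo "OK $path"; else echo "MISSING $path"; fi
  done)
echo "$report"
```

A stale reference like the `docs/old/plan.md` line above is exactly the kind of drift that drags a Grounding score from 19/20 down to 5/20.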

Commands

Simple CLI. Powerful workflow.

caliber init · Scan project, generate all config files
caliber score · Rate your setup (0–100, no LLM needed)
caliber score --compare · Compare config quality across git branches
caliber regenerate · Re-analyze and regenerate all configs
caliber refresh · Update docs based on recent code changes
caliber learn · Capture patterns from AI coding sessions
caliber hooks · Manage auto-refresh automation
caliber undo · Revert all changes (full backup)

Works with

Claude Code · No key needed
Cursor · No key needed
OpenAI Codex · API key

No API key needed with Claude Code or Cursor — use your existing subscription.

Why Caliber

Built for how you actually work.

Review

Audits first, writes second

Scores your setup, proposes diffs, and lets you review them interactively. It never modifies files without your approval.

Learning

Learns from your sessions

Monitors AI coding sessions via hooks. Captures patterns, corrections, and gotchas into CALIBER_LEARNINGS.md.
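Captured learnings might look something like this (an illustrative format only; the actual CALIBER_LEARNINGS.md layout may differ, and the entries below are invented examples):

```markdown
## 2026-01-15 · session learnings

- Gotcha: `npm test` requires the dev server to be stopped first.
- Correction: API handlers live in `src/api/`, not `src/routes/`
  (moved in the last refactor).
- Pattern: new endpoints always get a matching fixture under
  `tests/fixtures/`.
```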

Your keys

Private. Local. Safe.

Runs on your machine with your own API key or subscription. Your code never leaves your environment.

Undo

Fully reversible

Every change is backed up. Run caliber undo to revert. Compare configs across git branches with caliber score --compare.

Get started

Try it in 30 seconds.

$ npm install -g @rely-ai/caliber

Then run: caliber init

GitHub · npm · Discord · MIT License
© 2026 Rely AI