One CLI to track costs, enforce budgets, and kill pods across 7 GPU providers.
Claude Code, Codex, Aider — they launch GPU pods to train, eval, and experiment. You don't always know when. You don't always know how much they cost. And you definitely don't know when they forget to clean up.
$ npm install -g podmon
$ podmon prices --gpu H100_SXM
Provider          $/hr      Status
────────────────  ────────  ─────────
Vast.ai           $2.04     Available
RunPod            $2.49     Available
Lambda Labs       $2.99     Available
Prime Intellect   $3.12     Available
$ podmon create --gpu H100_SXM
Selected Vast.ai at $2.04/hr
✓ Pod created: llama-finetune (H100_SXM x1)
$ podmon ls
NAME             PROVIDER   GPU          $/HR    UPTIME
───────────────  ─────────  ───────────  ──────  ────────
llama-finetune   vastai     H100_SXM     $2.04   4h 22m
embedding-gen    runpod     A100_80GB    $1.89   12h 5m
Real-time cost tracking across every provider. See who's spending what, where, and why. Burn rate, daily trends, per-agent budgets.
Policy gates enforce rules before pods launch. Cost limits, GPU restrictions, time windows. Block or warn.
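A gate like that reduces to a pure function: check each rule against the launch request, then allow, warn, or block. A sketch of that shape, assuming a hypothetical policy schema (the rule names and the `evaluate` function are illustrative, not podmon's actual config):

```typescript
// Illustrative sketch -- NOT podmon's real policy schema or API.
type Verdict = "allow" | "warn" | "block";

interface LaunchRequest {
  gpu: string;
  hourlyRate: number;  // $/hr quoted by the provider
  hourOfDay: number;   // 0-23, local time
}

interface Policy {
  maxHourlyRate: number;           // cost limit
  allowedGpus: string[];           // GPU restriction
  allowedHours: [number, number];  // launch window [start, end)
  onViolation: "warn" | "block";   // what to do when a rule fails
}

function evaluate(req: LaunchRequest, policy: Policy): Verdict {
  const ok =
    req.hourlyRate <= policy.maxHourlyRate &&
    policy.allowedGpus.includes(req.gpu) &&
    req.hourOfDay >= policy.allowedHours[0] &&
    req.hourOfDay < policy.allowedHours[1];
  return ok ? "allow" : policy.onViolation;
}

const policy: Policy = {
  maxHourlyRate: 3.0,
  allowedGpus: ["H100_SXM", "A100_80GB"],
  allowedHours: [8, 20],
  onViolation: "block",
};

// A $2.04/hr H100 at 10:00 passes; a $3.12/hr pod trips the cost limit.
console.log(evaluate({ gpu: "H100_SXM", hourlyRate: 2.04, hourOfDay: 10 }, policy)); // allow
console.log(evaluate({ gpu: "H100_SXM", hourlyRate: 3.12, hourOfDay: 10 }, policy)); // block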
Kill runaway pods from your phone. Push notifications when budgets spike. One tap to terminate.
Works with Hyperbolic · Prime Intellect · RunPod · Vast.ai · TensorDock · Lambda Labs · Nebius