All 7 providers live

Your AI agents are
burning money.
You should know about it.

One CLI to track costs, enforce budgets, and kill pods across 7 GPU providers.

Your coding agent just spun up eight H100s at $25/hr. Are you going to sit there and watch?

Claude Code, Codex, Aider — they launch GPU pods to train, run evals, and experiment. You don't always know when. You don't always know how much they cost. And you definitely don't know when they forget to clean up.

One CLI. Every provider.


$ npm install -g podmon

$ podmon prices --gpu H100_SXM

  Provider          $/hr     Status
  ───────────────  ───────  ─────────
  Vast.ai           $2.04    Available
  RunPod            $2.49    Available
  Lambda Labs       $2.99    Available
  Prime Intellect   $3.12    Available

$ podmon create --gpu H100_SXM
  Selected Vast.ai at $2.04/hr
  ✓ Pod created: llama-finetune (H100_SXM x1)

$ podmon ls

  NAME             PROVIDER   GPU         $/HR   UPTIME
  ───────────────  ─────────  ──────────  ─────  ───────
  llama-finetune   vastai     H100_SXM    $2.04  4h 22m
  embedding-gen    runpod     A100_80GB   $1.89  12h 5m

Watch

Real-time cost tracking across every provider. See who's spending what, where, and why. Burn rate, daily trends, per-agent budgets.

Guard

Policy gates enforce rules before pods launch. Cost limits, GPU restrictions, time windows. Block or warn.
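A policy gate could look something like the sketch below. This schema is illustrative only — the file name, keys, and values are hypothetical, not podmon's actual policy format:

  # .podmon/policy.yaml (hypothetical schema, for illustration)
  rules:
    - name: cap-hourly-spend
      max_cost_per_hour: 10.00     # refuse any pod priced above $10/hr
      action: block
    - name: business-hours-only
      time_window: Mon-Fri 08:00-20:00
      action: warn                 # allow, but notify outside the window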

Control

Kill runaway pods from your phone. Push notifications when budgets spike. One tap to terminate.

Works with Hyperbolic · Prime Intellect · RunPod · Vast.ai · TensorDock · Lambda Labs · Nebius

Install

$ npm install -g podmon