Track, control, and budget GPU pods across 6 cloud providers — from CLI, mobile, or web. Stop runaway costs before they happen.
# Install
npm install -g podmon
# Authenticate
podmon auth login
# Connect a provider
podmon provider add runpod --api-key rpa_...
# Launch a pod
podmon create --gpu RTX_4090 --image pytorch/pytorch:latest
# Compare prices across all providers
podmon prices --gpu H100_SXM
Manage pods across Prime Intellect, RunPod, Vast.ai, TensorDock, Lambda Labs, and Nebius from a single interface.
Live burn rate, per-provider breakdowns, agent budgets, and 14-day spend trends — updated every second.
Assign budgets and GPU limits per autonomous agent. Claude Code, Codex, Aider — each gets its own guardrails.
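Per-agent setup might look like the following sketch. The `agent` subcommand, its flags, and the agent names are illustrative assumptions, not documented syntax:

```shell
# Hypothetical: give each coding agent its own budget and GPU allowlist
podmon agent add claude-code --daily-budget 25 --allow-gpu RTX_4090
podmon agent add aider --daily-budget 10 --allow-gpu RTX_4090
```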
Block or warn on cost overruns, lifetime limits, restricted GPUs, time windows, and more. Rules enforced at creation time.
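A policy definition could plausibly be expressed like this. The `policy` subcommand and rule syntax are assumptions for illustration only:

```shell
# Hypothetical: hard-block pods that would exceed a 4-hour lifetime,
# and warn when daily spend crosses $50
podmon policy add --rule "max-lifetime=4h" --action block
podmon policy add --rule "daily-spend>50" --action warn
```

Because rules are checked when the pod is created, a blocked request never starts billing.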
Monitor pods and kill runaway workloads from your phone. Push notifications for budget alerts and pod failures.
Create a pod with just a GPU type: podmon picks the cheapest available provider and automatically falls back to the next one if creation fails.
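Using the `create` command from the quickstart, omitting any provider lets podmon choose for you:

```shell
# No provider specified: podmon compares prices across providers
# (as `podmon prices` does) and creates the pod on the cheapest one
podmon create --gpu H100_SXM --image pytorch/pytorch:latest
```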
All providers fully integrated with pod CRUD, GPU normalization, cost tracking, and availability queries.
Install the CLI and add your provider API keys. Credentials never leave your machine.
Create pods, set policies, and let the daemon track costs and health in real time.
Web dashboard, mobile app, or CLI. Kill runaway pods, check costs, and manage agents.