Celestial Capabilities
Everything you need for AI-powered coding, from autonomous agents to fine-grained permissions.
Featherweight Runtime
~50MB RAM footprint. Pure PHP — no Node.js, Python, or Electron bloat. Static binary with zero system dependencies. Starts in under a second.
Subagent Swarm
Spawn parallel child agents with dependency chains, sequential groups, and automatic retries. Up to 10 concurrent agents.
Permission System
Guardian, Argus, and Prometheus modes. Auto-approve safe ops, approve each action, or go fully autonomous.
40+ LLM Providers
OpenAI, Anthropic, Google, DeepSeek, Groq, Ollama, xAI, Mistral, OpenRouter, StepFun, and many more through native SDKs and OpenAI-compatible APIs.
Smart Context
Importance-scored pruning, tool result deduplication, LLM-based compaction, and persistent memory extraction across sessions.
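As a rough illustration of the importance-scored pruning idea (a sketch, not Celestial's actual code — the scoring fields and budget are assumptions): score each message, then drop the lowest-scoring ones until the history fits a token budget.

```python
# Illustrative context pruning: drop least-important messages first
# until the conversation fits the token budget. The "tokens" and
# "importance" fields are hypothetical, for demonstration only.

def prune(messages, budget):
    """messages: list of dicts with 'tokens' and 'importance' keys."""
    total = sum(m["tokens"] for m in messages)
    for m in sorted(messages, key=lambda m: m["importance"]):
        if total <= budget:
            break
        messages.remove(m)          # evict the least important message
        total -= m["tokens"]
    return messages

history = [
    {"role": "tool", "tokens": 800, "importance": 0.1},
    {"role": "user", "tokens": 50, "importance": 0.9},
    {"role": "assistant", "tokens": 300, "importance": 0.6},
]
pruned = prune(history, budget=400)
```

The real pipeline layers more on top of this — deduplicated tool results, LLM-based compaction, and memory extraction — but the eviction order is the core idea.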
Reasoning & Thinking
Native support for extended thinking and reasoning tokens. See the model's chain-of-thought with configurable budget controls.
Terminal-Native
Full TUI with Symfony Console or pure ANSI fallback. Works in any terminal, looks stunning in modern ones.
Power Commands
20+ workflow shortcuts with unique animations: :unleash, :review, :deep-dive, :research, and more. Combinable.
How It Looks
A rich, interactive experience running entirely in your terminal.
Choose Your Trust Level
Three modes that balance safety and autonomy, from cautious to fully autonomous.
Guardian
- Safe commands auto-approved
- Writes and unknowns gated
- Best for daily use (default)

Argus
- Approve every tool call
- Full audit trail
- Best for learning / exploring

Prometheus
- No approval required
- Maximum speed
- Best for trusted CI/CD
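In pseudocode terms, the three trust levels reduce to one gate per tool call. The mode behaviors come from this page; the tool names and the "safe" set below are assumptions for the sketch:

```python
# Hypothetical approval gate for the three permission modes.
SAFE_TOOLS = {"file_read", "glob", "grep"}   # assumed read-only set

def needs_approval(mode: str, tool: str) -> bool:
    if mode == "prometheus":        # fully autonomous: never ask
        return False
    if mode == "argus":             # approve every tool call
        return True
    # guardian (default): auto-approve safe ops, gate writes and unknowns
    return tool not in SAFE_TOOLS
```

So under Guardian, `grep` runs immediately while `file_write` waits for approval; Argus asks for everything; Prometheus asks for nothing.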
The Agent Hierarchy
Three specialized agent types with progressively narrower capabilities to prevent privilege escalation.
| Type | Capabilities | Can Spawn | Use Case |
|---|---|---|---|
| General | Full: read, write, edit, bash, subagent | General, Explore, Plan | Autonomous coding tasks |
| Explore | Read-only: file_read, glob, grep, bash | Explore only | Research & investigation |
| Plan | Read-only: file_read, glob, grep, bash | Explore only | Planning & architecture |
Dependency DAGs
Agents declare depends_on with automatic circular-dependency detection. Upstream results inject into downstream task prompts.
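Circular-dependency detection here is the classic three-color depth-first search. A minimal sketch (the depends_on field is from this page; everything else is illustrative):

```python
def find_cycle(deps):
    """deps maps each agent to the agents it depends_on (all keys present).
    Returns True if the graph contains a cycle (three-color DFS)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}

    def visit(node):
        color[node] = GRAY
        for dep in deps.get(node, ()):
            if color[dep] == GRAY:      # back edge: we looped around
                return True
            if color[dep] == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in deps if color[n] == WHITE)
```

A swarm like `{"c": ["a", "b"], "b": ["a"], "a": []}` passes; `{"a": ["b"], "b": ["a"]}` is rejected before anything runs.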
Sequential Groups
Assign a group to run agents serially within a parallel swarm. Ordered pipelines without sacrificing overall concurrency.
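One simple way to get that behavior (a sketch, not the actual scheduler): bucket tasks by group and hand each bucket to one worker, so members of a group run back-to-back while different groups run concurrently.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

tasks = [("a1", "alpha"), ("a2", "alpha"), ("b1", "beta")]

groups = defaultdict(list)
for name, group in tasks:
    groups[group].append(name)      # preserve declared order per group

results = []

def run_group(names):
    # one worker walks a group front to back -> an ordered pipeline,
    # while other groups still run on other workers
    for name in names:
        results.append(name)        # stand-in for running the agent

with ThreadPoolExecutor(max_workers=4) as pool:
    for names in groups.values():
        pool.submit(run_group, names)
```

Here `a1` always finishes before `a2`, but `b1` is free to interleave with either.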
Await & Background
await blocks until the agent finishes. background returns immediately — results inject on the next LLM turn.
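The distinction maps onto familiar async primitives. Celestial itself is PHP on Amp/Revolt, but the same shape in Python's asyncio (an analogy, not the product's code):

```python
import asyncio

async def agent(name):
    await asyncio.sleep(0.01)               # stand-in for a child agent
    return f"{name} done"

async def main():
    bg = asyncio.create_task(agent("bg"))   # background: returns at once
    fg = await agent("fg")                  # await: blocks until finished
    # ...the conversation continues; on a later turn the result is ready:
    return fg, await bg

results = asyncio.run(main())
```

The background task makes progress while the awaited one runs; its result is simply collected whenever the loop next reaches it.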
Auto-Retry with Backoff
Failed agents retry with exponential backoff + jitter. Auth errors (401/403) are never retried.
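The retry policy can be sketched as exponential backoff with full jitter plus a never-retry set (the base, cap, and attempt limit below are assumptions, not the shipped defaults):

```python
import random

NO_RETRY_STATUSES = {401, 403}      # auth failures are never retried

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter (illustrative policy)."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def should_retry(status, attempt, max_attempts=4):
    return status not in NO_RETRY_STATUSES and attempt < max_attempts
```

Jitter spreads retries out so a burst of failed agents doesn't hammer the provider in lockstep, while 401/403 fail fast because retrying a bad API key never helps.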
Concurrency Control
Global semaphore caps concurrent agents. Per-group semaphores enforce ordering. Configurable depth limits.
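The global cap is a plain counting semaphore. A small demonstration (threads standing in for agents; the real pool size is 10):

```python
import threading
import time

MAX_AGENTS = 3                      # small cap for the demo
pool = threading.BoundedSemaphore(MAX_AGENTS)
lock = threading.Lock()
running = peak = 0

def run_agent():
    global running, peak
    with pool:                      # blocks while MAX_AGENTS are in flight
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.01)            # stand-in for real agent work
        with lock:
            running -= 1

threads = [threading.Thread(target=run_agent) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Eight agents are requested, but `peak` never exceeds the cap; the rest queue on the semaphore until a slot frees up.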
Slot Yielding
Parents yield their concurrency slot to children and reclaim it after, preventing deadlocks when the pool is full.
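The deadlock it prevents is easy to reproduce: if the pool is full of parents all waiting on children, no child can ever start. A deliberately tiny sketch of the yield-and-reclaim dance:

```python
import threading

pool = threading.Semaphore(1)       # a full pool: one slot, held by the parent
done = []

def child():
    with pool:                      # needs the slot the parent just freed
        done.append("child")

def spawn_and_wait(child_fn):
    pool.release()                  # parent yields its slot...
    child_fn()                      # ...the child runs to completion...
    pool.acquire()                  # ...and the parent reclaims the slot

with pool:                          # parent occupies the only slot
    spawn_and_wait(child)           # without the yield, child() would block forever
```

Delete the `pool.release()` line and the child's acquire can never succeed — which is exactly the full-pool deadlock slot yielding avoids.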
Stuck Detection
Rolling-window repetition detection for headless agents: nudge → final notice → force return. No infinite loops.
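The escalation ladder can be sketched with a fixed-size deque over recent actions (the window size and repetition threshold below are assumptions):

```python
from collections import deque

WINDOW, REPEAT_LIMIT = 6, 3         # assumed thresholds, not the real ones

def escalation(actions):
    """Yield escalation steps as identical actions pile up in the window."""
    recent = deque(maxlen=WINDOW)
    strikes = 0
    for action in actions:
        recent.append(action)
        if recent.count(action) >= REPEAT_LIMIT:
            strikes += 1
            yield ("nudge", "final notice", "force return")[min(strikes - 1, 2)]

steps = list(escalation(["grep TODO"] * 5))
```

An agent that keeps issuing the same call climbs the ladder — nudge, final notice, forced return — while varied activity never trips it.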
Watchdog Timers
Configurable idle timeout per agent. Stuck agents are killed automatically without manual intervention.
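An idle-timeout watchdog is a timer that gets reset on every sign of life and fires if it ever runs out. A minimal sketch (the class and timeout are illustrative, not Celestial's API):

```python
import threading

class Watchdog:
    """Illustrative idle-timeout watchdog: if nothing pets it within
    `timeout` seconds, the kill callback fires."""
    def __init__(self, timeout, on_timeout):
        self.timeout, self.on_timeout = timeout, on_timeout
        self.timer = None
        self.pet()                  # start the idle clock

    def pet(self):                  # call on every sign of agent activity
        if self.timer:
            self.timer.cancel()     # activity: restart the clock
        self.timer = threading.Timer(self.timeout, self.on_timeout)
        self.timer.start()

killed = []
dog = Watchdog(0.05, lambda: killed.append(True))
dog.pet()                           # activity resets the idle clock
dog.timer.join()                    # demo: go idle and let the timeout fire
```

As long as the agent keeps producing output, `pet()` keeps the callback at bay; go quiet past the timeout and the kill fires with no human in the loop.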
Permission Narrowing
Children can only reduce capabilities, never escalate. An Explore agent can only spawn more Explore agents.
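Narrowing is just set intersection: whatever a child requests is clipped to what its parent holds, so escalation is structurally impossible. A sketch (agent types and tools from this page; the function itself is hypothetical):

```python
SPAWNABLE = {
    "general": {"general", "explore", "plan"},
    "explore": {"explore"},
    "plan":    {"explore"},
}

def child_caps(parent_caps: set, requested: set) -> set:
    # anything the parent lacks is silently dropped, never granted
    return requested & parent_caps

explore_caps = {"file_read", "glob", "grep", "bash"}
granted = child_caps(explore_caps, {"file_read", "file_write", "bash"})
```

Here the child asked for `file_write`, but an Explore parent doesn't have it, so the grant comes back without it — and the spawn table means Explore can never mint a General agent to get it indirectly.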
Live Swarm Dashboard
Press ctrl+a to open the live Swarm Control overlay — progress bars, resource tracking, and per-agent stats that auto-refresh every 2 seconds.
```
⏺ S W A R M   C O N T R O L

████████████████████████████████████████ 100.0%
28 of 28 agents completed

✓ 28 done   ● 0 running   ◌ 0 queued   ✗ 0 failed

──── ☉ Resources ─────────────────────────────────
Tokens    14.1M in · 453k out · 14.5M total
Cost      $49.07 · avg $1.75/agent
Elapsed   12m 23s · rate 2.3 agents/min

Esc/q close · auto-refreshes every 2s
```
Live Progress
Global progress bar and per-agent status. See running, done, queued, and failed counts at a glance.
Tree View
Hierarchical display with status icons — running, done, failed, waiting on dependencies. Toggle with ctrl+a.
Resource Tracking
Token usage, cost breakdown, elapsed time, and agent throughput rate — all auto-refreshing in real time.
Install & Run
Static binary, PHAR, or from source — pick whichever fits your setup.
Supports 40+ LLM providers — OpenAI, Anthropic, Google, DeepSeek, Groq, Mistral, xAI, OpenRouter, StepFun, Ollama, and any OpenAI-compatible endpoint.
How It Works
A thin orchestrator loop that delegates to specialized subsystems.
Agent Loop
A ~570-line REPL orchestrator that manages the conversation, delegates tool calls, handles streaming, and coordinates subagents.
Subagent System
Three agent types with dependency resolution, concurrency semaphores, retry policies, and stuck detection.
Session Persistence
SQLite-backed storage for sessions, messages, memories, and settings. Resume conversations and recall context.
Built With Cosmic Power
Modern PHP 8.4 with async I/O, rich terminal UI, and first-class LLM SDK support.
PHP 8.4
Strict types, enums, readonly classes, and property hooks throughout the entire codebase.
Amp / Revolt
Async HTTP streaming with non-blocking I/O. Responsive interactions even with long-running LLM requests.
Symfony TUI
Rich full-screen terminal UI with widgets, dialogs, animations, markdown rendering, and an inline editor.
Prism PHP
First-class SDK for Anthropic, OpenAI, and other native providers with structured output and tool calling.