openclaw v1.8.0

Agent Swarm

@RuneweaverStudios · 0 stars · last commit 1mo ago · 0 open issues

LLM routing and subagent delegation for OpenClaw. Routes each task to the right model (code, creative, research, reasoning, vision) and spawns subagents so you save tokens and get better results. Supports parallel tasks and prompt-injection rejection.

7.6/10
Verified
Mar 9, 2026

// RATINGS

GitHub Stars

New / niche

🟢 ProSkills Score: AI Verified
7.6/10

Not yet listed on ClawHub or SkillsMP

// README

# Agent Swarm | OpenClaw Skill

> IMPORTANT: OPENROUTER IS REQUIRED
>
> Agent Swarm only supports `openrouter/...` models and requires a configured OpenRouter API key in OpenClaw.
> Without OpenRouter, subagent delegation and `sessions_spawn` routing will fail.

**LLM routing and subagent delegation.** Routes each task to the right model, spawns subagents, and reduces API costs by using cheaper models for simple tasks.

**Parallel tasks:** one message can spawn multiple subagents at once (e.g. "fix the bug and write a poem" → code + creative in parallel).

**v1.7.8 — Current stable release.** COMPLEX tier, absolute paths for TUI delegation. Prompt-injection rejection (v1.7.6+), OPENCLAW_HOME fully optional.

**Source:** [github.com/RuneweaverStudios/agent-swarm](https://github.com/RuneweaverStudios/agent-swarm)

Agent Swarm routes your OpenClaw tasks to the best LLM for the job and delegates work to subagents. You save on API costs (the orchestrator stays on a cheap model; only the task runs on the matched model) and get better results: GLM 4.7 for code, Kimi k2.5 for creative, Grok Fast for research.
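The tier-to-model routing described above can be sketched as a small lookup plus a classifier. This is a minimal illustration, not the router's actual implementation: the keyword lists are made up, and the `openrouter/...` slugs for GLM 4.7 and Grok Fast are placeholders (only the Kimi k2.5 and Gemini 2.5 Flash identifiers appear in this README).

```python
# Minimal sketch of tier-based routing: map each task tier to an
# OpenRouter model ID, falling back to the cheap orchestrator model.
# The GLM 4.7 and Grok Fast slugs below are illustrative placeholders.
TIER_MODELS = {
    "code": "openrouter/z-ai/glm-4.7",        # placeholder slug
    "creative": "openrouter/moonshotai/kimi-k2.5",
    "research": "openrouter/x-ai/grok-fast",  # placeholder slug
}
ORCHESTRATOR = "openrouter/google/gemini-2.5-flash"

# Naive keyword classifier, purely for illustration.
KEYWORDS = {
    "code": ("fix", "bug", "refactor", "implement"),
    "creative": ("poem", "story", "write"),
    "research": ("research", "find", "compare"),
}

def route(task: str) -> str:
    lowered = task.lower()
    for tier, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return TIER_MODELS[tier]
    return ORCHESTRATOR  # simple tasks stay on the cheap model
```

The payoff is visible in the fallback branch: anything that doesn't match a tier stays on the cheap orchestrator model, which is where the cost savings come from.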
**Security improvements in v1.7.0+:**

- Removed gateway auth token/password exposure from router output
- Gateway management functionality has been removed - use the separate [gateway-guard](https://clawhub.ai/skills/gateway-guard) skill if gateway auth management is needed
- FACEPALM troubleshooting integration has been removed - use the separate [FACEPALM](https://github.com/RuneweaverStudios/FACEPALM) skill if troubleshooting is needed
- **v1.7.3+**: Added comprehensive input validation, config patch validation, and security documentation
- **v1.7.4+**: Clarified that "saves tokens" means cost savings (not token storage), removed hard-coded paths, documented file access scope
- **v1.7.5+**: Declared required environment variables and credentials in metadata, enhanced requirements documentation
- **v1.7.6+**: Added prompt-injection rejection logic that actively blocks known injection patterns
- **v1.7.8**: OPENCLAW_HOME made fully optional (defaults to `~/.openclaw`), version bump

## Why Agent Swarm

With a single model, OpenClaw can feel slow: you're forced to choose between quality and cost, and every prompt pays the same price. Agent Swarm removes that tradeoff. The orchestrator stays on a fast, cheap model; only the task at hand runs on the best model for the job. No wasted prompts: efficient routing, not one-size-fits-all. With OpenRouter, replies come back faster and the conversation feels more lively and natural.

## Requirements (critical)

**Platform configuration required:**

- **OpenRouter API key**: must be configured in OpenClaw platform settings (not provided by this skill)
- **OPENCLAW_HOME** (optional): environment variable pointing to the OpenClaw workspace root. If not set, defaults to `~/.openclaw`
- **openclaw.json access**: the router reads `tools.exec.host` and `tools.exec.node` from `openclaw.json` (located at `$OPENCLAW_HOME/openclaw.json` or `~/.openclaw/openclaw.json`). Only these two fields are accessed; no gateway secrets or API keys are read.
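The scoped config read described above can be sketched as follows. This is a minimal sketch under the README's stated behavior (OPENCLAW_HOME optional, defaulting to `~/.openclaw`, and only the two `tools.exec.*` fields touched); the function name is illustrative, not the router's actual API.

```python
import json
import os
from pathlib import Path

def read_exec_config(home=None):
    """Read ONLY tools.exec.host and tools.exec.node from openclaw.json.

    OPENCLAW_HOME is optional and defaults to ~/.openclaw, matching the
    documented behavior. No other fields (and no secrets) are touched.
    """
    root = Path(home or os.environ.get("OPENCLAW_HOME") or Path.home() / ".openclaw")
    config = json.loads((root / "openclaw.json").read_text())
    exec_cfg = config.get("tools", {}).get("exec", {})
    return {
        "host": exec_cfg.get("host"),  # 'sandbox' or 'node'
        "node": exec_cfg.get("node"),  # only meaningful when host == 'node'
    }
```

Returning a new two-key dict (rather than the parsed config) makes the access scope explicit: nothing else from `openclaw.json` can leak past this function.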
**Model requirements:**

- **OpenRouter is mandatory** — all model delegation uses OpenRouter (`openrouter/...` prefix). Configure OpenClaw with an OpenRouter API key so one auth profile covers every model.
- If OpenRouter is not configured in OpenClaw, delegation will fail.

## Security

### Input Validation

The router validates and sanitizes all inputs to prevent injection attacks:

- **Task strings**: validated for length (max 10KB), null bytes, and suspicious patterns
- **Config patches**: only allows modifications to `tools.exec.host` and `tools.exec.node` (whitelist approach)
- **Labels**: validated for length and null bytes

### Safe Execution (Critical for Orchestrators)

**When calling `router.py` from orchestrator code, always use `subprocess` with a list of arguments, never shell string interpolation:**

```python
# ✅ SAFE: Use subprocess with list arguments
import subprocess
result = subprocess.run(
    ["python3", "/path/to/router.py", "spawn", "--json", user_message],
    capture_output=True,
    text=True,
)

# ❌ UNSAFE: Shell string interpolation (vulnerable to injection)
import os
os.system(f'python3 router.py spawn --json "{user_message}"')  # DON'T DO THIS
```

The router uses Python's `argparse`, which safely handles arguments when passed as a list. Shell string interpolation is vulnerable to command injection if the user message contains shell metacharacters (`;`, `|`, `&`, `$()`, etc.).

### Config Patch Safety

The `recommended_config_patch` only modifies safe fields:

- `tools.exec.host` (must be 'sandbox' or 'node')
- `tools.exec.node` (only when host is 'node')

All config patches are validated before being returned. The orchestrator should validate patches again before applying them to `openclaw.json`.

### Prompt Injection Rejection (v1.7.6+)

The router actively rejects task strings that contain known prompt-injection patterns:

- System/instruction override attempts (`ignore previous instructions`, `you are now`, etc.)
- Role impersonation (`[SYSTEM]`, `<|im_start|>system`, `### Instruction:`, etc.)
- Delimiter injection and safety bypass language

If a prompt-injection pattern is detected, the router raises a `ValueError` and refuses to route the task. This is a defense-in-depth measure alongside:

1. The orchestrator (validating task strings)
2. The sub-agent LLM (resisting prompt injection)
3. The OpenClaw platform (sanitizing `sessions_spawn` inputs)

### File Access Scope

The router reads `openclaw.json` **only** to inspect the `tools.exec.host` and `tools.exec.node` configuration. This is necessary to determine the execution environment for spawned sub-agents.

**Important:**

- The router **does not** read gateway secrets, API keys, or any other sensitive configuration
- Only `tools.exec.host` and `tools.exec.node` are accessed
- No data is written to `openclaw.json` except via validated config patches (whitelisted to `tools.exec.*` only)
- The router does not persist, upload, or transmit any tokens or credentials
- The phrase "saves tokens" in documentation refers to **API cost savings** (using cheaper models for simple tasks), not token storage or collection

## Default behavior

**Session default / orchestrator:** Gemini 2.5 Flash (`openrouter/google/gemini-2.5-flash`) — fast, cheap, and reliable at tool-calling. The router delegates tasks to tier-specific sub-agents (Kimi for creative, GLM 4.7 for code, etc.) via `sessions_spawn`. Simple tasks (check, status, list) down-route to Gemini 2.5 Flash.

---

## Orchestrator flow (task delegation)

The **main agent (Gemini 2.5 Flash)** does not do user tasks itself. For every user **task** (code, research, write, build, etc.):

1. Run the Agent Swarm router: `python scripts/router.py spawn --json "<user message>"` and parse the JSON.
2. Call **sessions_spawn** with the `task` and `model` from the router output (use the exact `model` value).
3. Forward the sub-agent's result to the user.
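The three steps above can be sketched as two small functions. This is a shape sketch, not the skill's actual orchestrator code: `sessions_spawn` is an OpenClaw platform tool rather than a Python function, so it is injected as a callable here, and the `scripts/router.py` path is taken from the step list above.

```python
import json
import subprocess

def run_router(user_message: str) -> str:
    """Step 1: invoke the Agent Swarm router with list arguments
    (never shell string interpolation) and return its JSON output."""
    proc = subprocess.run(
        ["python3", "scripts/router.py", "spawn", "--json", user_message],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout

def dispatch(router_json: str, sessions_spawn):
    """Steps 2-3: spawn a sub-agent with the exact model the router
    chose, then hand its result back to the caller to forward."""
    route = json.loads(router_json)
    return sessions_spawn(
        task=route["task"],
        model=route["model"],
        sessionTarget=route["sessionTarget"],
    )
```

Keeping the subprocess call and the dispatch separate means the routing decision can be inspected (or rejected) before any sub-agent is spawned.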
**Example:**

```
router: {"task":"write a poem","model":"openrouter/moonshotai/kimi-k2.5","sessionTarget":"isolated"}
→ sessions_spawn(task="write a poem", model="openrouter/moonshotai/kimi-k2.5", sessionTarget="isolated")
→ Forward Kimi k2.5's poem to the user. Say "Using: Kimi k2.5".
```

**Exception:** meta-questions ("what model are you?") you answer yourself.

### Parallel tasks

For one message with multiple tasks, use **`spawn --json --multi "<message>"`**. The router splits on *and*, *then*, *;*, and *also*, classifies each part, and returns `{"parallel": true, "spawns": [{task, model, sessionTarget}, ...], "count": N}`. The orchestrator can then call `sessions_spawn` for each entry and run them in parallel; use subagent-tracker to see progress.

**Example:** `spawn --json --multi "fix the login bug and write a short poem"` → two spawns (e.g. GLM 4.7 for code, Kimi for the poem).

---

## Quick start

```bash
npm install -g clawhub
```
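The `--multi` splitting and the documented result shape can be sketched as below. The regex and the `classify` callable are illustrative assumptions, not the router's actual implementation; only the connectives (*and*, *then*, *;*, *also*) and the `{"parallel": true, "spawns": [...], "count": N}` shape come from the README.

```python
import re

# Split a message on the documented connectives: "and", "then", ";", "also".
CONNECTIVES = re.compile(r"\s*(?:\band\b|\bthen\b|\balso\b|;)\s*", re.IGNORECASE)

def split_tasks(message):
    return [part for part in CONNECTIVES.split(message) if part.strip()]

def multi_spawn_plan(message, classify):
    """Build the {"parallel": true, "spawns": [...], "count": N} shape the
    router is documented to return; `classify` maps a task to a model."""
    spawns = [
        {"task": t, "model": classify(t), "sessionTarget": "isolated"}
        for t in split_tasks(message)
    ]
    return {"parallel": True, "spawns": spawns, "count": len(spawns)}
```

For example, `multi_spawn_plan("fix the login bug and write a short poem", classify)` yields two spawn entries, which the orchestrator would run through `sessions_spawn` in parallel.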

// HOW IT'S BUILT

TECHNOLOGY STACK

Python

This skill is built with Python.

KEY FILES

README.md
SKILL.md

// REPO STATS

0 stars
0 open issues
Last commit: 1mo ago

// PROSKILLS SCORE

7.6/10

Good

BREAKDOWN

Code Quality: 8/10
Documentation: 9/10
Functionality: 8/10
Maintenance: 5/10
Security: 8/10
Uniqueness: 7/10
Usefulness: 8/10

// DETAILS

Category: orchestration
Version: v1.8.0
Price: Free
Security: clean