moltbot/docs/concepts/agent.md
Jason 70df2b8fe2 feat: steer mid-turn prompts by default (#77023)
Summary:
- Default active-run queueing to steer while preserving explicit followup/collect modes.
- Keep `/steer` fallback behavior and migrate retired queue steering config.
- Await Codex app-server steering acceptance so rejected/aborted steering can fall back safely.
- Route active subagent announcements through intentional acceptance-aware steering, with legacy queue helpers deprecated for delivery decisions.

Verification:
- git diff --check
- rg -n "^(<<<<<<<|=======|>>>>>>>|\|\|\|\|\|\|\|)" CHANGELOG.md docs src extensions || true
- pnpm test src/agents/subagent-announce-dispatch.test.ts src/agents/subagent-announce-delivery.test.ts src/agents/pi-embedded-runner/runs.test.ts src/agents/subagent-announce.format.e2e.test.ts src/agents/subagent-announce.test.ts
- pnpm test src/auto-reply/reply/commands-steer.test.ts src/auto-reply/reply/queue/settings.test.ts src/auto-reply/reply/queue-policy.test.ts src/auto-reply/reply/agent-runner.runreplyagent.e2e.test.ts src/auto-reply/reply/get-reply-run.media-only.test.ts extensions/codex/src/app-server/run-attempt.test.ts -- -t "queued steering|explicit all-mode steering|flushes pending default queued steering|rejects queued steering|resolveActiveRunQueueAction|resolveQueueSettings|handleSteerCommand"

Co-authored-by: fuller-stack-dev <263060202+fuller-stack-dev@users.noreply.github.com>
2026-05-13 14:00:11 +01:00


---
summary: "Agent runtime, workspace contract, and session bootstrap"
read_when: "Changing agent runtime, workspace bootstrap, or session behavior"
title: "Agent runtime"
---

# Agent runtime

OpenClaw runs a single embedded agent runtime - one agent process per Gateway, with its own workspace, bootstrap files, and session store. This page covers that runtime contract: what the workspace must contain, which files get injected, and how sessions bootstrap against it.

## Workspace (required)

OpenClaw uses a single agent workspace directory (`agents.defaults.workspace`) as the agent's only working directory (cwd) for tools and context.

Recommended: use `openclaw setup` to create `~/.openclaw/openclaw.json` if missing and initialize the workspace files.

Full workspace layout + backup guide: Agent workspace

If `agents.defaults.sandbox` is enabled, non-main sessions can override this with per-session workspaces under `agents.defaults.sandbox.workspaceRoot` (see Gateway configuration).
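A minimal config sketch for this layout (the paths are placeholders; only the key names come from this page):

```json5
// ~/.openclaw/openclaw.json (sketch)
{
  agents: {
    defaults: {
      workspace: "~/openclaw-workspace",
      // With sandbox enabled, non-main sessions get per-session workspaces:
      // sandbox: { workspaceRoot: "~/openclaw-sandboxes" },
    },
  },
}
```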

## Bootstrap files (injected)

Inside `agents.defaults.workspace`, OpenClaw expects these user-editable files:

- `AGENTS.md` - operating instructions + "memory"
- `SOUL.md` - persona, boundaries, tone
- `TOOLS.md` - user-maintained tool notes (e.g. `imsg`, `sag`, conventions)
- `BOOTSTRAP.md` - one-time first-run ritual (deleted after completion)
- `IDENTITY.md` - agent name/vibe/emoji
- `USER.md` - user profile + preferred address

On the first turn of a new session, OpenClaw injects the contents of these files into the system prompt's Project Context.

Blank files are skipped. Large files are trimmed and truncated with a marker so prompts stay lean (read the file for full content).

If a file is missing, OpenClaw injects a single "missing file" marker line (and openclaw setup will create a safe default template).

BOOTSTRAP.md is only created for a brand new workspace (no other bootstrap files present). While it is pending, OpenClaw keeps it in Project Context and adds system-prompt bootstrap guidance for the initial ritual instead of copying it into the user message. If you delete it after completing the ritual, it should not be recreated on later restarts.

To disable bootstrap file creation entirely (for pre-seeded workspaces), set:

```json5
{ agents: { defaults: { skipBootstrap: true } } }
```

## Built-in tools

Core tools (read/exec/edit/write and related system tools) are always available, subject to tool policy. `apply_patch` is optional and gated by `tools.exec.applyPatch`. `TOOLS.md` does not control which tools exist; it's guidance for how you want them used.
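For example, opting in to the gated `apply_patch` tool might look like this (a sketch; the boolean shape is an assumption based on the key named above):

```json5
{ tools: { exec: { applyPatch: true } } }
```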

## Skills

OpenClaw loads skills from these locations (highest precedence first):

- Workspace: `<workspace>/skills`
- Project agent skills: `<workspace>/.agents/skills`
- Personal agent skills: `~/.agents/skills`
- Managed/local: `~/.openclaw/skills`
- Bundled (shipped with the install)
- Extra skill folders: `skills.load.extraDirs`

Skills can be gated by config/env (see skills in Gateway configuration).
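Extra skill folders might be wired in like this (the directory path is a placeholder; only the `skills.load.extraDirs` key comes from this page):

```json5
{ skills: { load: { extraDirs: ["~/team-skills"] } } }
```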

## Runtime boundaries

The embedded agent runtime is built on the Pi agent core (models, tools, and prompt pipeline). Session management, discovery, tool wiring, and channel delivery are OpenClaw-owned layers on top of that core.

## Sessions

Session transcripts are stored as JSONL at:

- `~/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl`

The session ID is stable and chosen by OpenClaw. Legacy session folders from other tools are not read.
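Since transcripts are plain JSONL, they are easy to inspect with a few lines of code. A minimal sketch (`parseTranscript` is a hypothetical helper, not an OpenClaw API):

```typescript
// Parse a session transcript: one JSON object per line, blank lines ignored.
function parseTranscript(jsonl: string): unknown[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Usage sketch (path shape from the layout above):
// const text = fs.readFileSync(
//   `${home}/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl`, "utf8");
// const messages = parseTranscript(text);
```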

## Steering while streaming

Inbound prompts that arrive mid-run are steered into the current run by default. Steered messages are delivered after the current assistant turn finishes executing its tool calls and before the next LLM call; remaining tool calls from the current assistant message are not skipped.

`/queue steer` is the default active-run behavior. `/queue followup` and `/queue collect` make messages wait for a later turn instead of steering. `/queue interrupt` aborts the active run instead. See Queue and Steering queue for queue and boundary behavior.
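The four modes above amount to a small decision table. A sketch, under a simplified shape (`resolveActiveRunAction` and its return values are illustrative, not OpenClaw's internals):

```typescript
type QueueMode = "steer" | "followup" | "collect" | "interrupt";

// What happens to a message that arrives while a run is active.
function resolveActiveRunAction(mode?: QueueMode): "steer" | "wait" | "abort" {
  switch (mode) {
    case "followup":
    case "collect":
      return "wait";  // held for a later turn instead of steering
    case "interrupt":
      return "abort"; // aborts the active run
    case "steer":
    default:
      return "steer"; // default: injected before the next LLM call
  }
}
```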

Block streaming sends completed assistant blocks as soon as they finish; it is off by default (`agents.defaults.blockStreamingDefault: "off"`).

- Tune the boundary via `agents.defaults.blockStreamingBreak` (`text_end` vs `message_end`; defaults to `text_end`).
- Control soft block chunking with `agents.defaults.blockStreamingChunk` (defaults to 800-1200 chars; prefers paragraph breaks, then newlines, then sentences).
- Coalesce streamed chunks with `agents.defaults.blockStreamingCoalesce` to reduce single-line spam (idle-based merging before send).
- Non-Telegram channels require an explicit `*.blockStreaming: true` to enable block replies.
- Verbose tool summaries are emitted at tool start (no debounce); the Control UI streams tool output via agent events when available.

More details: Streaming + chunking.
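A config sketch pulling the block-streaming knobs together (only keys and values named on this page; other value shapes are not shown to avoid guessing):

```json5
{
  agents: {
    defaults: {
      blockStreamingDefault: "on",     // default is "off"
      blockStreamingBreak: "text_end", // or "message_end"
    },
  },
}
```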

## Model refs

Model refs in config (for example `agents.defaults.model` and `agents.defaults.models`) are parsed by splitting on the first `/`.

- Use `provider/model` when configuring models.
- If the model ID itself contains `/` (OpenRouter-style), include the provider prefix (example: `openrouter/moonshotai/kimi-k2`).
- If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model ID, and only then falls back to the configured default provider. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale default from a removed provider.
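Splitting on the first `/` keeps everything after it as the model ID, which is how OpenRouter-style IDs survive intact. A sketch of the rule (`parseModelRef` is illustrative, not OpenClaw's actual function):

```typescript
// Split a model ref on the FIRST slash only.
function parseModelRef(ref: string): { provider?: string; model: string } {
  const i = ref.indexOf("/");
  if (i === -1) return { model: ref }; // bare id: alias/provider lookup applies
  return { provider: ref.slice(0, i), model: ref.slice(i + 1) };
}
```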

## Configuration (minimal)

At minimum, set:

- `agents.defaults.workspace`
- `channels.whatsapp.allowFrom` (strongly recommended)
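Put together, a minimal `~/.openclaw/openclaw.json` might look like this (the workspace path and phone number are placeholders, and the `allowFrom` list shape is an assumption):

```json5
{
  agents: { defaults: { workspace: "~/openclaw-workspace" } },
  channels: { whatsapp: { allowFrom: ["+15551234567"] } },
}
```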

Next: Group Chats 🦞