docs: clarify strict-agentic and codex modes

This commit is contained in:
Peter Steinberger
2026-04-11 17:13:35 +01:00
parent 899a1b7565
commit e1b2ae235a
2 changed files with 47 additions and 0 deletions


@@ -133,6 +133,25 @@ OpenClaw requires Codex app-server `0.118.0` or newer. The Codex plugin checks
the app-server initialize handshake and blocks older or unversioned servers so
OpenClaw only runs against the protocol surface it has been tested with.

### Native Codex harness mode

The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
`plugins.allow` if your config uses a restrictive allowlist. It differs from
`openai-codex/*`:
- `openai-codex/*` uses ChatGPT/Codex OAuth through the normal OpenClaw provider
path.
- `codex/*` uses the bundled Codex provider and routes the turn through Codex
app-server.
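
If your config restricts plugins with an allowlist, the bundled plugin must be listed explicitly. A minimal sketch, assuming `plugins.allow` takes an array of plugin names (the key comes from this page; the array shape and any other entries are illustrative):

```json5
{
  plugins: {
    // Keep whatever you already allow, and add the bundled codex plugin
    // so the native Codex harness can load. (Array shape is an assumption.)
    allow: ["codex"],
  },
}
```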
When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
selection. Use `embeddedHarness.runtime: "codex"` with
`embeddedHarness.fallback: "none"` when you need to verify that the Codex
app-server path is in use and that Pi fallback is not masking a broken native
harness.
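
A minimal sketch of that pinned configuration, assuming `embeddedHarness` nests under `agents.defaults` (as the `agents.defaults.embeddedHarness` default in the fallback section implies):

```json5
{
  agents: {
    defaults: {
      embeddedHarness: {
        // Force the native Codex app-server path for embedded agent turns...
        runtime: "codex",
        // ...and fail loudly instead of silently falling back to Pi.
        fallback: "none",
      },
    },
  },
}
```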

## Disable PI fallback

By default, OpenClaw runs embedded agents with `agents.defaults.embeddedHarness`


@@ -3,6 +3,7 @@ summary: "Use OpenAI via API keys or Codex subscription in OpenClaw"
read_when:
- You want to use OpenAI models in OpenClaw
- You want Codex subscription auth instead of API keys
- You need stricter GPT-5 agent execution behavior
title: "OpenAI"
---
@@ -477,6 +478,33 @@ behavior, but it does not receive the hidden OpenAI/Codex attribution headers.
This preserves current native OpenAI Responses behavior without forcing older
OpenAI-compatible shims onto third-party `/v1` backends.

### Strict-agentic GPT mode

For `openai/*` and `openai-codex/*` GPT-5-family runs, OpenClaw can use a
stricter embedded Pi execution contract:
```json5
{
agents: {
defaults: {
embeddedPi: {
executionContract: "strict-agentic",
},
},
},
}
```
With `strict-agentic`, OpenClaw no longer treats a plan-only assistant turn as
successful progress when a concrete tool action is available. It retries the
turn with an act-now steer, auto-enables the structured `update_plan` tool for
substantial work, and surfaces an explicit blocked state if the model keeps
planning without acting.
The mode is scoped to OpenAI and OpenAI Codex GPT-5-family runs. Other providers
and older model families keep the default embedded Pi behavior unless you opt
them into other runtime settings.

### OpenAI Responses server-side compaction

For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with