mirror of
https://github.com/moltbot/moltbot.git
synced 2026-05-13 23:56:07 +00:00
docs: clarify strict-agentic and codex modes
@@ -133,6 +133,25 @@ OpenClaw requires Codex app-server `0.118.0` or newer. The Codex plugin checks
the app-server initialize handshake and blocks older or unversioned servers so
OpenClaw only runs against the protocol surface it has been tested with.

### Native Codex harness mode

The bundled `codex` harness is the native Codex mode for embedded OpenClaw
agent turns. Enable the bundled `codex` plugin first, and include `codex` in
`plugins.allow` if your config uses a restrictive allowlist. It is different
from `openai-codex/*`:

- `openai-codex/*` uses ChatGPT/Codex OAuth through the normal OpenClaw provider
  path.
- `codex/*` uses the bundled Codex provider and routes the turn through Codex
  app-server.
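The prefix distinction above can be shown as a config sketch. This is illustrative only: it assumes model references are set via an `agents.defaults.model` key and that `gpt-5` is an available model id; neither is confirmed by this page.

```json5
{
  agents: {
    defaults: {
      // Hypothetical model reference. With the "codex/" prefix the turn is
      // routed through the bundled Codex provider and Codex app-server;
      // "openai-codex/gpt-5" would instead use ChatGPT/Codex OAuth through
      // the normal OpenClaw provider path.
      model: "codex/gpt-5",
    },
  },
}
```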

When this mode runs, Codex owns the native thread id, resume behavior,
compaction, and app-server execution. OpenClaw still owns the chat channel,
visible transcript mirror, tool policy, approvals, media delivery, and session
selection. Use `embeddedHarness.runtime: "codex"` with
`embeddedHarness.fallback: "none"` when you need to prove that the Codex
app-server path is used and PI fallback is not hiding a broken native harness.
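Those two settings can be combined in one config block. A minimal sketch, assuming `embeddedHarness` nests under `agents.defaults` as the "Disable PI fallback" section below suggests (the exact nesting in your config may differ):

```json5
{
  agents: {
    defaults: {
      embeddedHarness: {
        // Force the native Codex app-server path.
        runtime: "codex",
        // Fail loudly instead of silently falling back to the PI harness.
        fallback: "none",
      },
    },
  },
}
```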

## Disable PI fallback

By default, OpenClaw runs embedded agents with `agents.defaults.embeddedHarness`
@@ -3,6 +3,7 @@ summary: "Use OpenAI via API keys or Codex subscription in OpenClaw"
read_when:
- You want to use OpenAI models in OpenClaw
- You want Codex subscription auth instead of API keys
- You need stricter GPT-5 agent execution behavior
title: "OpenAI"
---

@@ -477,6 +478,33 @@ behavior, but it does not receive the hidden OpenAI/Codex attribution headers.
This preserves current native OpenAI Responses behavior without forcing older
OpenAI-compatible shims onto third-party `/v1` backends.

### Strict-agentic GPT mode

For `openai/*` and `openai-codex/*` GPT-5-family runs, OpenClaw can use a
stricter embedded Pi execution contract:

```json5
{
  agents: {
    defaults: {
      embeddedPi: {
        executionContract: "strict-agentic",
      },
    },
  },
}
```

With `strict-agentic`, OpenClaw no longer treats a plan-only assistant turn as
successful progress when a concrete tool action is available. It retries the
turn with an act-now steer, auto-enables the structured `update_plan` tool for
substantial work, and surfaces an explicit blocked state if the model keeps
planning without acting.

The mode is scoped to OpenAI and OpenAI Codex GPT-5-family runs. Other providers
and older model families keep the default embedded Pi behavior unless you opt
them into other runtime settings.

### OpenAI Responses server-side compaction

For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with