fix(auth): accept friendly OpenAI order for Codex profiles

This commit is contained in:
pashpashpash
2026-05-10 19:26:36 -07:00
committed by Peter Steinberger
parent a947e8fae0
commit 517566e39a
7 changed files with 233 additions and 42 deletions

View File

@@ -107,6 +107,7 @@ Docs: https://docs.openclaw.ai
- Context: render `/context map` only from actual run context and persist Codex app-server run reports without counting deferred tool-search schemas as prompt-loaded tool schemas.
- Codex app-server: report Codex-native tool execution to diagnostics so long-running native `bash`, web, file, and MCP tools no longer look like stale embedded runs to the watchdog. (#80217)
- Codex app-server: refresh Codex account rate limits after subscription usage-limit failures so Discord and other channel replies can show the next reset time instead of saying Codex returned none. Thanks @pashpashpash.
- Agents/auth: let Codex-backed OpenAI agent turns use `auth.order.openai` entries for Codex-compatible OAuth and API-key profiles while keeping existing `openai-codex` profile ordering valid.
- Tasks: route group and channel task completions through the requester session so the parent agent can send the visible summary instead of stopping at a generic task-status line. Fixes #77251. (#77365) Thanks @funmerlin.
- Telegram: preserve blank lines between manually indented bullet blocks and following numbered sections in rendered replies. Fixes #76998. Thanks @evgyur.
- Agents/sandbox: allow read-only sandbox sessions to read the `/agent` workspace mount while keeping write/edit/apply_patch workspace-only guarded, restoring `read /agent/...` for `workspaceAccess: "ro"`. Fixes #39497. Thanks @stainlu and @teosborne.

View File

@@ -17,9 +17,9 @@ selection, OpenClaw dynamic tools, approvals, media delivery, and the visible
transcript mirror.
The normal setup uses canonical OpenAI model refs such as `openai/gpt-5.5`.
-Do not configure `openai-codex/gpt-*` model refs. `openai-codex` is the auth
-profile provider for Codex OAuth or Codex API-key profiles, not the model
-provider prefix for new agent config.
+Do not configure `openai-codex/gpt-*` model refs. Put OpenAI agent auth order
+under `auth.order.openai`; older `openai-codex:*` profiles and
+`auth.order.openai-codex` entries remain supported for existing installs.
OpenClaw starts Codex app-server threads with Codex native code mode and
code-mode-only enabled. That keeps deferred/searchable OpenClaw dynamic tools
@@ -113,9 +113,10 @@ harness options in OpenClaw config, and use the CLI only for Codex auth:
| Enable native Codex plugin apps | `plugins.entries.codex.config.codexPlugins.*` | Codex plugin config |
| Enable Codex Computer Use | `plugins.entries.codex.config.computerUse.*` | Codex plugin config |
-Use `openai/gpt-*` model refs for Codex-backed OpenAI agent turns.
-`openai-codex` is only the auth-profile provider name for Codex OAuth and
-Codex API-key profiles. Do not write new `openai-codex/gpt-*` model refs.
+Use `openai/gpt-*` model refs for Codex-backed OpenAI agent turns. Prefer
+`auth.order.openai` for subscription-first/API-key-backup ordering. Existing
+`openai-codex:*` auth profiles and `auth.order.openai-codex` remain valid, but
+do not write new `openai-codex/gpt-*` model refs.
The rest of this page covers common variants users must choose between:
deployment shape, fail-closed routing, guardian approval policy, native Codex

View File

@@ -17,8 +17,8 @@ default; direct OpenAI API-key auth remains available for non-agent OpenAI
surfaces such as images, embeddings, speech, and realtime.
- **Agent models** - `openai/*` models through the Codex runtime; sign in with
-`openai-codex` auth for ChatGPT/Codex subscription use, or configure an
-`openai-codex` API-key profile when you intentionally want API-key auth.
+Codex auth for ChatGPT/Codex subscription use, or configure a Codex-compatible
+OpenAI API-key backup when you intentionally want API-key auth.
- **Non-agent OpenAI APIs** - direct OpenAI Platform access with usage-based
billing through `OPENAI_API_KEY` or OpenAI API-key onboarding.
- **Legacy config** - `openai-codex/*` model refs are repaired by
@@ -32,32 +32,34 @@ changing config.
## Quick choice
-| Goal                                                 | Use                                                     | Notes                                                                 |
-| ---------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------------------------------- |
-| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-5.5`                                        | Default OpenAI agent setup. Sign in with `openai-codex` auth.         |
-| Direct API-key billing for agent models              | `openai/gpt-5.5` plus an `openai-codex` API-key profile | Use `auth.order.openai-codex` to prefer that profile.                 |
-| Direct API-key billing through explicit PI           | `openai/gpt-5.5` plus provider/model runtime `pi`       | Select a normal `openai` API-key profile.                             |
-| Latest ChatGPT Instant API alias                     | `openai/chat-latest`                                    | Direct API-key only. Moving alias for experiments, not the default.   |
-| ChatGPT/Codex subscription auth through explicit PI  | `openai/gpt-5.5` plus provider/model runtime `pi`       | Select an `openai-codex` auth profile for the compatibility route.    |
-| Image generation or editing                          | `openai/gpt-image-2`                                    | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth.             |
-| Transparent-background images                        | `openai/gpt-image-1.5`                                  | Use `outputFormat=png` or `webp` and `openai.background=transparent`. |
+| Goal                                                 | Use                                                      | Notes                                                                  |
+| ---------------------------------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------- |
+| ChatGPT/Codex subscription with native Codex runtime | `openai/gpt-5.5`                                         | Default OpenAI agent setup. Sign in with Codex auth.                   |
+| Direct API-key billing for agent models              | `openai/gpt-5.5` plus a Codex-compatible API-key profile | Use `auth.order.openai` to place the backup after subscription auth.   |
+| Direct API-key billing through explicit PI           | `openai/gpt-5.5` plus provider/model runtime `pi`        | Select a normal `openai` API-key profile.                              |
+| Latest ChatGPT Instant API alias                     | `openai/chat-latest`                                     | Direct API-key only. Moving alias for experiments, not the default.    |
+| ChatGPT/Codex subscription auth through explicit PI  | `openai/gpt-5.5` plus provider/model runtime `pi`        | Select an `openai-codex` auth profile for the compatibility route.     |
+| Image generation or editing                          | `openai/gpt-image-2`                                     | Works with either `OPENAI_API_KEY` or OpenAI Codex OAuth.              |
+| Transparent-background images                        | `openai/gpt-image-1.5`                                   | Use `outputFormat=png` or `webp` and `openai.background=transparent`.  |
## Naming map
The names are similar but not interchangeable:
-| Name you see                            | Layer               | Meaning                                                                                           |
-| --------------------------------------- | ------------------- | ------------------------------------------------------------------------------------------------- |
-| `openai`                                | Provider prefix     | Canonical OpenAI model route; agent turns use the Codex runtime.                                  |
-| `openai-codex`                          | Auth/profile prefix | OpenAI Codex OAuth/subscription auth profile provider.                                            |
-| `codex` plugin                          | Plugin              | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls. |
-| provider/model `agentRuntime.id: codex` | Agent runtime       | Force the native Codex app-server harness for matching embedded turns.                            |
-| `/codex ...`                            | Chat command set    | Bind/control Codex app-server threads from a conversation.                                        |
-| `runtime: "acp", agentId: "codex"`      | ACP session route   | Explicit fallback path that runs Codex through ACP/acpx.                                          |
+| Name you see                            | Layer                      | Meaning                                                                                                              |
+| --------------------------------------- | -------------------------- | -------------------------------------------------------------------------------------------------------------------- |
+| `openai`                                | Provider prefix            | Canonical OpenAI model route; agent turns use the Codex runtime.                                                     |
+| `openai-codex`                          | Legacy auth/profile prefix | Older OpenAI Codex OAuth/subscription profile namespace. Existing profiles and `auth.order.openai-codex` still work. |
+| `codex` plugin                          | Plugin                     | Bundled OpenClaw plugin that provides native Codex app-server runtime and `/codex` chat controls.                    |
+| provider/model `agentRuntime.id: codex` | Agent runtime              | Force the native Codex app-server harness for matching embedded turns.                                               |
+| `/codex ...`                            | Chat command set           | Bind/control Codex app-server threads from a conversation.                                                           |
+| `runtime: "acp", agentId: "codex"`      | ACP session route          | Explicit fallback path that runs Codex through ACP/acpx.                                                             |
-This means a config can intentionally contain both `openai/*` model refs and
-`openai-codex` auth profiles. `openclaw doctor --fix` rewrites legacy
-`openai-codex/*` model refs to the canonical OpenAI model route.
+This means a config can intentionally contain `openai/*` model refs while auth
+profiles still point at Codex-compatible credentials. Prefer `auth.order.openai`
+for new config; existing `openai-codex:*` profiles and `auth.order.openai-codex`
+remain supported. `openclaw doctor --fix` rewrites legacy `openai-codex/*` model
+refs to the canonical OpenAI model route.
<Note>
GPT-5.5 is available through both direct OpenAI Platform API-key access and
@@ -152,15 +154,16 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Runtime config | Route | Auth |
| ---------------------- | -------------------------- | --------------------------- | ---------------- |
-| `openai/gpt-5.5`      | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
-| `openai/gpt-5.4-mini` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | `openai-codex` profile |
+| `openai/gpt-5.5`      | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | Codex-compatible OpenAI profile |
+| `openai/gpt-5.4-mini` | omitted / provider/model `agentRuntime.id: "codex"` | Codex app-server harness | Codex-compatible OpenAI profile |
| `openai/gpt-5.5` | provider/model `agentRuntime.id: "pi"` | PI embedded runtime | `openai` profile or selected `openai-codex` profile |
<Note>
`openai/*` agent models use the Codex app-server harness. To use API-key
-auth for an agent model, create an `openai-codex` API-key profile and order
-it with `auth.order.openai-codex`; `OPENAI_API_KEY` remains the direct
-fallback for non-agent OpenAI API surfaces.
+auth for an agent model, create a Codex-compatible API-key profile and order
+it with `auth.order.openai`; `OPENAI_API_KEY` remains the direct fallback for
+non-agent OpenAI API surfaces. Older `auth.order.openai-codex` entries still
+work.
</Note>
### Config example
@@ -251,10 +254,11 @@ Choose your preferred auth method and follow the setup steps.
</Warning>
<Note>
-Keep using the `openai-codex` provider id for auth/profile commands. The
-`openai-codex/*` model prefix is legacy config repaired by doctor. For the
-common subscription plus native runtime setup, sign in with `openai-codex`
-but keep the model ref as `openai/gpt-5.5`.
+The `openai-codex/*` model prefix is legacy config repaired by doctor. For
+the common subscription plus native runtime setup, sign in with Codex auth
+but keep the model ref as `openai/gpt-5.5`. New config should put OpenAI
+agent auth order under `auth.order.openai`; older `auth.order.openai-codex`
+entries remain valid.
</Note>
### Config example
@@ -309,8 +313,9 @@ Choose your preferred auth method and follow the setup steps.
openclaw models status --probe --probe-provider openai-codex
```
-`openai-codex` remains the auth/profile provider id. `openai/*` is the
-model route for OpenAI agent turns through Codex.
+`openai/*` is the model route for OpenAI agent turns through Codex. The
+`openai-codex` auth/profile provider id remains accepted for existing
+profiles and CLI listing.
### Status indicator

View File

@@ -221,6 +221,84 @@ describe("resolveAuthProfileOrder", () => {
expect(order).toStrictEqual([]);
});
it("lets Codex auth use friendly OpenAI auth order entries", async () => {
const { resolveAuthProfileOrder } = await importAuthProfileModulesWithAliasRegistry();
const store: AuthProfileStore = {
version: 1,
profiles: {
"openai:personal": {
type: "oauth",
provider: "openai-codex",
access: "access",
refresh: "refresh",
expires: Date.now() + 60_000,
},
"openai:backup": {
type: "api_key",
provider: "openai-codex",
key: "sk-backup",
},
"openai:platform": {
type: "api_key",
provider: "openai",
key: "sk-platform",
},
},
};
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
openai: ["openai:personal", "openai:backup", "openai:platform"],
},
},
},
store,
provider: "openai-codex",
});
expect(order).toEqual(["openai:personal", "openai:backup"]);
});
it("keeps direct OpenAI Codex auth order ahead of the friendly OpenAI alias", async () => {
const { resolveAuthProfileOrder } = await importAuthProfileModulesWithAliasRegistry();
const store: AuthProfileStore = {
version: 1,
profiles: {
"openai:personal": {
type: "oauth",
provider: "openai-codex",
access: "access",
refresh: "refresh",
expires: Date.now() + 60_000,
},
"openai-codex:legacy": {
type: "oauth",
provider: "openai-codex",
access: "legacy-access",
refresh: "legacy-refresh",
expires: Date.now() + 60_000,
},
},
};
const order = resolveAuthProfileOrder({
cfg: {
auth: {
order: {
openai: ["openai:personal"],
"openai-codex": ["openai-codex:legacy"],
},
},
},
store,
provider: "openai-codex",
});
expect(order).toEqual(["openai-codex:legacy"]);
});
it("marks profile success with one canonical last-good and usage update", async () => {
const { markAuthProfileSuccess } = await importAuthProfileModulesWithAliasRegistry();
const agentDir = await mkdtemp(path.join(os.tmpdir(), "openclaw-auth-profile-success-"));

View File

@@ -24,6 +24,9 @@ export type AuthProfileEligibility = {
reasonCode: AuthProfileEligibilityReasonCode;
};
const OPENAI_PROVIDER_ID = "openai";
const OPENAI_CODEX_PROVIDER_ID = "openai-codex";
function resolveProviderAuthMode(
cfg: OpenClawConfig | undefined,
provider: string,
@@ -126,11 +129,22 @@ export function resolveAuthProfileOrder(params: {
// get a fresh error count and are not immediately re-penalized on the
// next transient failure. See #3604.
clearExpiredCooldowns(store, now);
+  const openAIOrderAliasProvider =
+    providerAuthKey === OPENAI_CODEX_PROVIDER_ID || providerKey === OPENAI_CODEX_PROVIDER_ID
+      ? OPENAI_PROVIDER_ID
+      : undefined;
  const storedOrder =
-    resolveAuthOrder(store.order, providerAuthKey) ?? resolveAuthOrder(store.order, providerKey);
+    resolveAuthOrder(store.order, providerAuthKey) ??
+    resolveAuthOrder(store.order, providerKey) ??
+    (openAIOrderAliasProvider
+      ? resolveAuthOrder(store.order, openAIOrderAliasProvider)
+      : undefined);
  const configuredOrder =
    resolveAuthOrder(cfg?.auth?.order, providerAuthKey) ??
-    resolveAuthOrder(cfg?.auth?.order, providerKey);
+    resolveAuthOrder(cfg?.auth?.order, providerKey) ??
+    (openAIOrderAliasProvider
+      ? resolveAuthOrder(cfg?.auth?.order, openAIOrderAliasProvider)
+      : undefined);
const explicitOrder = storedOrder ?? configuredOrder;
const explicitProfiles = cfg?.auth?.profiles
? Object.entries(cfg.auth.profiles)

View File

@@ -608,6 +608,97 @@ describe("runEmbeddedPiAgent overflow compaction trigger routing", () => {
expect(harnessParams?.runtimePlan).toBe(runtimePlan);
});
it("auto-selects friendly OpenAI-named Codex auth profiles for forced codex harness runs", async () => {
const { clearAgentHarnesses, registerAgentHarness } = await import("../harness/registry.js");
const pluginRunAttempt = vi.fn<AgentHarness["runAttempt"]>(async () =>
makeAttemptResult({ assistantTexts: ["ok"] }),
);
const runtimePlan = makeForwardedRuntimePlan({
resolvedRef: {
provider: "openai",
modelId: "gpt-5.5",
harnessId: "codex",
},
auth: {
providerForAuth: "openai",
harnessAuthProvider: "openai-codex",
forwardedAuthProfileId: "openai:personal",
},
});
clearAgentHarnesses();
registerAgentHarness({
id: "codex",
label: "Codex",
supports: () => ({ supported: false }),
runAttempt: pluginRunAttempt,
});
mockedBuildAgentRuntimePlan.mockReturnValueOnce(runtimePlan);
mockedGetApiKeyForModel.mockRejectedValueOnce(new Error("generic auth should be skipped"));
mockedResolveAuthProfileOrder.mockReturnValueOnce(["openai:personal"]);
mockedEnsureAuthProfileStoreWithoutExternalProfiles.mockReturnValue({
version: 1,
profiles: {
"openai:personal": {
type: "oauth",
provider: "openai-codex",
access: "access",
refresh: "refresh",
expires: Date.now() + 60_000,
},
},
});
try {
await runEmbeddedPiAgent({
...overflowBaseRunParams,
provider: "openai",
model: "gpt-5.5",
config: {
agents: {
defaults: {
agentRuntime: { id: "codex" },
},
},
},
runId: "forced-codex-harness-auto-selects-friendly-openai-auth",
});
} finally {
clearAgentHarnesses();
}
expect(mockedGetApiKeyForModel).not.toHaveBeenCalled();
expectMockCallFields(mockedResolveAuthProfileOrder, {
provider: "openai-codex",
});
expect(mockedBuildAgentRuntimePlan).toHaveBeenCalledTimes(1);
expect(pluginRunAttempt).toHaveBeenCalledTimes(1);
const pluginParams = expectMockCallFields(pluginRunAttempt, {
provider: "openai",
authProfileId: "openai:personal",
authProfileIdSource: "auto",
});
expectRuntimePlanFields(pluginParams.runtimePlan, {
resolvedRef: {
provider: "openai",
modelId: "gpt-5.5",
harnessId: "codex",
},
auth: {
providerForAuth: "openai",
harnessAuthProvider: "openai-codex",
forwardedAuthProfileId: "openai:personal",
},
});
const harnessParams = pluginRunAttempt.mock.calls[0]?.[0];
expect(harnessParams?.runtimePlan).toBe(runtimePlan);
expect(Object.keys(harnessParams?.authProfileStore.profiles ?? {})).toEqual([
"openai:personal",
]);
expectRecordFields(harnessParams?.authProfileStore.profiles["openai:personal"], {
provider: "openai-codex",
});
});
it("blocks undersized models before dispatching a provider attempt", async () => {
mockedResolveContextWindowInfo.mockReturnValue({
tokens: 800,

View File

@@ -645,9 +645,10 @@ export async function runEmbeddedPiAgent(
if (!pluginHarnessOwnsTransport || !profileId) {
return false;
}
+    const profileCredentialProvider = attemptAuthProfileStore.profiles?.[profileId]?.provider;
    const runtimeAuthPlan = buildAgentRuntimeAuthPlan({
      provider,
-      authProfileProvider: profileId.split(":", 1)[0],
+      authProfileProvider: profileCredentialProvider ?? profileId.split(":", 1)[0],
sessionAuthProfileId: profileId,
config: params.config,
workspaceDir: resolvedWorkspace,