* feat(container): add isImageCurrent + getLogs + tailLogs + runOneShot to ManagedContainer
Four base-class additions ahead of the OpenClaw runtime migration so
the upcoming subclass doesn't have to re-implement them:
- isImageCurrent() — pure predicate comparing the existing container's
image ref to descriptor.defaultImage. Treats SHA-pinned variants as
matches. start() is unchanged; subclasses + service layers compose
the predicate where they want short-circuit behaviour.
- getLogs(tail) and tailLogs(onLine) — generic log primitives, thin
pass-throughs to ContainerCli.
- runOneShot(argv, opts) — sibling-container helper that spawns a
<name>-setup container with the same image+mounts+env (no ports/
health/restart), runs argv, force-removes after. Includes the
retry-on-name-collision behaviour previously bespoke to OpenClaw.
Hermes inherits unused surface only — no behavioural change. The
in-flight base-class tests cover all four primitives.
* fix(container): tighten getLogs error path + close runOneShot timeout-onLog leak; trim docstrings
- getLogs now distinguishes a missing container (returns []) from
other CLI failures (throws). Previously nerdctl's stderr ("Error:
no such container: …") leaked into the lines array as if it were
log output. isNoSuchContainer is exported from container-cli to
share the predicate.
- runWithOptionalTimeout wraps the caller's onLog so post-timeout
lines from the abandoned runCommand promise become no-ops; before
this, callers could see onLog fire after runOneShot had already
rejected, hitting state the caller may have torn down on the
timeout error.
- Tightens the new docstrings to one short line per the project
convention; drops a restating comment in the test file.
* feat(runtime): add ClaudeRuntime + CodexRuntime + factories
* refactor(host-adapters): switch wire-up + dispatch + health to runtime registry
main.ts registers ClaudeRuntime + CodexRuntime alongside Hermes. ACP
runtime resolves all three via the registry; legacy host-process
spawn is preserved as a fallback so unit tests that don't bootstrap
runtimes keep working. AdapterHealthChecker now reads runtime
snapshots through the registry — the embedded execAsync probe,
ADAPTER_HEALTH_COMMANDS table, and friendlyProbeFailure mapper
delete. As a side-effect this also fixes the Hermes "Unavailable"
chip (Hermes was missing from ADAPTER_HEALTH_COMMANDS).
Drops the standalone claude-code/prepare.ts and codex/prepare.ts
modules (their bodies are exported from the runtime files now).
* test(runtime): cover ClaudeRuntime + CodexRuntime descriptor + prep + factory
* fix(runtime): coalesce concurrent host-process probes; expose probedAt on snapshot
* fix(runtime): preserve acpx-core npx-wrapped spawn for claude + codex
The host-process runtimes were resolving the ACP spawn command
through their own getAcpExecSpec, which returned argv [claude] /
[codex] — bare binaries. acpx-core's built-in registry actually
resolves these adapters to npx wrappers around the official
ACP-aware packages (claude-agent-acp, codex-acp), and the package
version range is owned by acpx-core. The bare-binary spawn would
fail because either the binary is missing or doesn't speak ACP.
Spawn dispatch now goes through registry.resolve() + wrapCommandWithEnv
for claude/codex (matching pre-#967 behaviour). The runtime
registrations still drive health probing and per-turn prep — only
the spawn-command source-of-truth stays in acpx-core. Drops the
misleading getAcpExecSpec from the host-process runtime classes.
Regression test asserts the spawn command contains the npx package
name (claude-agent-acp / codex-acp) for each adapter.
* feat(runtime): introduce AgentRuntime types + interface + registry
Foundation for the unified agent-runtime abstraction. No adapter
migrates yet; the existing acpx-runtime, per-adapter prepare
modules, OpenClawService, HermesContainerService, and
adapter-health.ts all keep working unchanged.
This commit adds the data layer of the abstraction:
- `RuntimeDescriptor` discriminates the two kinds we ship today
(`'container'` | `'host-process'`). UI components route on this.
- `RuntimeState` is the union of both kinds' states — container
flow `not_installed → installing → installed → starting →
running → stopped`, host flow `cli_missing | cli_present |
cli_unhealthy`, plus the shared `errored` and
`unsupported_platform` terminals.
- `RuntimeStatusSnapshot` carries a single `isReady: boolean` so
the harness has one bit to read before spawning turns.
- `RuntimeAction` is a typed discriminated union — required args
(e.g. `agentId` for `'reset-wipe-agent'`) are compile-time
enforced, removing the previous footgun of optional args on a
string-keyed dispatch.
- `RuntimeCapability` lists every action a runtime can advertise;
`getCapabilities()` is the single switchboard the UI uses to
decide which buttons to render.
`AgentRuntime` interface declares the contract every runtime
implements: status snapshot + subscriber, capability list,
`executeAction(action)`, `buildExecArgv(spec)`, and per-agent home
dir. `prepareTurnContext` is intentionally absent until the first
adapter migrates so callers can't depend on a method that has no
implementation.
`AgentRuntimeRegistry` is a small class + module-level singleton —
adapters register themselves at boot, the harness/UI look up by
`adapterId`. `resetAgentRuntimeRegistry()` is for tests only.
Two error classes round it out: `ActionNotSupportedError`
(capability gate, mapped to HTTP 405 in a later phase) and
`RuntimeNotReadyError` (state gate at the runtime layer, distinct
from the container-layer's `ContainerNotReadyError`).
* feat(runtime): add ContainerAgentRuntime + HostProcessAgentRuntime abstract bases
* test(runtime): cover state translation, action dispatch, registry
* fix(runtime): gate host-process executeAction on capabilities; only stamp probe cache after probe resolves
* feat(container): add waitForContainerRunning primitive + typed error
Adds `ContainerCli.waitForContainerRunning(name, opts)` polling
`inspectContainer().running === true` until either the container
reports running or the timeout expires. Distinct from the existing
`waitForContainerNameRelease` (which waits for *deletion*).
Used by the upcoming managed-container layer between
`nerdctl create + start` and "container is ready for exec" so the
harness never spawns a turn against a half-started container —
which is the root cause of the silent first-turn failure on Hermes
today (`hermes-container.ts:130-160` returns immediately after
start).
Defaults sized for cold-start: 30s budget at 500ms cadence.
Throws `ContainerNotRunningError` (new, in `lib/vm/errors.ts`) on
timeout — distinct from `ContainerNameReleaseTimeoutError` so
callers can branch on "didn't come up" vs "didn't get cleaned up".
* feat(container): add ManagedContainer abstract base + state machine
Introduces the abstract base every container-backed agent adapter
will subclass. Owns the canonical state machine (not_installed |
installing | installed | starting | running | stopped | errored),
the lifecycle lock (per-process promise chain + cross-process file
lock), the gated `execute*` family, and the host↔container path
translator.
Subclasses provide only what's actually adapter-specific:
- `descriptor` (image, container name, supported platforms)
- `buildContainerSpec()` for the `nerdctl create` args
- `readinessProbe()` after the container reaches running
- `mountRoots()` for the path translator
Three execute methods, all sharing one invariant — every entry
point gates on state == running:
- `execProcess(spec)` spawns a long-lived child process via Bun,
waits through `starting` up to 60s, throws typed
`ContainerNotReadyError` if the container is not_installed /
stopped / errored / timed out.
- `execOneShot(spec)` is a buffered convenience wrapper.
- `buildExecArgv(spec)` is the pure builder for callers (acpx-core)
that need a shell-command string. Single source of truth for the
`env LIMA_HOME=… limactl shell <vm> -- nerdctl exec -i …` chain
that today's ACP runtime hand-rolls in two places (`acpx-runtime
.ts:780-820` and `:823-870`).
`reset(level)` is on the API surface but throws
`ResetNotSupportedError` so the next PR can wire soft / wipe-agent
/ hard without revving the abstract class.
Path translator uses lexical containment against declared mount
roots; the realpath-based symlink-escape check lives one layer up
(in the file-attribution code that already shipped) since the
translator itself never reads from disk.
* feat(container): HermesContainer subclass + wrapper-service bridge
`HermesContainer` (lib/container/managed/) is the first concrete
adapter on the new `ManagedContainer` base. Provides the four bits
that are actually adapter-specific:
- `descriptor`: image, container name, supported platforms,
readiness-probe tuning.
- `mountRoots()`: host↔container path mapping for the harness dir.
- `buildContainerSpec()`: nerdctl create args (env, mounts,
add-hosts, entrypoint override).
- `readinessProbe()`: execs `hermes --version` inside the
freshly-started container; bypasses the state gate via
`cli.exec` since we're in `starting`, not `running`, when the
probe runs.
`HermesContainerService` (api/services/hermes/) is rewritten as a
thin wrapper that delegates `prewarm` / `start` / `stop` /
`restart` / `shutdown` to the underlying `HermesContainer`. Public
surface is preserved so `main.ts`, `server.ts`, and
`agent-harness-service` compile unchanged in this PR; `getAccessor()`
still returns the structural `HermesAccessor` the ACP runtime
expects today (the runtime swap is the next commit). The wrapper
also exposes `getContainer(): HermesContainer | null` for callers
that want the richer surface.
The user-visible bug — Hermes silent first-turn failure — is fixed
as a side effect: `start()` now waits through
`cli.waitForContainerRunning` and runs the `hermes --version`
readiness probe before transitioning to `running`. Subsequent
chat turns are gated on the container actually being ready, not
just on `nerdctl create + start` having returned.
* feat(agent): ACP runtime spawns Hermes via ManagedContainer.buildExecArgv
`resolveHermesAcpCommand` no longer hand-rolls the
`env LIMA_HOME=… limactl shell <vm> -- nerdctl exec -i …` chain.
It now delegates to `gateway.buildExecArgv`, which the wrapper
service routes to the underlying `ManagedContainer.buildExecArgv`.
The structural `HermesGatewayAccessor` type gains one method
(`buildExecArgv`) — keeps the existing four getters so any
test/legacy caller still works. The wrapper's `getAccessor()`
delegates `buildExecArgv` to its `HermesContainer`. Net effect:
the `limactl shell ... -- nerdctl exec ...` argv chain has
exactly one owner (`ManagedContainer.buildExecArgv` in the
container layer) instead of being duplicated across `acpx-runtime`
and the now-deleted hand-built chain.
The OpenClaw branch (`resolveOpenclawAcpCommand`) is untouched —
its migration to ManagedContainer is a separate, larger PR that
also has to model the gateway / control-plane surfaces.
Tests: the existing acpx-runtime test suite expected the four
old getters; updated the Hermes-container fixture to also
provide `buildExecArgv` (mirrors the production builder inline so
the test stays independent of the production class wiring). All
320 server tests pass.
* test(container): managed-container + hermes-container coverage
20 cases across two files in `tests/lib/container/managed/`.
ManagedContainer base (14 cases):
- State machine: start() walks installing → starting → running;
probe-false lands errored with lastError populated; stop()
force-transitions to stopped even from errored.
- execProcess gating: rejects ContainerNotReadyError with
reason='not_installed' when never started; reason='errored'
when in errored state (preserving lastError); resolves once
state flips to running while waiting; reason='timeout' when
starting never resolves.
- buildExecArgv: snapshot test pinning the exact canonical
`env LIMA_HOME=… limactl shell <vm> -- nerdctl exec -i …` string
for the Hermes-shaped invocation; -e flags omitted when env is
empty.
- reset(level): throws ResetNotSupportedError for all three
levels (Phase 1 stub).
- Path translation: round-trip host ↔ container under a declared
mount; mount-root itself translates without suffix; rejects
PathOutsideMountsError for /etc/passwd / /proc/cpuinfo.
- subscribeState fires every transition, stops after unsubscribe.
HermesContainer subclass (6 cases):
- Descriptor declares adapterId='hermes', the canonical container
name, image, and darwin platform support.
- start() happy path reaches running + invokes the
`hermes --version` probe via cli.exec.
- Probe-non-zero start() lands errored with the right error.
- ContainerSpec built with idle entrypoint, harness bind-mount
(source = /mnt/browseros/vm/hermes/harness, target =
HERMES_CONTAINER_HARNESS_DIR), and host.containers.internal
add-host pointing at the VM gateway.
- toContainerPath maps host harness paths to /data/agents/harness.
- buildExecArgv produces the canonical Hermes ACP spawn string
with LIMA_HOME, container name, hermes binary path, and -e env.
Pre-existing test in tests/lib/container/container-cli.test.ts
(`waits until a container name is no longer resolvable`) flakes
under parallel test load on dev; passes solo. Last touched in
fd5aba24, well before this branch.
* chore: tidy comments
* fix(hermes): use provider:custom for openai + openai-compatible
Hermes (v2026.4.x) does not have a provider key called "openai" —
its `PROVIDER_REGISTRY` enumerates 33 named providers (anthropic,
deepseek, gemini, kimi-coding, etc.) and "openai" is not one of
them. Per the upstream docs, the canonical shape for any
OpenAI-compatible endpoint with an API key is:
model:
provider: custom
base_url: "<endpoint>"
When `base_url` is set, Hermes ignores provider lookup and calls
the URL directly using OPENAI_API_KEY (or the configured api_key).
Today's mapping wrote `provider: "openai"` for both BrowserOS
provider types — Hermes' main-model loader rejected that with
`unknown provider 'openai'`, and the harness surfaced an opaque
"Internal error" on every first chat for any Hermes agent backed
by a Fireworks / Together / Groq / OpenAI provider.
Fix:
- `openai` and `openai-compatible` BrowserOS types now both map
to `hermesProvider: 'custom'`.
- HermesProviderMapping gains an optional `defaultBaseUrl` field
used when `provider: 'custom'` is set with no caller-supplied
baseUrl (BrowserOS' `openai` type doesn't require base_url at
the API edge, but Hermes' `custom` always does — so we fall
back to https://api.openai.com/v1).
- writeHermesPerAgentProvider rejects `provider: 'custom'` with
no base_url so a future regression fails loudly instead of
silently writing an unusable config.yaml.
Tests updated: the existing openai-compatible case now asserts
`provider: "custom"` instead of `"openai"`, plus a new case
covering the openai-default-base-url fallback path.
Note: the `openrouter` mapping is left untouched because its
fix is unverified (Hermes' PROVIDER_REGISTRY doesn't appear to
contain "openrouter" either, but the auxiliary fallback chain
recognises it). Worth a separate follow-up — out of scope for
this fix which targets the user-reported reproduction.
* fix(container): install() must ensure VM is ready before image pull
Image operations run inside the Lima VM, so `nerdctl pull` fails
on a cold-boot run if the VM hasn't been started yet.
`HermesContainerService.prewarm()` (the original wrapper) always
called `vm.ensureReady()` before `ensureImageLoaded()` — the
wrapper-bridge introduced earlier in this PR delegated `prewarm()`
to `container.install()` and dropped the VM-ensure step.
`start()` does ensure VM, but on cold boot `prewarm()` and
`start()` race for the lifecycle lock and there is no guarantee
which one wins. When `prewarm()` lands first, the image pull
crashes against an unstarted VM and Hermes never comes up.
Fix: `install()` now awaits `deps.vm.ensureReady()` before
transitioning to `installing`. Errors land in `errored` exactly
as before. New regression test pins the call order
(`vm.ensureReady` → `loader.ensureImageLoaded`) so a future edit
can't silently re-introduce the gap.
* fix(agent): offset main content by collapsed sidebar width to prevent overlap
Add pl-14 (56px = w-14) to both main branches in SidebarLayout so the
content is always offset to the right of the fixed overlay sidebar.
Previously, on viewports narrower than ~1300px the expanded sidebar
would visually overlap the left edge of the centered content.
* fix(agent): DRY up sidebar offset — hoist pl-14 to parent div
Move pl-14 from the two <main> branches to their shared parent div
so any future layout branch gets the rail offset automatically.
Functionally equivalent; verified NewTabChat uses absolute inset-0
relative to its own <main>, so the chat layout is unaffected.
* feat(agent): add Hermes as a 4th ACPX adapter (Phase A)
Adds Hermes Agent (NousResearch/hermes-agent) as a host-process ACPX
adapter, mirroring the Claude Code pattern.
- agent-types.ts: extend AgentAdapter union with 'hermes'
- agent-catalog.ts: add Hermes catalog entry
- lib/agents/hermes/prepare.ts (new): minimal prepare using prepareBrowserosManagedContext
- acpx-agent-adapter.ts: register the adapter
- acpx-runtime.ts: add 'hermes' branch returning 'hermes acp' (host)
- AdapterIcon.tsx: add Hermes icon
- db schema + supporting frontend types/literals updated for the new adapter
Phase A scope: host-process only. Phase A.5 swaps to nerdctl exec
into a Hermes container.
OpenClaw is untouched. Verified by all 6 POC spikes
(plans/features/claude-browseros-hermes-poc/findings.md).
* fix(agent): address Hermes adapter review issues
- NewAgentDialog: add 'hermes' to onValueChange guard so the dropdown
option actually wires through onRuntimeChange/onHarnessAdapterChange
(was a no-op before — selecting Hermes silently kept previous value)
- tests/acpx-runtime: add coverage for the new 'hermes' registry branch
- tests/acpx-agent-adapter: fold hermes prepare test into existing file,
matching the pattern used for claude/codex/openclaw
- Delete tests/lib/agents/hermes-prepare.test.ts (now redundant)
- Reconcile install-mechanism comment between acpx-runtime.ts and
agent-catalog.ts
* fix(agent): make Hermes adapter actually work end-to-end
Two surgical fixes uncovered while running the Phase A smoke test
through the BrowserOS chat HTTP API:
1. lib/agents/hermes/prepare.ts — seed per-agent HERMES_HOME from
the user's global ~/.hermes/ on first use. ensureAgentHome only
writes SOUL.md and MEMORY.md; without seeding config.yaml, .env,
and auth.json, hermes acp comes up unconfigured and either hangs
or errors with "No LLM provider configured." Copy is idempotent
(skip if dest exists) so subsequent prepare calls don't clobber
per-agent edits.
2. lib/agents/acpx-runtime.ts — wrap the hermes spawn in
`bash -c "exec hermes acp | tee /dev/null"` to bridge Bun's
socketpair-based child stdio with Python's asyncio.connect_write_pipe
(which only drains correctly to a real pipe(2)). Without it, hermes'
stdout never reaches the harness — verified by inspecting hermes
process FDs: Bun gives the child unix sockets, asyncio queues writes
that never become readable on Bun's end. With tee in the middle,
hermes writes to a real pipe and tee bridges the bytes through the
socket. Verified 2026-05-06 against hermes-agent 0.12.0 on macOS
arm64 + Bun 1.3.6.
Smoke-test result with both fixes:
- ACP session created end-to-end
- BrowserOS MCP wired (96 browser tools registered with hermes)
- Reasoning + text streamed back through /agents/:id/sidepanel/chat
- Final stream: text-delta "PONG", finishReason "stop"
Updates the existing acpx-runtime test to assert the new spawn shape
(bash -c, tee /dev/null bridge) so the workaround can't silently regress.
* feat(agent): run Hermes adapter in Lima container (Phase A.5)
Move Hermes ACPX adapter from host-process spawn to running inside
docker.io/nousresearch/hermes-agent:v2026.4.30 in the existing
BrowserOS Lima VM, mirroring the OpenClaw container pattern.
Container lifecycle (api/services/hermes/hermes-container.ts):
- prewarm: ensure VM ready, pull image (or skip if already in
containerd), start an idle container with /bin/sh -c "exec sleep
infinity" so the harness can nerdctl exec into it per turn
- Tini bypassed — tini 0.19.0 in upstream image getopt-parses any
-x token even after PROGRAM, breaking /bin/sh -c
- --add-host host.containers.internal:<vm-gateway> so hermes inside
the container can reach the BrowserOS HTTP MCP endpoint
- Bind-mount <browserosDir>/vm/hermes/harness onto /data/agents/harness
so per-agent HERMES_HOME dirs are visible to the container
Spawn (acpx-runtime.ts):
- HermesGatewayAccessor interface (mirrors OpenclawGatewayAccessor)
- resolveHermesAcpCommand builds:
env LIMA_HOME=... limactl shell --workdir / browseros-vm --
nerdctl exec -i -e PYTHONUNBUFFERED=1 -e HERMES_HOME=... <container>
/opt/hermes/.venv/bin/hermes acp
- Absolute path /opt/hermes/.venv/bin/hermes (not bare "hermes") since
upstream image's PATH is set by its entrypoint script which we
override to keep the container idle
- Falls back to host-process spawn when no HermesGatewayAccessor wired
(test path / dev fallback)
- Drops the host-mode bash+tee workaround — limactl/SSH/nerdctl pipe
chain is sufficient for asyncio's pipe writer
MCP wiring:
- New PreparedAcpxAgentContext.browserosMcpHost field threads through
prepare → getRuntime → createBrowserosMcpServers
- Hermes prepare sets browserosMcpHost='host.containers.internal' so
the URL injected into newSession.mcpServers resolves from inside
the container; other adapters keep '127.0.0.1' default
Per-agent home (lib/agents/hermes/prepare.ts):
- HERMES_HOME points at /data/agents/harness/<agentId>/home (in-container)
- Host-side seedHermesHomeFromGlobal still copies ~/.hermes/{config.yaml,
.env, auth.json} into the per-agent home; the volume mount makes them
visible inside the container
- New api/services/hermes/hermes-paths.ts holds host/container path helpers
End-to-end smoke tests against the dev server (clean Lima state):
- Plain text: PONG round-trip via /sidepanel/chat ✓
- Multi-turn context: RUBY-7421 stored + recalled ✓
- Multi-agent isolation: agent 2 doesn't see agent 1's secret ✓
- MCP tool execution: mcp_browseros_browseros_info fires ✓
- Image attachment via /chat: model identifies "Red" from a 128x128 PNG ✓
- Concurrent turns + 409 attachUrl: full attach streams the in-flight
Pacific Ocean essay turn cleanly ✓
- Cancel midstream + recovery turn: ALIVE response ✓
- Persistence across server restart: agents survive ✓
Companion knowledge doc:
plans/features/claude-browseros-hermes-acp-knowledge.md
* feat(agent): per-agent provider/key for Hermes adapter
Lets users create multiple Hermes agents each with its own provider,
model, and API key. NewAgentDialog now shows provider/model/key fields
inline when 'Hermes' is selected. On submit, the harness writes the
per-agent <browserosDir>/vm/hermes/harness/<agentId>/home/{config.yaml,
.env} directly so the agent has the right config from turn 1 — no
dependency on the user having run `hermes setup` outside BrowserOS.
The existing seedHermesHomeFromGlobal flow remains as a fallback for
agents created without provider fields (e.g. via direct API or with
an existing ~/.hermes/ install).
Backend:
- shared/constants/hermes.ts: HERMES_SUPPORTED_PROVIDERS registry
(openrouter, anthropic, openai, custom — bedrock follow-up)
- api/services/hermes/hermes-paths.ts: writeHermesPerAgentProvider
- agent-harness-service: writes per-agent config.yaml + .env in
createAgent when adapter=hermes and apiKey present
- routes/agents.ts: relax modelId catalog validation for adapter=hermes
(catalog has empty models[] by design; per-agent modelId is free-form)
- tests/agent-harness-service: cover write + skip paths
Frontend:
- HermesProviderFields.tsx (new): provider dropdown, model field, API
key + optional baseUrl when provider=custom
- NewAgentDialog: render the new fields when adapter=hermes
- agents-page-actions: thread fields through createHarnessAgent
- AgentsPage / agent-harness-types: minor pass-through edits
Smoke-tested end-to-end against the dev server (clean Hermes per-agent
home, no ~/.hermes/ seed): create agent with apiKey + modelId, files
written at the per-agent path with mode 0600, first chat returns the
expected response, all without touching ~/.hermes/.
* feat(agent): source Hermes provider config from BrowserOS LLM providers
Replace the Hermes-specific provider/model/API-key form in New Agent
with a chooser that pulls from the same global LLM providers OpenClaw
uses (Settings → BrowserOS AI). Backend rejects creation with a 400
when the selected provider is missing required fields (apiKey, modelId,
plus baseUrl for openai-compatible) or is not in the Hermes-supported
set; the ~/.hermes/ fallback is removed so Hermes agents always carry
their own per-agent config.
* refactor(openclaw): TKT-788 cleanup — bump image, lock no-auth, delete observer + image bypass
Re-lands the openclaw-only changes from #934 (reverted in #953 because the
original PR's working tree had stale rollback content for
`packages/browseros/tools/patch/`). This commit is the same openclaw
diff with zero changes outside `packages/browseros-agent/`.
What changes (TKT-788 work-streams A + B + C):
WS-A — bundled gateway no-auth:
- Bump image from `ghcr.io/openclaw/openclaw:2026.4.12` to
`ghcr.io/browseros-ai/openclaw:2026.5.2-browseros.1` (BrowserOS-
pinned variant with the no-auth contract baked in).
- Configure gateway with `auth.mode: 'none'`; remove the device-auth
bootstrap dance that the older binary required.
- Delete the per-call token plumbing the http-client / observer / chat-
client carried (340 LOC). The harness still passes a stable token in
headers for backwards-compat with code that hasn't been re-pointed yet,
but it is no longer required by the gateway.
WS-C — delete the image-attachment bypass:
- The HTTP `/v1/chat/completions` carve-out for OpenClaw image turns
is gone. Image attachments now ride through ACP as image content
blocks (which acpx 0.6.x supports natively for openclaw, claude, codex).
- Delete `openclaw-gateway-chat-client.ts` (211 LOC) and `image-turn.ts`
(219 LOC).
- Drop `maybeHandleTurn` from the `AcpxAgentAdapter` interface and
the openclaw entry. `AcpxAdapterTurnInput` removed.
- Drop the corresponding 'diverts OpenClaw image turns to the gateway
chat client' test from `acpx-runtime.test.ts`.
WS-B — replace the WS observer with harness events:
- Delete `openclaw-observer.ts` (276 LOC) — no more parallel WS
subscription, no more `new OpenClawObserver`, no more
`ensureObserverConnected` / `observer.disconnect()` plumbing.
- Wire `AgentHarnessService` to receive turn-lifecycle events from
the runtime stream itself (`turnLifecycleListeners`) and feed
ClawSession from those, preserving the dashboard SSE shape.
Net: 314 insertions / 1144 deletions, all under
`packages/browseros-agent/`. Typecheck clean across all 6 packages.
946 server tests pass (1 unrelated CDP-dependent test skipped — same
state as origin/dev).
Reference: TKT-788. The patch-CLI rollback that was in the squash of
#934 is intentionally NOT in this commit.
* fix(openclaw): handle 2026.5.4 acp-cli envelope shapes (media + injected timestamp) + bump image
OpenClaw 2026.5.4 (the BrowserOS-pinned image variant with the no-auth
handshake bypass needed for cron tool calls from inside ACP) introduced
two new envelope prefix shapes that the post-bypass-deletion path now
surfaces in user-message text:
[media attached: <internal-path> (<mime>)]
[<weekday> <YYYY-MM-DD HH:MM> <TZ>] [Working directory: <path>]
<BrowserOS role envelope>
The previous cleaner only matched a leading [Working directory: ...]
\n\n line. With media + timestamp prefixes ahead of it the anchor
no longer matched, so image-attachment user turns rendered with
8+ lines of envelope leak in the chat panel.
Replaces the single OPENCLAW_WORKDIR_PREFIX with three content-shape-
anchored patterns chained through stripOpenClawAcpCliEnvelope():
1. [media attached: <path> (<mime>)] ← repeats per attachment
2. [<weekday> <YYYY-MM-DD HH:MM> <TZ>] ← injectTimestamp
3. [Working directory: <path>] ← acp-cli prefixCwd
Each is anchored on its content shape (media attached:, weekday
abbrev + ISO date, Working directory:) rather than just '[…]', so
user-typed lines that happen to start with brackets are not eaten.
Also bumps OPENCLAW_IMAGE from 2026.5.2-browseros.1 to
2026.5.4-browseros.1. The 5.2 image refused tool-side WS connections
with 'device identity required' even though gateway auth.mode=none —
PR #6 in browseros-ai/openclaw added the OPENCLAW_GATEWAY_PRIVATE_INGRESS_NO_AUTH
bypass that ships in 5.4. Without 5.4, the cron tool (and any other
tool that opens a fresh gateway WS from inside the embedded runner)
fails with 1008.
Verified end-to-end with the BrowserOS chat endpoint:
- Plain text turn: clean
- Image attachment turn: clean (was leaking 8 envelope lines pre-fix)
- One-shot kind:at cron fires, PING fire renders clean
- Second openclaw agent creates, runs, history isolated
15/15 history-mapper unit tests pass; typecheck clean across all
packages.
* fix: disable bundled OpenClaw gateway auth
* refactor(openclaw): delete token plumbing now that auth is locked off
Builds on the cherry-picked spike (#933). With gateway.auth.mode=none
locked in as the only path the bundled gateway runs, the BrowserOS-side
token machinery becomes dead weight. This commit deletes:
- OpenClawService: token field, tokenLoaded, gatewayAuthMode state
machine, getGatewayToken(), getGatewayHttpToken(),
ensureTokenLoaded(), refreshGatewayAuthToken(),
loadTokenFromConfig() and all six lifecycle call sites.
- OpenclawGatewayAccessor.getGatewayToken interface field.
- OpenClawHttpClient / OpenClawGatewayChatClient: optional getToken
constructor arg and authHeaders() helpers.
- OpenClawObserver: gatewayToken field/parameter and the auth.token
branch in the connect frame.
- GatewayContainerSpec.gatewayToken and the
OPENCLAW_GATEWAY_TOKEN env wiring; the
OPENCLAW_GATEWAY_PRIVATE_INGRESS_NO_AUTH=1 env is now always set
rather than conditional.
Test suites: dropped bearer-token assertions and the two persisted-token
tests in openclaw-service that asserted deleted behavior.
Net: -310 LOC across src + tests, with 118 openclaw + acpx tests still
green. Typecheck and biome clean.
Reference: TKT-788 (move OpenClaw integration to ACPX runtime), WS-A.
* refactor(openclaw): delete gateway image bypass, route image turns via ACP (TKT-788 WS-C) (#935)
* refactor(openclaw): delete gateway image bypass, route image turns through ACP
The browseros-ai/openclaw ACP bridge accepts image content blocks
natively (extractAttachmentsFromPrompt at openclaw/src/acp/event-mapper.ts:92,
forwarded via chat.send attachments at translator.ts:295), so the
BrowserOS-side carve-out that diverted image-bearing turns to the
gateway HTTP /v1/chat/completions endpoint is no longer needed.
Deletes:
- apps/server/src/api/services/openclaw/openclaw-gateway-chat-client.ts
- The corresponding test file
- AcpxRuntime.sendOpenclawViaGateway, persistGatewayTurn,
recordToOpenAIMessages helpers
- The image-attachment carve-out branch in AcpxRuntime.send
- openclawGatewayChat option from AcpxRuntime + AgentHarnessService
+ agent routes ctor wiring
- The randomUUID import (only the deleted helper used it)
- The acpx-runtime test for the deleted carve-out
Net: 614 LOC removed, 0 added, all 142 openclaw + acpx + agent tests
still green.
Reference: TKT-788, WS-C. Stacked on WS-A (#934).
* refactor(openclaw): delete WS observer, feed ClawSession from harness events (#936)
The openclaw-observer.ts WebSocket observer was a second tap on the
same gateway events the AcpxRuntime already sees as ACP session/update
notifications. Replace it with a pull from the AgentHarnessService's
turn lifecycle stream — keeping ClawSession and the /openclaw/dashboard
SSE endpoint shape unchanged for the BrowserOS UI.
Changes:
- AgentHarnessService: emit `turn_started` / `turn_event` / `turn_ended`
to subscribers via a new `onTurnLifecycle(listener)` API. Wired around
the existing `notifyTurnStarted/Ended` calls and inside the
per-event read loop.
- agents route: forward an optional `onTurnLifecycle` dep into the
service it constructs.
- server.ts: subscribe and route OpenClaw-adapter events to
`OpenClawService.recordAgentTurnEvent(agentId, sessionKey, event)`.
- OpenClawService: new `recordAgentTurnEvent` method that maps stream
events to ClawSession transitions (working/idle/error + currentTool
from `tool_call` events). Keeps the existing
`onAgentStatusChange` / `getAgentState` / `getDashboard` API.
- Delete `openclaw-observer.ts` (276 LOC) and all observer wiring
(`new OpenClawObserver`, `ensureObserverConnected`, three
`observer.disconnect()` call sites, the import).
Net: 276 LOC removed from the observer; ~130 LOC added across harness
event plumbing + recorder method. -146 LOC overall, all 141 tests still
green, typecheck clean, biome clean.
Reference: TKT-788, WS-B (Path 1: keep ClawSession + dashboard SSE shape).
Independent of WS-A (#934) and WS-C (#935); will rebase on top of
whichever lands first.
---------
Co-authored-by: Nikhil Sonti <nikhilsv92@gmail.com>
* fix(openclaw): drop BrowserOS-envelope regexes in history mapper
Replace the four BrowserOS-side regex strips (`<role>`,
`<user_request>`, `<system-reminder>`, `[Working directory:]`)
in history-mapper with a single call to
`unwrapBrowserosAcpUserMessage`. That helper is the same exact-string
matcher acpx-runtime already uses for non-OpenClaw history paths
(chat history endpoint, listing's `lastUserMessage`); it anchors on
the exact constants `buildBrowserosAcpPrompt` writes, so matcher and
wrapper travel together.
Also drops two patterns that were defensive-only with no emit site in
the codebase (`[Working directory:]` prefix and trailing
`<system-reminder>` block), and updates the corresponding tests to
use the realistic envelope shape `buildBrowserosAcpPrompt` actually
produces.
The OpenClaw-injected scaffolding patterns (cron prefix, queued-
marker, subagent context) stay in place for now — replacing those
needs either a side-channel cache keyed on cron job id or a structured
`trigger` field on the gateway's history schema, tracked as a
follow-up.
* fix(openclaw): strip acp-cli's [Working directory:] prefix before BrowserOS unwrap
The previous commit incorrectly removed the workdir-prefix strip on the
assumption it was speculative defensive code with no live emit site.
Actually emitted by OpenClaw's acp-cli (`/app/dist/acp-cli-*.js` line
1361, `prefixCwd ? \`[Working directory: ${displayCwd}]\\n\\n...` style),
so live history rendering regressed: every user message surfaced with
a `[Working directory: /Users/...]\\n\\n<role>...` envelope intact.
Restore the strip as an exact-shape line match (`^\\[Working directory:
[^\\]]*\\]\\n\\n`) anchored on the closing bracket + double-newline so
path content is consumed without a content-shape regex. Apply it
ahead of `unwrapBrowserosAcpUserMessage` so the BrowserOS unwrap's
`^<role>` anchor can match the now-leading envelope.
Also fix the test fixture: the BrowserOS unwrap performs exact-prefix
match against the full `BROWSEROS_ACP_AGENT_INSTRUCTIONS` constant —
truncated `<role>...` test bodies didn't match. Tests now use the
verbatim constant text via a shared `ROLE_BLOCK` helper.
Verified live: 8/8 history entries render with no envelope leaks.
* feat(openclaw): aggregate sub-session history into agent's main session
Cron-triggered (and hook/channel-triggered) runs land in their own
ephemeral session files under the parent agent's directory:
/home/node/.openclaw/agents/<agentId>/sessions/<runId>.jsonl
The chat panel reads agent:<id>:main, so autonomous runs were invisible
in history even though they fired and persisted on disk.
This change makes `getSessionHistory(agent:<id>:main)` enumerate every
session under that agent (via the existing `sessions.list` gateway RPC)
and merge their messages into one chronological response. Each merged
message is tagged with `source` (main / cron / hook / channel) and the
sub-session's key, so the UI can render section markers without
re-parsing.
Filesystem isolation is enforced upstream — `sessions.list({ agentId })`
resolves to that agent's directory only (browseros-ai/openclaw
src/config/sessions/combined-store-gateway.ts:90), so no cross-agent
leakage is possible.
Behavior:
- Main session keys (`^agent:[^:]+:main$`) → aggregate
- Any other key → existing single-session behavior
- Sub-session fetch failures → logged + dropped (partial timeline
preferable to a hard failure that hides main)
- `limit` applied post-merge across the unified timeline
- Streaming variant (`Accept: text/event-stream`) unchanged for now
Reuses the pre-existing `cliClient.listSessions` and
`httpClient.getSessionHistory` — no new gateway integration.
Validation:
- bun typecheck clean
- bunx biome check clean
- 44 openclaw service + route tests pass
* feat(openclaw): wire chat panel history through gateway aggregation
Adds the missing seam between the chat panel's history fetch and
OpenClawService's aggregated history.
Before this change:
- Chat panel calls GET /agents/<id>/sessions/main/history
- AgentHarnessService.getHistory delegates to AcpxRuntime.getHistory
- AcpxRuntime reads ~/.browseros-dev/agents/acpx/sessions/<key>.json
- That local file is only written by AcpxRuntime.send (user turns)
- Cron / hook / channel turns persist on the gateway side instead
- Panel sees user turns only; autonomous turns are invisible
After this change:
- OpenClawProvisioner gains optional getAgentHistory(agentId) method
- AgentHarnessService.getHistory branches on adapter — for openclaw,
routes through the provisioner instead of the runtime
- server.ts wires the provisioner method to call
OpenClawService.getSessionHistory("agent:<id>:main") which already
aggregates main + every sub-session
- New history-mapper.ts converts OpenClaw rich content blocks
(text/thinking/toolCall/toolResult) into AgentHistoryEntry shape
the chat panel consumes
Layering preserved:
- AcpxRuntime untouched, still generic, zero services/openclaw imports
- AgentHarnessService still talks only to abstract OpenClawProvisioner
- server.ts is the single concrete-binding seam (same place that
wires createAgent, removeAgent, getStatus)
- Other adapters (claude, codex) keep their existing local-file
history path — no behavior change for them
Tool-call pairing: assistant `toolCall` blocks are stored by
toolCallId; subsequent `toolResult` (role: 'tool') messages mutate the
same AgentHistoryToolCall reference to attach output / error, so the
UI renders complete tool entries instead of orphan inputs.
Net: +240 LOC, 1 new file, AcpxRuntime untouched, 117 tests still pass.
* feat(openclaw): paginate aggregated history + strip prompt scaffolding
Two follow-ups on the aggregation work, both required for the chat
panel to render OpenClaw history cleanly.
1. Compound-cursor pagination across sub-sessions
The previous aggregation always returned the full merged window with
cursor=null/hasMore=false, which broke "load more" in the chat panel
once an agent's history grew beyond a single page (every cron job
spawns a sub-session, so this hits quickly).
Per-session cursor support already exists on the gateway HTTP endpoint
(`session-history-state.ts:paginateSessionMessages`). The aggregator
now threads each session's cursor through and emits a compound cursor
encoding `{<sessionKey>: messageSeq | null}`, base64url JSON. A `null`
slot means the session is exhausted; subsequent pages skip it.
The gateway records the per-session monotonic seq inside the
`__openclaw.seq` extension envelope rather than the top-level
`messageSeq` field; the cursor reads from there. The wire-shape type
gains an optional `__openclaw?: { id?, seq? }` field reflecting that.
2. Strip OpenClaw + BrowserOS scaffolding from history user messages
Cron-fired user messages on the gateway side carry an OpenClaw
template:
[cron:<uuid> <name>] <payload>
Current time: ...
Use the message tool if you need to notify the user directly with an
explicit target. ...
BrowserOS-initiated turns carry the ACP system prefix:
[Working directory: ...]
<role>...</role>
<user_request>
<actual user text>
</user_request>
<system-reminder>...</system-reminder>
Both surface verbatim in the chat panel today. Add
`cleanHistoryUserText` (in history-mapper) which extracts:
- the cron payload (and drops the trailer)
- the user_request body (and drops the role / working-dir / system-
reminder envelopes)
Non-matching text falls through unchanged so future patterns we don't
recognize stay visible rather than getting silently dropped.
Verified end-to-end:
- /agents history endpoint now returns clean text per item
- Pagination cursor advances across pages with correct seq ordering
- Chat panel renders messages as `print('hello')`, `hey`, etc.
(no leaked envelopes or trailers)
- 8 new unit tests for cleanHistoryUserText + the converter, +
86 existing openclaw tests still pass
* feat(openclaw): handle queued-marker concatenation in history cleaner
When multiple cron prompts (or any prompts) arrive while a turn is
still active, BrowserOS's harness queue concatenates them into a
single user message joined by a marker line:
[Queued user message that arrived while the previous turn was still active]
That blob renders as one wall of text in the chat panel — and worse,
the cron-prompt cleaner doesn't fire because the message no longer
*starts* with `[cron:...]`. cleanHistoryUserText now splits on the
queued-marker line and runs each chunk through the per-message cleaner
(cron-prompt extraction or BrowserOS-prefix unwrap), then joins the
non-empty results with single newlines so each prompt renders as its
own visually distinct line.
Verified live: a 6926-char queued blob containing five concatenated
[cron:...] prompts now renders as five short `print('hello')` lines.
+ 2 unit tests covering split + leading-marker edge case.
* feat(openclaw): drop subagent context + reasoning-only assistant turns
Two new patterns surfaced during e2e cron testing.
1. [Subagent Context] prefix: when an OpenClaw agent invokes a nested
subagent, the subagent's session is seeded with a user message:
[Subagent Context] You are running as a subagent (depth N/M). ...
Begin. Your assigned task is in the system prompt under **Your Role**.
The actual task lives in the subagent's system prompt; the user
message body is pure scaffolding. cleanHistoryUserText now returns
empty for these so the converter drops the entry — no empty bubble.
2. Reasoning-only assistant turns: MiniMax with thinking:minimal often
returns content with only `thinking` blocks and no `text` block on
trivial prompts ("Print hello"). The empty text bubble plus dangling
reasoning collapsible reads as a broken UI. The converter now skips
any entry where text is empty AND there are no tool calls (regardless
of reasoning).
Trade-off: reasoning-only turns lose their reasoning collapsible. The
alternative (empty-bubble cards) is worse. If we want to preserve the
reasoning, surface it as the bubble's text — separate UI decision for
later.
+ 3 unit tests covering both patterns.
* feat(server): foundation for OpenClaw agent file-output attribution
Phase 1 of TKT-762 — surface files OpenClaw agents produce as
artifacts inline in chat + a per-agent Outputs rail. This commit
lays the storage + I/O foundation only; turn-lifecycle wiring,
HTTP routes, and UI follow in subsequent phases.
- New `produced_files` Drizzle table (FK→agent_definitions with
cascade, unique on (agent, path) so re-modifications upsert).
Migration 0002_chemical_whirlwind.sql. Adapter-agnostic schema
— V1 only enables the watcher for openclaw, V2 can plug Claude
/ Codex into the same table without migrating.
- `ProducedFilesStore` — snapshot/finalize-turn diff API plus
by-turn / by-agent queries and a path-resolver that enforces
workspace-root containment for the download / preview routes.
- `walkWorkspace` — bounded recursive workspace walker; skips
symlinks (no host-fs smuggling), excludes node_modules / .git /
.cache, hard-capped at 50k entries / depth 16.
- `file-preview` helper — extension + magic-byte MIME detection,
bounded text-snippet reader (1 MB cap), inline image base64
reader (4 MB cap). Streaming download path lives in the route
layer (next phase) — this module only handles the small
in-memory reads the preview UX needs.
* feat(server): attribute openclaw turn outputs to the harness layer
Phase 2 of TKT-762 — wire the per-turn workspace diff into the
single dispatch path that owns every turn's lifecycle. Two prior
wiring points the original plan named (the OpenClaw HTTP chat
route + OutboundQueueService.tryDispatch) were collapsed in dev
into agent-harness-service.runDetachedTurn — both direct sends
and queued sends route through it now, so a single hook covers
both. The old `OutboundQueueService` is gone; its successor
`message-queue.ts` re-enters runDetachedTurn for the queued
case, so we still only need to bracket once.
Changes:
- New `produced_files` variant on `AgentStreamEvent` so the
inline artifact card has a wire-format hook independent of the
REST API.
- `ProducedFilesStore` gains `resolveAgentDefinitionId` to bridge
gateway-side openclaw agent names to the harness's
`agent_definitions.id`, handling both the reconciled-row shape
(id == openclaw name) and the BrowserOS-created shape
(id = oc-<uuid>, name = openclaw display name).
- `AgentHarnessService.runDetachedTurn`: snapshot the openclaw
workspace before `runtime.send(...)`, finalize the diff in the
outer finally, push the resulting rows as a `produced_files`
event. Adapter-gated to openclaw only — Claude / Codex agents
write to the user's own filesystem and don't need
attribution.
- Skip attribution on user-cancel (`abort.signal.aborted`) so
the side effects of an aborted turn don't get surfaced as
"outputs you asked for." On runtime errors we still attribute,
because partial outputs are what the user is most likely to
want to recover.
- Lazy-init the store via `tryGetProducedFilesStore()` so tests
that swap in a fake `agentStore` don't trip the
process-wide `getDb()` initialisation guard.
- File attribution extracted into `attributeTurnFiles` helper to
keep `runDetachedTurn`'s cognitive complexity under the lint
ceiling.
Verifications:
- Server tsgo --noEmit clean for changed files.
- 162/162 server-api tests pass.
- Biome lint clean on all three changed files.
* feat(server): expose produced-files HTTP API for /agents
Phase 3 of TKT-762 — surface the rows Phase 2 attributes via four
read-only endpoints under the existing `/agents` router. Mounted
where the agents page already polls so the rail UI doesn't add
a second router/origin to its trust boundary.
Routes:
- GET /agents/:agentId/files
Outputs-rail data, grouped by the assistant turn that
produced each batch, newest first. `?limit=` clamps to N
rows server-side (default 200).
- GET /agents/:agentId/files/turn/:turnId
Per-turn refresh — used by the inline-card consumer to
rebuild metadata after the SSE `produced_files` event lands,
and by direct fetches that missed the live event.
- GET /agents/files/:fileId/preview
Discriminated `FilePreview` JSON: text snippet (≤1MB),
base64 image (≤4MB), pdf metadata, or `binary` placeholder
when neither preview path applies. 404 when the file id is
unknown OR the on-disk file disappeared after attribution.
- GET /agents/files/:fileId/download
Streams raw bytes via `Bun.file().stream()` with
`Content-Disposition: attachment` and the detected MIME
type. The fileId is opaque — the server resolves the agent
and on-disk path; the client never sees a path, so traversal
is impossible by construction.
Service layer:
- `AgentHarnessService` gains `listAgentFiles`,
`listAgentFilesForTurn`, `previewProducedFile`, and
`resolveProducedFileForDownload`. All four are no-ops for
claude / codex adapters (they return null/[]) so the route
contract stays uniform across adapters even though only
openclaw produces rows in v1.
- New `ProducedFileEntry` and `ProducedFilesRailGroup` DTOs —
trimmed wire shapes that strip `agentDefinitionId` and
`sessionKey` from the on-disk row.
Verifications:
- Server tsgo --noEmit clean for changed files (only pre-
existing `Bun` global warning).
- 162/162 server-api tests pass.
- Biome clean on both changed files.
Smoke-test instructions for the route shape live in the plan
under §6 and §8; full end-to-end smoke happens in Phase 6.
* feat(agent): client-side hooks + types for agent file outputs
Phase 4 of TKT-762 — frontend foundation for the inline artifact
card and the per-agent Outputs rail. UI components themselves
land in Phase 5; this commit only adds types, hooks, and shared
helpers so the wiring is in place when the components arrive.
New module: `apps/agent/lib/agent-files/`
- `types.ts` — `ProducedFile`, `ProducedFilesRailGroup`, and the
discriminated `FilePreview` union, mirrored from the server-side
DTOs in `apps/server/src/api/services/agents/agent-harness-service.ts`.
The `agentDefinitionId` / `sessionKey` columns on the on-disk
rows deliberately do NOT exist at the type boundary — clients
refer to files by opaque `id`.
- `file-helpers.ts` — pure helpers: `inferFileKind` (icon
routing), `formatFileSize`, `extensionOf`, `basenameOf`,
`buildFileDownloadUrl`. No React, no fetch, no DOM — anything
stateful belongs in the hooks.
- `useAgentOutputs.ts` — `useAgentOutputs(agentId)` for the rail,
`useAgentTurnFiles(agentId, turnId)` for the inline card,
`useInvalidateAgentOutputs()` for the chat-stream-completion
hook (Phase 5 will plumb this), and `useRefreshAgentOutputs()`
for the rail's manual refresh button.
- `useFilePreview.ts` — `useFilePreview(fileId)` with
`staleTime: Infinity` (previews are immutable for a given id;
no point refetching on focus). Always opt-in (`enabled`) — the
preview only loads when the user clicks a row.
- `index.ts` — barrel re-export so consumers import from one path.
Touched in `apps/agent/entrypoints/app/agents/`:
- `agent-harness-types.ts` — added `produced_files` variant + the
`HarnessProducedFile` type to `AgentHarnessStreamEvent`. Mirrors
the server-side change from Phase 2 so the client SSE consumer
type-narrows correctly.
- `useAgents.ts` — exported the previously-private `agentsFetch`
helper and the `AGENT_QUERY_KEYS` registry so the agent-files
hooks reuse them without duplicating fetch / key conventions.
Three new keys added: `agentOutputs`, `agentTurnFiles`,
`filePreview`.
Verifications:
- Agent tsgo --noEmit clean.
- Biome clean on all touched files.
* feat(agent): inline artifact card + per-agent outputs rail
Wires the chat surface to the produced-files API shipped earlier:
- Inline artifact card under each assistant turn that produced files,
populated by the live `produced_files` SSE event (resumes also stamp
`turnId` so a missed live event can fall back to the per-turn fetch).
- Collapsible right-side Outputs rail on the agent conversation page,
grouped by turn, with Refresh + per-agent open/close persistence in
localStorage. Gated to openclaw adapters in v1.
- Shared file preview Sheet branches on the FilePreview union: text
snippet (markdown for `.md`/`.mdx`, otherwise pre+code), image data
URL, and download-only fallback for pdf/binary/missing.
- Conversation hook invalidates the rail's React Query cache from its
finally block so newly attributed files appear without a manual
refresh.
* feat(agent-files): polish — symlink-safe paths + toast on failures
- `resolveFilePath` now rejects symlink-escapes from the workspace
by realpath-resolving both endpoints and re-checking containment.
Lexical traversal (`..` segments) still fails fast without
touching the filesystem.
- Added `produced-files-store.test.ts` with 6 path-resolution cases
including a symlink whose target lives outside the workspace
root — the prior string-only check would have allowed this.
- File preview Sheet: surfaces preview-load failures in a toast
(in addition to the inline error block, which is easy to miss
when the body has scrolled). Download button now intercepts the
click so a missing baseUrl shows a toast instead of silently
hiding the button.
- Outputs rail: refresh failures fire `toast.error` with the
underlying message.
* fix(agent-files): drop duplicate `/agents` prefix from client paths
`agentsFetch` / `buildAgentApiUrl` already prepend `/agents`, but
the file-output hooks were passing fully-qualified paths
(`/agents/<id>/files`, `/agents/files/<id>/preview`, etc.) which
resolved to `/agents/agents/...` and 404'd. Fixed the four call
sites to pass paths relative to the `/agents` root.
* fix(agents): strip openclaw role envelope from chat history
PR #924 introduced a second `<role>…</role>` prefix for openclaw
turns — a single-line block distinct from the multi-line BrowserOS
role TKT-774 wired the unwrap against. Because TKT-774's
`stripOuterRoleEnvelope` matched the BrowserOS prefix exactly, the
openclaw envelope sailed through unstripped and user messages on
openclaw agents rendered the full preamble in /sessions/main/history
responses.
Make the strip adapter-agnostic: any
`<role …>…</role>\n\n<user_request>\n…\n</user_request>` shape gets
unwrapped. Drops the now-unused BROWSEROS_ACP_AGENT_INSTRUCTIONS
constant and adds a regression test that uses the openclaw form
verbatim.
* feat(agent-files): inline file-card strip with rail deep-link
Replaces Phase 5's row-list ArtifactCard with a horizontal strip
of small file cards under any assistant turn that produced files.
Click a card → opens the FilePreviewSheet directly (preview +
download). Click View / +N → opens the per-agent Outputs rail and
scrolls / expands the matching turn group.
The card strip:
- Caps at 4 visible cards; remainder collapses into a +N pill that
shares the View handler.
- Owns its own FilePreviewSheet instance (parallel to the
deprecated ArtifactCard) so the per-card preview path doesn't
fight with the rail's Sheet.
- Hidden during streaming and absent when producedFiles is empty.
- Adapter-gated upstream: AgentCommandConversation only passes the
open-rail callback when adapter==='openclaw', so claude / codex
agents render no rail-opening affordance.
Rail changes:
- Accepts focusTurnId + onFocusTurnConsumed; the matching
RailTurnGroup expands and scrollIntoView's on focus, then fires
the consumed callback so the parent can drop the URL state.
- ?outputsTurn=<turnId> deep-links work: external nav opens the
rail, sets focusTurnId, and clears the param after consumption.
ArtifactCard is marked @deprecated; remove in a follow-up once
nothing imports it.
* fix(agent-files): keep file-card strip visible after history reload
After Phase 7 the inline FileCardStrip vanished as soon as a turn
finished: `filterTurnsPersistedInHistory` dropped the optimistic
turn once history reloaded, and history items don't carry
`producedFiles`. So the user could see a file produced inside an
assistant message but no card to open it.
Two fixes in tandem so the strip survives both the just-finished
case AND a fresh page load:
- New `selectStripOnlyTurns` keeps persisted turns that still
carry `producedFiles`. `ConversationMessage` learns a
`stripOnly` mode that renders only the trailing strip (no
duplicate user/assistant bubbles, since those are rendered by
`ClawChatMessage`).
- `AgentCommandConversation` now also calls `useAgentOutputs` and
passes `tailStripGroups` to `ClawChat`. Each rail group not
already covered by a live or strip-only turn renders as its own
tail `FileCardStrip` after history. Dedup keys on `turnId` so
the same turn never doubles up.
Adapter-gated upstream — claude / codex agents skip the
useAgentOutputs fetch entirely. The card click still opens the
preview Sheet directly; View / +N still deep-link to the rail at
the matching turn group.
* fix(agent-files): per-turn association + cache invalidation
Two fixes for the inline file-card strip:
1. Strips were stacking at the conversation tail because every
produced-files group rendered as a tail strip after history.
New `mapHistoryToProducedFilesGroups` matches each group to
the assistant history message that came from its turn — by
`group.turnPrompt` vs the first non-blank line of the
preceding user message — and ClawChat renders the strip
directly under that bubble. Groups that don't match any
history pair (orphans) still fall through to the tail.
2. `useInvalidateAgentOutputs` was passing `undefined` as the
baseUrl placeholder to `invalidateQueries({ queryKey })` —
react-query's positional partial-match doesn't treat
undefined as a wildcard, so the cache stayed stale until the
query refetched on its own (e.g. window focus). Switched to
predicate-based invalidation that matches by [agentOutputs
marker, agentId] regardless of baseUrl. Same for the per-turn
files key.
Net effect: send a turn that produces files → strip appears
under the just-finished assistant message; reload the page →
strips still appear under the right bubbles, not bunched at
the bottom.
* fix(agent-files): review feedback — name guard, RFC 5987, limit cap
Three review-flagged issues:
1. Path traversal via agent display name — `getHostWorkspaceDir`
accepted any string and `path.join`'d it, so a name like
`../../tmp` escaped `.openclaw`. The pre-turn snapshot would
then walk that escaped directory and attribute every file to
the new turn; resolveSafeWorkspacePath's containment check is
relative to the same escaped root so it would later serve
arbitrary host paths. Added `isAgentWorkspaceNameSafe` (rejects
`..`, separators, control chars, leading dots, empty); the
builder now throws on unsafe names plus a defensive
realpath-style containment check after the join. Harness
wraps the call so the path-traversal trip just disables file
attribution for the turn instead of failing the whole send.
Six-case regression test pinned.
2. `encodeRfc6266Filename` JSDoc claimed an RFC 5987
`filename*=UTF-8''<percent-encoded>` fallback but the impl
only stripped CRLFs/quotes. Now actually emits the fallback
when non-ASCII is present; helper returns the full
`filename="…"; filename*=UTF-8''…` attribute pair so the call
site doesn't have to wrap in quotes.
3. `/agents/:agentId/files` `?limit=` was forwarded to the DB
uncapped — extracted `parseAgentFilesLimit` that clamps to
[1, 500] before forwarding.
Also extracted `resolveSafeWorkspaceDir` + `snapshotWorkspaceForTurn`
helpers off `runDetachedTurn` so the new safety branch doesn't
push it past biome's cognitive-complexity cap.
* feat(agent): calm composer + redesigned hero on /home
Adopt the Variant A redesign aesthetic on /home — hero text and
composer styling only. shadcn primitives and CSS variables
unchanged; conversation-screen composer untouched.
Hero:
- Larger display title (clamp 36→56px, weight 600, tighter
letter-spacing, balanced wrap).
- Italic muted span around "work on" — small typographic accent
that makes the hero read as designed rather than default.
Composer (variant="home" only):
- Internal dashed divider between the typing area and the footer
chip row. The visual cornerstone of the calm aesthetic.
- Footer chips become 24px pill-shaped (rounded-full), ghost-on-
idle / muted-bg-on-hover. Workspace and Tabs show muted trailing
values inline (none / 0).
- Agent selector on the far left of the footer gets a filled-pill
trigger variant (bordered, accent/40 background, mono name) to
visually anchor the row. AgentSelector exposes a triggerVariant
prop (ghost | pill); chat surface keeps the existing ghost.
- Subtle 1px vertical divider between the agent pill and the rest.
- Right-aligned keyboard hint (↵ to run · ⇧↵ new line) using kbd
elements with the existing accent/border tokens.
- Outer shell gains a soft accent-orange focus-within ring.
Out of scope (future PRs): TRY suggestion chips, eyebrow strip,
recent-agents redesign, activity log.
* fix(agent): textarea bg leaks in dark mode
* style(agent): paint hero italic span in accent orange
* feat(agent): adopt calm composer aesthetic on chat-screen too
Bring the calm-composer footer (dashed divider, pill chips,
keyboard hint) over from /home to /agents/:agentId so both
surfaces share one design language.
- Rename HomeContextControls → CalmContextControls; the agent
selector is conditional via showAgentSelector, so chat hides it
while home keeps the filled agent pill on the left.
- Drop the legacy ContextControls function entirely (~140 LOC) and
collapse the variant branching at the call site to a single
CalmContextControls render.
- Add the same focus-within accent ring to ConversationShell that
HomeShell already has, so the focus signal is consistent.
The chat composer's Stop button (between textarea and voice mic)
is unchanged — it lives outside the footer chip row and only
surfaces while streaming.
---------
Co-authored-by: DaniAkash <DaniAkash@users.noreply.function>
* feat(agent): /home composer parity with image attachments
The /home composer used the same ConversationInput component as the
chat screen but passed attachmentsEnabled={false}, and the home →
chat handoff was a URL search param `?q=<text>` that physically
can't carry binary attachments. Pasting a screenshot at /home did
nothing.
Add a small in-memory registry (pending-initial-message.ts) as the
rich-data side channel for the same navigation: the home composer
writes { agentId, text, attachments } there before navigating; the
chat screen consumes it on mount and replays through the existing
harness send() path that already supports attachments. URL `?q=`
stays for shareable text-only prompts; the registry wins when both
are present. Module-scope, 10s TTL, destructive consume.
Net: home is now flagged attachmentsEnabled={true}; users can paste,
drag, or pick image files at /home and they survive the navigation
into the chat screen with previews intact.
* docs(agent): clarify why initial-message ref reset is safe post-registry-fire
* feat(agent): rich rail + header on /agents/:agentId chat
Replace the chat screen's legacy AgentEntry rail and binary READY
header with the same rich data the /agents page already exposes:
adapter glyph, liveness dot, pin star, status badge, adapter · model ·
reasoning chip line, last-used time, lifetime tokens, queue count,
and the Adapter Unavailable warning. Source of truth flips from the
merged AgentEntry list to useHarnessAgents() directly.
Sort order matches /agents (pinned → recency) — not /home
(active-first → recency) — because chat is index-shaped and shuffling
rows every 5s as turns transition would be jarring while reading.
Lift the inline pin-then-recency comparator out of /agents
AgentList.tsx into a shared agents-list-order.ts so both surfaces
stay on identical sort semantics.
* fix(agent): chat header height + composer sticking to bottom
Header was clipping descenders because the strip was vertical-content
sized at min-h-14 with tight py-2.5; bump padding and lean on natural
content height. Drop the AgentTile glyph (the rail row already shows
adapter identity) and the cwd path (too long, pushed the meta line
off-screen). Header is now name + pin star + status pill, then
adapter · model · reasoning, then last-used · tokens · queued.
Composer was floating mid-screen on short chats because the chat
grid had no grid-template-rows — the implicit auto row collapsed to
content height, so the right-column flex wrapper never received the
full container height. Add grid-rows-[minmax(0,1fr)] so the single
row claims 100% and ClawChat's flex-1 expands to push the composer
flush to the bottom.
* fix(agent): composer flush to bottom on short chats
Match the sidepanel chat's nested-flex pattern. The right-column
wrapper got h-full so it expands to the grid row; the conversation
controller's root added flex-1 so ClawChat's existing flex-1 has
something to actually fill against. Without these, the grid cell
stretched but the inner flex columns shrank to content height,
leaving the composer floating mid-screen.
* fix(agent): align rail header with chat header in shared top band
Pull the rail's "Agents" + back-button into the same horizontal strip
as the agent identity header. The two halves now sit on a single row
that spans both columns, so they can't drift in height as the chat
header gains/loses meta lines (last-used, tokens, queued).
The rail below the band keeps its scrollable list only; the chat
column below holds the conversation + composer. Border-bottom moves
from ConversationHeader to the band wrapper so we don't get a
double-rule on the boundary.
* fix(agent): reserve header height to prevent layout shift on data load
The chat header grew from a single line to three lines once the
useHarnessAgents() poll resolved (adapter chips + meta line populate
asynchronously), shoving the rail and conversation body downward.
Lock min-h-[84px] on both the band's left "Agents" cell and the
ConversationHeader root, and always render the meta line slot
(non-breaking space when empty) so the typographic frame is stable
regardless of data state.
* refactor(agent): pull status pill + meta to right side of chat header
Two-column header layout instead of three stacked rows: name + pin
star + adapter chips on the left, status pill stacked on top of the
last-used / tokens / queued meta line on the right. Drops min-h
from 84px → 60px so the band reclaims ~24px of vertical space and
the chat body starts higher on screen. Band's left "Agents" cell
matches the new height.
* fix(agents): hide BrowserOS ACP envelope from chat history payloads (TKT-774)
The user-message text persisted on the wire carried two nested
envelopes — the outer `<role>You are BrowserOS…</role>` +
`<user_request>…</user_request>` block from buildBrowserosAcpPrompt
and the inner `## Browser Context` + `<selected_text>` +
`<USER_QUERY>` block from formatUserMessage. PR #856 had unwrapped
only the outer envelope on history reads, so the user bubble in
the agent rail still rendered the inner envelope, and the LLM
chat-service path leaked the wrapper all the way back to the
sidepanel client through AI SDK's stream sync.
Two surgical fixes, both server-only:
1) ACP path (acpx-runtime.ts) — replace unwrapBrowserosAcpPrompt
with a comprehensive unwrapBrowserosAcpUserMessage that strips
both layers and decodes the </>/& escapes the server
applied via escapePromptTagText. Each step is independently
defensive (anchors that don't match are skipped) so the helper
is idempotent and tolerates partial / older / future-shape
envelopes. Applied in userContentToText (history mapper) and
inherited by extractLastUserMessage (listing's lastUserMessage).
2) LLM chat path (chat-service.ts) — split the persisted user
message from the prompt-time copy. session.agent.appendUserMessage
now stores the raw user text; a transient promptUiMessages array
is built with the wrapped (formatUserMessage + context-change
prefix) form and passed to createAgentUIStreamResponse for the
model. onFinish restores the raw form before persisting, so the
user-visible message and any future history reads see only the
user's typed text.
Tests:
- acpx-runtime.test.ts: new dedicated unwrapBrowserosAcpUserMessage
suite covering fully-wrapped messages, only-outer / only-inner
inputs, selected_text blocks with attribute strings, idempotency,
literal user-typed angle-bracket round-trip, and an integration
test that round-trips the real formatUserMessage output through
the unwrap to pin the writer/reader contract.
- chat-service.test.ts: existing 'rebuilds a managed-app session'
test updated for the new behaviour — asserts the persisted user
message is the raw text and the prompt copy passed to the agent
carries the Klavis context-change notice.
* fix(agents): decode entity escapes before stripping inner envelope (TKT-774)
The unwrap was running its inner-envelope strips against the
literal-tag form (<USER_QUERY>, <selected_text>) but the persisted
payload has those tags entity-escaped (<USER_QUERY>,
<selected_text>) — buildBrowserosAcpPrompt runs
escapePromptTagText over the entire formatUserMessage payload
before adding the outer <role>+<user_request> envelope, so the
inner anchors never matched against the on-disk text and the user
was still seeing <USER_QUERY> in /agents/:id/sessions/main/history
responses.
Reorder unwrapBrowserosAcpUserMessage to: outer-strip → decode
entities → inner-strips. Test fixtures updated to reflect the
actual on-wire form (escaped inner tags); the round-trip test
duplicates the escape rule inline so the contract between
buildBrowserosAcpPrompt and the unwrap is pinned end-to-end.
Without a token on actions/checkout, the action falls back to
GITHUB_TOKEN, which has no access to the private internal-docs
repo. Submodule clone fails with "repository not found".
PAT is back on checkout. PR ops still use GITHUB_TOKEN via the
GH_TOKEN env var on the run step. The bot-branch git push uses
the credential helper set up by checkout (the PAT, which has
Contents: Read and write).
Direct push to dev fails the dev ruleset's "Require pull request"
rule. Open a tiny PR from a bot branch and enable auto-merge
(squash, 0 approvals required) instead. No bypass actor needed —
the rule stays strict for everyone, including the bot.
PR ops use GITHUB_TOKEN with explicit pull-requests: write
permission. The cross-repo PAT is only used to rewrite the SSH
submodule URL so internal-docs can be cloned over HTTPS.
Mounts browseros-ai/internal-docs at .internal-docs/, tracking main.
This activates the /document-internal and /ask-internal skills (which
early-exit if the submodule is missing) and lets the sync-internal-docs
workflow start bumping the pointer on its 4-hourly schedule.
Team members: after this lands, run once from a fresh dev pull:
git submodule update --init .internal-docs
* feat(internal-docs): scaffold private docs submodule, skills, sync action
Adds the OSS-side scaffolding for the internal-docs system:
- /document-internal skill — drafts a 1-page feature/architecture/design
doc from the current branch's diff, asks four sharp questions, enforces
voice rules (no em dashes, banned filler words, 60-line cap on feature
notes), then opens a PR to browseros-ai/internal-docs via a tmp clone.
- /ask-internal skill — answers team-internal questions by greping
internal-docs and the codebase, synthesizing with file:line citations,
optionally executing surfaced commands with per-command confirmation,
and drafting a new doc + PR if grep returns nothing useful.
- .github/workflows/sync-internal-docs.yml — every 4 hours, bumps the
submodule pointer on dev directly (no PR; relies on dev branch
protection blocking force-push). Skips silently until the submodule
is configured. Uses url.insteadOf to rewrite the SSH submodule URL
to HTTPS-with-token for the bot, while keeping SSH the local default.
- .claude/skills/document-internal/seeds/ — root README and three
templates (feature-note, architecture-note, design-spec) ready to
copy into the new internal-docs repo on rollout.
Design spec: .llm/superpowers/specs/2026-04-30-internal-docs-submodule-design.md
Manual prereqs (NOT in this PR — handled out-of-band):
1. Create private repo browseros-ai/internal-docs with branch protection on main.
2. Seed it with the contents of .claude/skills/document-internal/seeds/.
3. Create a bot account, mark as bypass actor on dev branch protection.
4. Add INTERNAL_DOCS_SYNC_TOKEN secret with repo + read access to internal-docs.
5. Once internal-docs exists, on a follow-up branch:
git submodule add -b main git@github.com:browseros-ai/internal-docs.git .internal-docs
6. Send the team the one-time init snippet for their existing checkouts:
git submodule update --init .internal-docs
* fix(internal-docs): address Greptile review feedback
- Workflow: rebase onto dev before push to handle non-fast-forward race;
bump fetch-depth 1->50 so rebase has merge-base history.
- Workflow: move INTERNAL_DOCS_SYNC_TOKEN into step env: per Actions
credential-injection pattern, instead of inlining in the script body.
- Skill (BASE bug): suppress git rev-parse stdout so SHA does not get
captured into BASE alongside the literal 'dev'. Was breaking every
downstream git log/diff call.
- Skill (tmp clone): trap 'rm -rf "$TMP" EXIT after mktemp so cleanup
always runs, even if any subsequent step fails.
* feat(agents): durable per-agent chat message queue + composer Stop button
* fix(agents): tighten queue UI — smaller Stop, drop empty indicator, live drain attach
User feedback round 1 on the message-queue UX:
1) The Stop button matched the send/voice mics at h-10 w-10 with a
solid destructive fill, which read as alarming. Shrunk to h-8 w-8,
ghost variant with a soft destructive/10 background, smaller
filled square glyph. Reads as a calm 'stop' affordance instead of
a panic button.
2) The QueueItem's leading <QueueItemIndicator> dot was decorative
only — no state, no interaction. Dropped it from QueuePanel along
with the import; queue items now render as a clean preview line
with the trailing X remove action.
3) When the server drained the queue and started the next turn, the
chat panel didn't pick up the live stream until the user
navigated away and back. The hook's resume effect previously
only fired on agent change, not on listing-observed activeTurnId
change. Surface activeTurnId from useHarnessAgents into
useAgentConversation; effect now re-runs when the id changes,
calls /chat/active, and attaches to the new turn — so a queued
message starts streaming the moment the server drain pops it.
* fix(agents): don't reset streaming state from the resume effect's no-op paths
The Stop button was disappearing while the agent was actively
streaming, even though events were still flowing into the chat. Root
cause: the resume effect's `finally` block reset `streaming`,
`turnIdRef`, and `lastSeqRef` unconditionally — including on the
early-return paths (no active turn, or another mechanism already
owns the stream).
Sequence that triggered it:
1) User sends a message → send() sets streamAbortRef + streaming=true
and starts consuming the SSE.
2) User enqueues another message → enqueue mutation invalidates the
listing query.
3) Listing refetches with the live activeTurnId → the resume
effect re-fires (deps include activeTurnIdDep).
4) attemptResume hits `if (streamAbortRef.current) return` because
send() owns it.
5) The finally clause fires anyway and calls setStreaming(false),
clobbering the live state set by send(). The SSE consumer keeps
running (refs are intact) so text keeps streaming, but the React
flag is wrong, so the Stop button gates off.
Fix: track whether *this* run actually started a stream
(`weStartedStream`). The finally only resets state when it does.
Early-return / no-active-turn paths now leave streaming/turnIdRef/
lastSeqRef alone for whoever does own them.
Also widens the Stop button's visibility (`canStop` prop on
ConversationInput) so it stays steady across the brief gap between
turns when a queue drain is mid-flight; the parent computes
`streaming || activeTurnId !== null || queue.length > 0`. The
visibility widening is independent of the streaming-state fix above
— both are now in place.
* revert: drop canStop widening — Stop only shows while streaming
Reverts the canStop prop on ConversationInput and the OR-with-queue
visibility from AgentCommandConversation. Stop is gated solely on
`streaming` again. Between turns (queue draining) the button stays
hidden — only the actively-streaming turn is interruptible from the
composer, which matches what the user actually expects.
* fix(agents): persist the kicking-off prompt on active turns so the resume placeholder isn't empty
When a queued message drained and started a new turn, the chat
panel's resume effect staged a placeholder turn with userText: ''
because the hook had no way to know what message kicked off the
turn — only the agent-side stream was visible, and the user bubble
above it was blank until the user navigated away and back (at which
point the session record's history loaded normally).
Fix: ActiveTurnRegistry.register now accepts an optional `prompt`
that's stashed on the turn and surfaced via describe() / the
ActiveTurnInfo response. AgentHarnessService.startTurn passes the
incoming message into register. /chat/active returns it. The chat
hook's resume effect uses active.prompt as the placeholder
turn's userText, so the user bubble shows the queued message text
the moment streaming begins. Falls back to '' for older clients
that haven't been refetched yet.
* fix(agents): always release streamAbortRef on resume cleanup, even when cancelled
Greptile P1 follow-up. The previous `weStartedStream` guard correctly
stopped the resume effect's no-op early-returns from clobbering an
in-flight `send()` stream — but it also stopped a *cancelled*
mid-stream resume from clearing its own `streamAbortRef`. When the
cleanup fires (e.g. the 5s listing poll captures a new queue-drain
turn id while the SSE for the prior turn is still finishing), the
next effect run hits the `if (streamAbortRef.current) return` guard
against the now-aborted controller and never reattaches, leaving
`streaming === true` with no live stream until the user navigates
away.
Split the finally block: always release `streamAbortRef` when we
owned the controller (so the next run can take over), but only
reset the streaming flag / turn id / lastSeq on a clean exit (the
new run will set those itself, so resetting on cancel would just
flicker).
* feat(agents): rich-info command center rows + pin/PATCH/adapter-health backbone
Splits AgentRowCard from a 271-line monolith into a shallow tree of
single-responsibility sub-components under `agent-row/`:
AgentTile, AdapterHealthDot, PinToggle, AgentTitleRow,
AgentSparkline, AgentSummaryChips, AgentLastMessage, CwdChip,
AgentTokenSummary, AgentMetaRow, AgentErrorPanel, AgentActions
Adds the data each row consumes:
- pinned: boolean field on AgentDefinition + FileAgentStore.update
+ new PATCH /agents/:id route. useUpdateHarnessAgent mutation
optimistically updates the listing cache so the star flips
instantly; rolls back on error.
- Listing payload extended with lastUserMessage, cwd, tokens
(cumulative + last7d shape — last7d zero-filled until the
activity ledger lands), turnsByDay/failedByDay (zero-filled),
lastError/lastErrorAt, activeTurnId. AcpxRuntime grows a
getRowSnapshot() that reads cwd + cumulative tokens + last user
message from the session record in one pass.
- Adapter health: in-memory AdapterHealthChecker probes
`claude --version` / `codex --version` with a 2s timeout and
caches results for 5 min. /adapters response carries
{ healthy, reason?, checkedAt }. Tile-corner dot exposes the
state via HoverCard; openclaw inherits health from the gateway
snapshot already on the page.
Sub-components are pure: card itself owns no state. Sort order
becomes pinned-first, then recency. HoverCard is the workhorse for
keeping rows compact while exposing depth (full message, token
breakdown, daily turn list, error stack, adapter reason).
* refactor(agents): tighten command-center row design + cut redundant affordances
User feedback round 1:
1) Two green dots on the tile (health + liveness) was confusing. Health
moves out of the tile entirely and surfaces as an inline 'Unavailable'
chip in the model line — silent when the adapter is healthy, with a
warning amber chip + HoverCard reason when not. The tile now shows
one signal: liveness.
2) The last-user-message HoverCard wasn't telegraphing intent. Drop the
HoverCard. The line is informational, italic, with a leading quote
glyph so the row reads like a conversation snippet. To see the full
message the user opens the chat (which is the action they want next
anyway).
3) Resume + Chat were duplicate CTAs. Single primary action per row:
Resume (filled, accent-orange, with a pulsing dot) replaces Chat
when there's an active turn. Both navigate to /agents/:id but the
row tells the user which action they're taking.
4) Tokens weren't visible because the row gated on last7d.requestCount,
which is zero until the activity ledger ships. Switch to lifetime
tokens (which we have today). Drop the '7d stats:' framing — talking
about a window we can't compute would be misleading. The HoverCard
surfaces input/output split + a footnote that per-window stats land
in a follow-up.
5) CWD was rendering the server's own running directory, which is
meaningless to users. Hide it from the row entirely. The cwd field
still rides in the listing payload for future surfaces (chat panel,
debug view) — only the row stops rendering it.
Aesthetic refinements while we're here:
- Whole card carries state, not just the tile: working rows get an
accent-orange tinted border with a soft glow, error rows tint
destructive, idle rows lift on hover.
- Pin star fades in on hover (group-hover) when unpinned and stays
solid amber when pinned — keeps the rail calm by default.
- Tabular-nums on token figures so columns visually align across rows.
- Drop CwdChip and AdapterHealthDot files: no callers left.
* fix(agents): align row title flush-left whether pinned or not
Pin star moved from leading the title to trailing the badges, and
hidden from layout entirely (`hidden group-hover:inline-flex`) when
unpinned. The previous `opacity-0` rule kept the star reserving its
`size-6` slot, which left every unpinned title indented relative to
the model / preview / meta lines underneath it. Title now flushes
left in both states; pinned star stays solid amber so the signal
isn't hidden, and unpinned reveals an outline star on row hover for
the toggle affordance.
* fix(agents): keep pin-toggle slot reserved so row height is constant
Switching the unpinned star from `hidden group-hover:inline-flex`
to `opacity-0 group-hover:opacity-100`. The hidden/show variant was
collapsing the title row's height when the star wasn't rendered,
which made every card below visibly shift on hover. Always rendering
the button (with opacity-only visibility) keeps the row's vertical
metrics constant; the title still flushes left because the slot is
trailing, not leading.
Card hover effect (-translate-y + shadow-md) restored — the layout
shift wasn't coming from the card hover; it was the pin slot
appearing and disappearing.
* fix(agents): quieten row hover — border-tint only, no lift, no shadow
Drop the `-translate-y-px` and `hover:shadow-md` from the row card
plus the working-state inner ring. The translate + shadow grow
combination was visibly noisy as the cursor moved through the rail —
each row 'lifted' as you passed over it. Hover now just tints the
border in accent-orange/30; working and error states keep their
distinct border colours but no inner ring. Card height and shadow
stay constant in every state, so the rail reads as a calm vertical
list of cards.
* feat(home): rich Recent Agents grid + dead-code sweep
The /home Recent Agents grid was a placeholder shell. Every 'rich'
field on the card (lastMessage, lastMessageTimestamp, activitySummary,
currentTool, costUsd) was wired to undefined because AgentCommandHome
called `buildAgentCardData(agents, status?.status, undefined)` — the
dashboard arg has been hard-coded undefined since the harness
migration. Repointing the grid at `useHarnessAgents` + `useAgentAdapters`
gives every card the same enriched data the rail uses.
What the new card shows per agent:
• Adapter glyph tile + liveness dot (working pulses; asleep is
hollow; error is red)
• Name + Working pill (when active)
• Adapter · model · reasoning summary line, with an inline
Unavailable chip + HoverCard reason when the adapter binary
isn't on $PATH
• Italic last-user-message preview (line-clamp-2, leading quote
glyph) — same visual language as the rail
• Footer: 'X ago' + state chip (Asleep / Attention) OR a Resume
button (orange, with pulsing dot) when activeTurnId is non-null
Sort on the home grid is active-turn → recency. Pinning is NOT a
sort key here (and there's no pin indicator on the card) — pinning
belongs to the rail at /agents; the home page is action-oriented
and trusts active-turn + recency to surface the right agent.
Dead code removed:
• useAgentDashboard.ts (96 lines, no callers; subscribed to the
dead /claw/dashboard/stream from the OpenClaw-only era)
• useAgentCardData.ts (the dashboard-merge shim; passed undefined
every call so all enriched fields landed as undefined)
• AgentCard.tsx (AgentCardExpanded replaced by HomeAgentCard;
AgentCardCompact had no callers — the dock's compact mode was
never used)
• AgentCardData interface dropped from lib/agent-conversations/
types.ts; the new card consumes HarnessAgent directly
Visual language stays continuous between rail and grid: same
<AgentTile>, same <LivenessDot>, same italic-quote message
preview, same orange Resume button with a pulsing dot.
* chore(eval): instrument server startup to root-cause dev CI health-check timeouts
Three diagnostics + one config swap to investigate why the eval-weekly
workflow has been failing on dev since 2026-04-25 with "Server health
check timed out" (every worker, every retry).
Background:
- Last successful weekly eval on dev: 2026-04-18 (sha f5a2b73)
- Since then, ~30 server commits landed including Lima/VM runtime,
OpenClaw service, ACL system, ACP SDK — 108 server files changed,
~13K LOC added.
- Server process spawns cleanly in CI (PID logged) but never binds
/health within the 30s eval-side timeout. Static analysis finds no
obvious blocker; we need runtime evidence.
Changes:
1. apps/server/package.json — add `start:ci` script (no `--watch`).
The default `start` uses `bun --watch` which forks a child process
that watches every file in the import graph. Dev's graph is ~108
files larger than main's; on a cold CI runner the watcher setup is a
plausible source of multi-second startup overhead.
2. apps/eval/src/runner/browseros-app-manager.ts:
- Use `start:ci` when `process.env.CI` is set (true on
GitHub-hosted runners by default), else `start`.
- Capture per-worker server stderr to /tmp/browseros-server-logs/
instead of ignoring it. Without this we have no visibility into
why the server is hung pre-/health.
- Bump SERVER_HEALTH_TIMEOUT_MS 30s -> 90s. Dev's larger module
graph may simply need more cold-start time on CI.
3. .github/workflows/eval-weekly.yml — upload the server logs dir as a
workflow artifact (always, not just on success) so we can post-mortem
any startup failure on the next run.
4. configs/agisdk-real-smoke.json — swap K2.5 from OpenRouter ->
Fireworks (bypasses the OpenRouter per-key spend cap that has been
eating recent runs) and drop num_workers 10 -> 4 (well below the
Fireworks per-account TPM threshold that overwhelmed the original
2026-04-23 run).
Plan: trigger the eval-weekly workflow on this branch with the agisdk
config and observe (a) whether it gets past server startup, and
(b) if it doesn't, what the captured server stderr says.
* fix(eval): capture stdout too — pino logger writes to stdout, not stderr
Previous diagnostic patch only redirected stderr; the captured per-worker
log files came back as 0 bytes because the server uses pino which writes
all log output to stdout (fd 1), not stderr (fd 2). Capture both into
the same file.
* fix(server): catch sync throw from OpenClaw constructor on Linux
The container runtime constructor in OpenClawService throws synchronously
on non-darwin platforms, e.g. GitHub Actions Linux runners. The existing
.catch() on tryAutoStart() only handles async throws inside auto-start —
the sync throw from configureOpenClawService(...) itself propagates up
through Application.start() and crashes the process via index.ts:48
(process.exit(EXIT_CODES.GENERAL_ERROR)).
This is what's been killing dev's eval-weekly CI: the server crashes in
milliseconds, the eval client polls /health, gets nothing, times out.
Fix: wrap the configureOpenClawService call in try/catch matching the
existing .catch() intent (best-effort, don't crash). Server continues
without OpenClaw on platforms where it can't initialize.
Verified by reading captured server stdout from run 25123195126:
Failed to start server: error: browseros-vm currently supports macOS only
at buildContainerRuntime (container-runtime-factory.ts:54:11)
at new OpenClawService (openclaw-service.ts:652:15)
at configureOpenClawService (openclaw-service.ts:1527:19)
at start (main.ts:127:5)
* fix(server): defer OpenClaw chat client port lookup to request time
apps/server/src/api/server.ts:149 was calling getOpenClawService().getPort()
synchronously when constructing the OpenClawGatewayChatClient inside the
createHttpServer object literal. On non-darwin platforms this throws via
the OpenClawService constructor → buildContainerRuntime, escaping the
try/catch added in 5cf7b765 (which only protected the configureOpenClawService
call further down in main.ts).
Every other getOpenClawService() reference in server.ts is already wrapped
in an arrow function. This was the lone holdout. Make it lazy too: change
the chat client constructor to take getHostPort: () => number instead of
hostPort: number, evaluate it inside streamTurn at request time. Behavior
on darwin is unchanged.
This unblocks dev's eval-weekly CI on Linux runners where OpenClaw isn't
available — the chat endpoint isn't exercised by the eval, so a deferred
throw is acceptable.
* fix(server): allow Linux to skip OpenClaw via BROWSEROS_SKIP_OPENCLAW=1
Earlier surgical fixes (try/catch in main.ts, lazy chat client port) didn't
unblock dev's Linux CI — same throw kept reproducing. Whether this is bun
caching stale stack frames or a missed eager call site, the safer move is
to fix it at the root: make buildContainerRuntime never throw on Linux
when the runner has explicitly opted out.
Adds BROWSEROS_SKIP_OPENCLAW env check alongside the existing NODE_ENV=test
escape hatch in container-runtime-factory.ts. When set, returns the existing
UnsupportedPlatformTestRuntime stub — server boots normally, /health binds,
any actual OpenClaw API call still fails loudly at request time.
eval-weekly.yml sets the flag for the Linux runner. Darwin behavior and
non-CI Linux behavior unchanged (without the flag they still throw).
* feat(eval): align Clado action executor with new endpoint contract
David Shan shared the updated Clado BrowserOS Action Model spec.
Changes to match it:
- Bump endpoint URL + model id to the 000159-merged checkpoint
(clado-ai--clado-browseros-action-000159-merged-actionmod-f4a6ef)
in browseros-oe-clado-weekly.json and the README example.
- CLADO_REQUEST_TIMEOUT_MS 120s → 360s. Cold start can take ~5 min;
the 2-min ceiling was failing every cold-start request.
- Treat HTTP 200 with action=null / parse_error as an INVALID step
instead of aborting the executor loop. The model can self-correct
on the next call. Cap consecutive parse failures at 3 to avoid
infinite loops.
- Capture final_answer from end actions. Surface it in the observation
back to the orchestrator so its task answer can use the model's
declared result.
- Add macOS Cmd-* key mappings (M-a, M-c, M-v, M-x → Meta+A/C/V/X).
- Switch screenshot format from webp → png to match the documented
"PNG or JPEG" contract.
* chore(eval): refresh test-clado-api script for new Clado contract
Updated the local smoke-test to match the new Clado endpoint and
response contract:
- New action + health URLs (000159-merged checkpoint).
- Drop the grounding-model branch (orchestrator-executor doesn't
use it; the README David shared only documents the action model).
- Health-check waits up to 6 minutes for cold start with a 30s
warning so the operator knows it's spinning up.
- Print every documented response field (action, x/y, text, key,
direction, amount, drag start/end, time, final_answer, thinking,
parse_error, inference_time_seconds).
- Three-step run that exercises a click, a typing continuation
with formatted history, and an end+final_answer probe.
* chore(eval): point clado weekly config at agisdk-real
Switches the orchestrator-executor + Clado weekly config to run on the
AGI SDK / REAL Bench task set with the deterministic agisdk_state_diff
grader. Matches the orchestrator-executor smoke target (Fireworks K2.5
orchestrator + Clado action executor) we want to track week-over-week.
* chore(eval): run clado weekly headless
Default to headless so the weekly job (and local repros) don't pop ten
visible Chrome windows. Set headless=false locally if you need to watch
a worker.
* fix(eval): address Greptile P1+P2 on server log fd handling
P1: openSync was outside the mkdirSync try/catch, so a swallowed mkdir
failure (e.g. unwritable custom BROWSEROS_SERVER_LOG_DIR) would leave the
log directory missing and crash the server spawn with ENOENT. Move openSync
into the same try block; fall back to /dev/null so spawn always succeeds.
P2: the log fd was opened on every server start but never closed. Each
restart attempt leaked one fd across all workers — over a long eval run
that could exhaust the process fd limit. Track the fd on the manager and
closeSync it in killApp() right after the server process exits (the child's
dup keeps the file open until it exits, so we don't truncate output).
* feat(agent): list created agents in sidepanel target catalog
* feat(agent): show created agents in sidepanel selector
* feat(server): add sidepanel chat route for created agents
* feat(agent): route sidepanel agent sends by agent id
* chore(agent): retire virtual sidepanel acp targets
* fix: address review feedback for PR #865
* chore(eval): pin agisdk version to prevent silent dataset drift
`pip install agisdk` previously fetched whatever version pip resolved at
CI time. If agisdk publishes a new version with changed task definitions
or grader behavior, the weekly eval silently shifts under our feet —
making "did the score move because of code or data?" unanswerable.
Pin to agisdk==0.3.5 (the version we currently develop against). Bump
intentionally with a documented re-baseline run.
* fix(eval): exclude 4 more tasks identified by 8-trial never-passing audit
After 8 trials across K2.5 + Opus 4.6 (Phase 1 and Phase 2), 5 tasks
never passed. Per-task root-cause investigation via parallel deep-dive
subagents flagged 4 of them as fundamentally unfixable in the eval
pipeline as it stands; the 5th (`dashdish-5`) is a prompt-rule fix
that stays in.
- gocalendar-7: goal/grader contradiction. Goal says "move event to
July 19, 10 AM"; grader expects `eventsDiff.updated.*.start ==
"2024-07-18T17:00Z"` (= July 18, 10 AM PDT — same day, 1 hour shift).
Even after the Phase 2 HTML5 dnd dispatch fix correctly populates
`eventsDiff.updated`, the values are July 19 (matching the goal),
which the grader rejects.
- staynb-5: grader hardcodes literal `'Oct 13 2025'` and `'Oct 23 2025'`
year strings. The staynb date picker interprets bare "Oct 13" as the
most-recent-past instance (currently 2024 since today is 2026), not
2025. No agent can produce a persisted date string containing 2025.
- staynb-9: under-specified task. Goal says "maximum number of guests
supported"; grader requires the very specific string "32 Guests, 16
Infants" — encoding UI knowledge (Adults+Children=Guests display,
Infants render separately, per-category cap=16, Pets excluded) that
isn't in the prompt. Even Opus 4.6 stopped at 16 across 3 trials.
- opendining-3: grader requires `contains(booking.date, '2024-07-20')`
but the React-controlled date textbox flakily no-ops on `fill`. 3/8
trial pass rate is essentially coin-flip noise driven by tool-fidelity
variance rather than agent capability. Removing to reduce score noise;
Phase 2 fill post-validate warning helps when it does work, but the
task's signal-to-noise is too low for the eval set.
Dataset goes from 40 -> 36 tasks. Total EXCLUDED_TASKS now 11 entries.
Validated by 8-trial pass-record audit; deep-dive notes saved to
plans/audits/.
* feat(agents): decouple chat turn lifecycle from SSE response
Introduce a per-process ActiveTurnRegistry that owns each agent turn's
lifecycle and a ring-buffered event stream, so chat tabs that close,
refresh, or navigate away no longer cancel the in-flight turn. New
endpoints:
POST /agents/:id/chat starts a turn (now returns 409 when
one is already running, with the
active turnId for attaching)
GET /agents/:id/chat/active reports the running turn for a UI
that just mounted
GET /agents/:id/chat/stream subscribes to a turn; supports
Last-Event-ID resume via per-event
seq ids
POST /agents/:id/chat/cancel explicit cancel — fetch abort no
longer affects the underlying turn
The chat hook now captures X-Turn-Id, tracks lastSeq from SSE id lines,
re-attaches on mount when the server still has an active turn, and
routes Stop through the cancel endpoint. The runtime call uses the
registry's per-turn AbortController instead of the HTTP request signal,
which is the core decoupling that lets turns outlive their initiator.
* feat(agents): add ActiveTurnRegistry primitive backing the new chat lifecycle
The previous commit referenced these files in tests and the harness
service but global gitignore swallowed them on the first add.
The registry owns the per-turn ring buffer (drop-oldest, terminal frame
preserved), the per-turn AbortController, and subscriber fan-out used
by /chat/stream resume.
* feat(agents): redesign agent rail to match the rest of the app
Reshape the `/agents` page so it reads as a sibling of `/scheduled`
and `/soul` and adapts to the multi-adapter world (OpenClaw, Claude
Code, Codex). Visual scaffolding only in this commit — per-agent
liveness state ships as `unknown` until the server-side activity
tracker lands.
- New `AgentsHeader` mirrors `SoulHeader`/`ScheduledTasksHeader`:
accent bot tile, title, descriptive subtitle, "+ New Agent"
button. Replaces the loose top toolbar that mixed page-level and
OpenClaw-lifecycle controls.
- New `GatewayStatusBar` collects the OpenClaw lifecycle pills
(running, control plane connected) plus the Terminal/Refresh
affordances into a single labeled bar that only renders when the
gateway is running AND there is at least one OpenClaw agent in
the merged list.
- New `AgentRowCard` per agent: adapter tile with liveness dot,
name + status badge, adapter/model/reasoning chips, last-used
relative time + truncated workspace path, primary "Chat" button,
overflow menu (Copy id / Rename* / Reset history* / Delete).
Rename + Reset are disabled with "coming soon" tooltips until
the corresponding endpoints ship; Delete is hidden for the
protected `main` agent.
- New `AgentsEmptyState` mirrors the scheduled-tasks empty card.
- New `AdapterIcon` + `LivenessDot` + `agent-display.helpers.ts`
keep the row card focused on layout; helpers cover display name
fallbacks for legacy `oc-<uuid>` titles, workspace label rules,
and a tiny relative-time formatter.
- `AgentList` now sorts by `lastUsedAt` desc with `null`s falling
to the bottom; the gateway's `main` agent is pinned to the top
only while it has zero turns so a fresh install has an obvious
starting point. The list also threads a per-agent activity map
so future commits can light up working/idle/asleep without
reshuffling the API.
- `AgentsPage` swaps to the standard `fade-in slide-in-from-bottom-5
animate-in space-y-6 duration-500` shell and threads a
`harnessAgentLookup` Map down to the row card so adapter chips
and reasoning effort render correctly without a re-fetch.
* feat(agents): wire per-agent liveness end-to-end into the rail
Closes the placeholder `unknown` dot from the redesign's first
commit. The rail now shows real working / idle / asleep / error
states per agent, with `lastUsedAt` driving the recency sort.
Server side:
- `AgentHarnessService` keeps an in-memory activity tracker keyed
by agentId. `notifyTurnStarted` flips an entry to `working`,
`notifyTurnEnded({ok})` either drops it (success) or pins it to
`error` (failure / error event).
- `send()` wraps the runtime stream so the lifecycle hook fires
exactly once on natural close, error event, downstream cancel,
or thrown setup. The runtime itself stays unchanged — fork is
contained at the harness layer.
- New `listAgentsWithActivity()` method enriches every agent with
`{ status, lastUsedAt }`. lastUsedAt is read from the acpx
session record's last persisted item via `runtime.getHistory`,
so it survives server restart even though the activity map
doesn't.
- Status derivation: `working`/`error` take precedence; otherwise
timestamp-based — `idle` until 15 min of silence, then `asleep`.
Never-used agents resolve to `idle` (asleep implies "was active,
went quiet").
- `GET /agents` returns the enriched shape.
Client side:
- `HarnessAgent` UI type extended with optional `status` +
`lastUsedAt` so older deployments still typecheck.
- `useHarnessAgents` flips on `refetchInterval: 5_000` (with
`refetchIntervalInBackground: false` so hidden tabs go quiet)
so the per-row dots and last-used copy stay fresh without a
websocket.
- `AgentsPage` builds an activity map from the harness listing
response and threads it into `AgentList` → `AgentRowCard`. The
sort by `lastUsedAt` desc (already in the row card) now has
real data to operate on.
Tests:
- New `marks an agent working while a turn streams and idle once
it ends` exercises the wrap; uses a held upstream stream so
the in-flight `working` state is observable.
- New `flips to error when a turn emits an error event`.
* fix(agents): dedupe agent rail when /claw/agents and /agents share an id
The agents page was rendering every OpenClaw agent twice — once from
the legacy `/claw/agents` listing (`useOpenClawAgents`) and once from
the harness `/agents` listing (`useHarnessAgents`). Post Step 9
backfill the harness store contains every gateway agent, so the
overlap is the rule, not the exception.
Mirror the dedup the chat-panel layout already does: when a gateway
agent's id appears in the harness listing, drop the legacy entry and
keep the harness one (it has adapter/model/reasoning/status/lastUsedAt
the chat path actually consumes).
* feat(agents): swap GatewayStatusBar refresh icon for a Restart Gateway button + tooltips
The manual refresh became redundant once `useHarnessAgents` and
`useOpenClawStatus` started polling on a 5s interval — every visible
field self-refreshes within seconds. The previous AgentsPageHeader
had a real Restart action that the redesign dropped; reinstate it on
the bar so a wedged gateway is one click away again.
- GatewayStatusBar: dropped the `RotateCcw` refresh icon and the
`onRefresh` prop. Added `onRestart` + `actionInProgress` props;
the button shows a spinner while a gateway lifecycle mutation is
in flight.
- Both Terminal and Restart Gateway buttons get tooltips explaining
what they do — Terminal as a power-user shell escape hatch,
Restart for unsticking a wedged gateway or after manual config
edits.
- AgentsPage: drop the now-unused `refreshAll` helper and the
`refetchStatus`/`refetchAdapters`/`refetchOpenClawAgents`
destructures it depended on. Wire `restartOpenClaw` (already
pulled from `useOpenClawMutations`) through
`runWithPageErrorHandling` like the legacy header did.
* feat(agents): consolidate gateway status into the /agents listing
Folds the gateway lifecycle snapshot into the harness listing so the
agents page polls one endpoint instead of two. Drops the dead
`/claw/status` call from the command center while keeping every UI
affordance the page already shipped (Running / Control plane
connected pills, GatewayStateCards setup/start prompts,
ControlPlaneAlert for degraded states).
Server side:
- `OpenClawProvisioner.getStatus()` (optional) — when wired, returns
the same `GatewayStatusSnapshot` shape `/claw/status` does.
- `AgentHarnessService.getGatewayStatus()` — best-effort wrapper
around the provisioner method; logs and swallows errors so a
transient gateway issue doesn't 500 the listing endpoint.
- `GET /agents` now returns `{agents, gateway}` in a single
`Promise.all`. Both fields are independent — agents enrichment
succeeds even if the gateway snapshot is null.
- `server.ts` wires `getOpenClawService().getStatus()` into the
provisioner accessor object alongside `createAgent` /
`removeAgent` / `listAgents`.
Client side:
- `useHarnessAgents` returns `{harnessAgents, gateway}` (plus the
legacy `agents` mapping). Same 5s `refetchInterval` as before —
one round-trip drives the per-row liveness AND the gateway pills.
- `AgentsPage` drops `useOpenClawStatus` entirely; `status` comes
from the harness query. Loader + error/lifecycle plumbing
rewired around the harness query's loading/error.
- `agents-page-utils.getInlineError` and `getAgentsLoading` lose
the now-redundant `statusError` / `statusLoading` /
`openClawAgentsEnabled` params.
The chat-panel layout (`agent-command-layout.tsx`) still consumes
`useOpenClawStatus(5000)` for now — left intact per the user's "only
the command center" scope. Folding that one in is a separate,
smaller pass once we're sure no regression slipped here.
* test(agents): teach the route fake service about the new listing shape
PR #861 CI surfaced two failures in tests/api/routes/agents.test.ts:
both call \`GET /agents\` and the route handler now invokes
\`service.listAgentsWithActivity()\` + \`service.getGatewayStatus()\`
which the fake created here didn't implement. Add both methods to
the fake (returning idle / null) and update the empty-list assertion
to expect the new \`{agents, gateway}\` envelope.
* chore(acp): smoke-test ACP capabilities against running gateway
Adds apps/server/scripts/acp-smoke.ts which spawns `openclaw acp`
inside the gateway container and exercises every method we plan to
depend on: initialize, newSession, prompt (text + image), cancel,
listSessions, loadSession.
SDK pinned to 0.19.1 (Bun's minimum-release-age policy blocks 0.20+
which were released < 7 days ago).
Findings (full notes in plan outcomes):
- promptCapabilities advertises image:true but the model does NOT see
image bytes — silently dropped at the bridge.
- sessionCapabilities advertises {list:{}} but session/list throws
"Method not found": stale capability advertising.
- loadSession works; replays user/assistant/thought text and
session_info/usage/commands updates. No tool_call replay, as
documented.
- cancel works end-to-end: stopReason=cancelled.
- closeSession/resumeSession are not on ClientSideConnection in
0.19.1; kill child to close, use loadSession for rebind.
Plan revisions triggered by spike are recorded in
plans/browseros-ai/BrowserOS/features/2026-04-28-2310-claude-code-acp-implementation-roadmap.md.
* chore(acp): re-run smoke on SDK 0.21.0 and add mode/config/auth scenarios
After bypassing Bun's minimum-release-age and upgrading the SDK to
0.21.0, restore the previously-skipped resume/close paths and add
three new scenarios: mode (setSessionMode), config (setSessionConfigOption,
correct configId field), and auth (authenticate noop).
Findings, all bridge-side (independent of SDK):
- session/list, session/resume, session/close all throw -32601 on
OpenClaw 2026.4.12 — capability advertising is stale.
- Image content blocks silently dropped; model never sees the bytes.
- setSessionMode and setSessionConfigOption work; latter requires
`configId` (not `optionId`) per the schema.
- loadSession replays user/assistant/thought text + session_info +
usage + available_commands; no tool_call replay (documented).
- authenticate is a noop on OpenClaw (no authMethods advertised).
Plan outcomes updated with full method-support matrix.
* chore(deps): promote @agentclientprotocol/sdk to a runtime dependency
The smoke script in apps/server/scripts/acp-smoke.ts used the SDK as
devDependency. The upcoming ACP bridge (apps/server/src/api/services/acp/)
needs it at runtime, not just for tooling. Move the entry from
devDependencies to dependencies, alphabetically first under @a*.
Pinned to 0.21.0 — same version the smoke script validated against.
README gains a small Dependencies note pointing at the future bridge
location.
No code changes yet. The bridge wiring lands in subsequent commits.
* fix(openclaw): wire LlmProvider.supportsImages through to OpenClaw model config
When BrowserOS sets up a custom OpenAI-compat provider on the gateway,
the agent UI's "Supports Image" flag (LlmProviderConfig.supportsImages)
was being dropped on the floor. As a result the persisted model entry
had no `input` field, OpenClaw defaulted it to ['text'], and image_url
content parts were silently stripped before the model saw them.
Fix:
- Extend OpenClawSetupInput / OpenClawAgentMutationInput on the agent
side (useOpenClaw.ts) and the route body schema + SetupInput +
createAgent input on the server side with `supportsImages?: boolean`.
- AgentsPage forwards `llmOption?.supportsImages` from the selected
LlmProviderConfig in both handleSetup and handleCreate.
- provider-map.resolveSupportedOpenClawProvider emits
`input: ['text', 'image']` on the model entry when the flag is
truthy; otherwise emits the explicit `['text']` so the value is
always pinned (avoids relying on OpenClaw's implicit default).
- applyBrowserosConfig adds `tools.media.image.enabled = true` to the
bootstrap batch so the gateway's image-understanding pipeline is
always wired up — per-model `input` still gates which models see
images, this just enables the global path.
ACP image content blocks are still dropped by the OpenClaw bridge —
that's a separate bridge bug, not addressed here. This commit
restores image support for the OpenAI-compat /v1/chat/completions
path that the upcoming ACP chat panel will use as a carve-out for
image-bearing prompts.
Existing custom-provider configs are NOT auto-migrated; users will
re-acquire image support either by re-running setup or by editing
their model entries' `input` field manually. A migration pass for
legacy installs is not in scope for this commit because the
"supportsImages" intent isn't recoverable from the persisted config
alone — the source of truth is the LlmProvider record on the agent
side.
* feat(agents): add OpenClaw to AgentAdapter union and catalog
Extends AgentAdapter to 'claude' | 'codex' | 'openclaw' and adds the
OpenClaw entry to AGENT_ADAPTER_CATALOG. The new entry has:
- defaultModelId: 'default' — OpenClaw's ACP bridge does not surface
per-session model selection (verified during the ACP spike), so
models live in the OpenClawService config, not in the adapter
catalog. AgentDefinition.modelId carries the gateway-side model
name for display only.
- models: [] — empty list signals "no per-session model picker" in
the UI; isSupportedAgentModel('openclaw', undefined|'default')
returns true via the existing fallback path.
- reasoningEfforts mirror OpenClaw's session-level `thought_level`
config option (off / minimal / low / medium / high / adaptive).
Also extends:
- isAgentAdapter type guard recognizes 'openclaw'
- HarnessAgentAdapter union on the extension side
- agents.test.ts createAgent fake type
- agent-catalog.test.ts asserts on the new entry, empty model list
passthrough behavior, and OpenClaw's reasoning effort set
Lockfile delta is the workspace SDK pin reconciling 0.20.0 (taken
from dev's lock) up to our package.json's 0.21.0 (added in
c1d987ea). acpx still uses 0.20.0 transitively — both are present.
No runtime wiring yet — the registry override and AcpxRuntime
plumbing land in subsequent commits.
* feat(agents): plumb OpenClaw gateway accessors into AcpxRuntime
Adds an optional `openclawGateway` accessor to AcpxRuntime so the
upcoming registry override (Step 4) can spawn `openclaw acp` inside
the gateway container with the right port, token, and container/VM
identity. All accessors are getter-shaped so values stay live across
gateway restarts (port can change, token can rotate).
The accessor is threaded:
server.ts → createAgentRoutes → AgentHarnessService → AcpxRuntime
↘ sidepanel lazy AcpxRuntime
Also adds OpenClawService.getGatewayToken() returning the in-memory
token string. We pass it via OPENCLAW_GATEWAY_TOKEN env var on the
spawn (per OpenClaw's documented env-var precedence) instead of via
`--token` flag (which leaks to ps aux) or `--token-file` path (no
discrete token file lives inside the container — the token is nested
inside openclaw.json).
Wiring is dormant — the registry override that consumes these
accessors lands in Step 4. Typecheck + existing acpx/harness/routes
tests pass unchanged.
* refactor(agents): scrub local plan-step references from code comments
Replaces forward-looking comments that referenced internal plan
steps (e.g. "Step 4 wires this into…") with comments that justify
the code on its own merits. Plan files live locally on the
contributor's machine, so cross-references are noise to the rest of
the project.
No behavior change.
* feat(agents): spawn openclaw ACP adapter inside the gateway container
When the harness resolves the `openclaw` adapter, it now returns a
command that runs `openclaw acp` inside the bundled gateway container
via `limactl shell <vm> -- nerdctl exec -i ... openclaw acp --url
ws://127.0.0.1:<port>`. This reuses the openclaw binary already
installed alongside the gateway — no host-side openclaw install is
required.
Auth: the gateway token is injected via OPENCLAW_GATEWAY_TOKEN on
the container exec rather than `--token` on the openclaw CLI, so
the secret never appears in `ps aux`.
Banner output: OPENCLAW_HIDE_BANNER=1 and OPENCLAW_SUPPRESS_NOTES=1
keep stdout JSON-RPC-clean.
LIMA_HOME: prefixed via `env LIMA_HOME=<path>` on the resolved
command so the spawned limactl finds the BrowserOS-owned VM (the
server doesn't set LIMA_HOME on its own process env).
When the gateway accessor is absent, falls through to acpx's
built-in openclaw adapter which assumes a host-side install — that
branch will fail at spawn time with a descriptive error.
Verified end-to-end via the existing acp-smoke script during the
Step 0 spike.
* feat(agents): dual-create OpenClaw harness agents on the gateway
When the harness creates an `openclaw` adapter agent, it now also
provisions a matching agent on the OpenClaw gateway via the existing
CLI path (OpenClawService.createAgent). Symmetric on delete: gateway
removeAgent runs alongside the harness-store delete.
- Adds an OpenClawProvisioner interface (decoupled from OpenClawService
for testability) and injects it through AgentHarnessService.
- createAgent rolls back the harness record if gateway provisioning
fails; deleteAgent tolerates gateway-side failures so harness
identity stays consistent with the user-facing UI.
- New OpenClawProvisionerUnavailableError surfaces as a 503 when an
openclaw create request lands on a harness with no provisioner
wired in (instead of a generic 500).
- FileAgentStore mints openclaw agent ids with an 'oc-' prefix so
the id satisfies the gateway's `^[a-z][a-z0-9-]*$` agent name
pattern. Other adapters keep raw UUIDs to preserve compatibility.
- POST /agents body schema accepts providerType / providerName /
baseUrl / apiKey / supportsImages, forwarded to the provisioner
when adapter='openclaw'.
The agents-page UI still routes openclaw create through the legacy
/claw/agents flow; switching that path to the harness is a separate
UI cutover.
Tests cover: gateway dual-create on success, rollback on gateway
failure, 503 when provisioner is missing, and tolerant delete on
gateway-side failure.
* fix(agents): skip catalog model validation for OpenClaw adapter
OpenClaw agents resolve their model from the gateway-side provider
config (set at agent-create time via OpenClawService) rather than
from the harness catalog, which has an empty `models: []` entry by
design. Without this carve-out, every OpenClaw create body fails
parsing with "Invalid modelId" because no concrete model id can
satisfy isSupportedAgentModel('openclaw', ...).
The reasoning-effort check still runs against the catalog (those
values map directly to OpenClaw's session `thought_level` config
option).
* fix(agents): pass --session to openclaw bridge so newSession routes correctly
acpx's AcpClient.createSession calls connection.newSession with cwd
and mcpServers but never forwards the sessionKey. Without it, the
openclaw bridge falls back to a synthetic acp:<uuid> session that
doesn't resolve to any provisioned gateway agent — every harness
chat returns a generic "Internal error" from -32603.
Fix: bake `--session <key>` into the resolved spawn command. The
bridge then uses that as the default session key for any newSession
the bridge receives, routing the turn to the matching gateway agent.
Per-session keying means each openclaw agent gets its own
AcpxCoreRuntime instance (cached by sessionKey on top of the
existing cwd/permissionMode key). This adds one extra runtime per
active openclaw session — claude/codex are unaffected.
Test asserts the resolved command includes the right --session arg.
* fix(agents): suppress BrowserOS MCP for openclaw bridge
The openclaw ACP bridge rejects newSession when mcpServers is non-empty
because its provider tooling comes from the gateway, not from ACP-side
MCP servers. Forwarding the BrowserOS HTTP MCP made every harness chat
fail with a JSON-RPC -32603 "Internal error" before the session was even
opened. Claude/codex still need the BrowserOS MCP for browser tooling,
so the carve-out is keyed off whether the runtime is for an openclaw
session.
* feat(agents): route OpenClaw chat through the harness behind a flag
Adds the `feature.useAcpxForOpenClaw` extension storage flag. When on,
OpenClaw agents in the agent-command chat panel use the harness
/agents/<id>/chat SSE and harness history hook instead of the legacy
/claw/agents/<id>/chat. When off, behavior is unchanged.
Also dedupes the agent rail when the same id appears in both stores
(dual-created agents from /claw/agents and /agents) by preferring the
harness entry — without this, every dual-created OpenClaw agent shows
up twice after Step 5.
Image attachments are temporarily disabled when the harness path is
active; the carve-out lands in the next commit.
* fix(agents): keep legacy OpenClaw agents on ClawChat
The previous commit's flag-gated branch routed every `source='openclaw'`
agent through `/agents/<id>/chat` when the flag was on, but the layout
dedup means the only agents that ever reach that branch are legacy
gateway-only entries (`main`, orphan agents from rolled-back creates) —
which by definition have no harness record, so the harness path 404s
and chat is unusable. Source is the only routing signal again: harness
agents go through the harness, legacy agents stay on ClawChat. The
storage flag stays for Step 9/10's migration story.
* feat(agents): expose OpenClaw in sidepanel and route through gateway main
`buildSidepanelChatTargets` now emits a single default ACP target for
adapters with no per-session model picker (OpenClaw, whose model is
configured on the gateway-side agent). Without this, OpenClaw never
appeared in the sidepanel target picker because the catalog entry has
`models: []`.
Sidepanel sessions don't have a dedicated provisioned gateway agent.
The openclaw bridge `--session` flag previously got the raw sidepanel
key (`sidepanel:<convId>:openclaw:...`), which doesn't match any
gateway agent — newSession was accepted but every prompt hung
forever. The bridge command now rewrites non-harness session keys
onto the always-present `main` gateway agent, encoding the original
key as a channel suffix to keep state segregated per conversation.
Verified end-to-end via curl: sidepanel openclaw chat streams
`text-delta` + `finish: stop`.
* feat(agents): backfill harness records for legacy gateway agents
Reframes Step 9 of the OpenClaw-on-acpx migration. The plan's literal
Step 9 (route OpenClaw history through the harness when the flag is on)
was already a no-op after the Step 6 walkback — history is routed by
source today. The actual blocker for Steps 10–13 was that legacy
gateway-only agents (e.g. `main`, orphans from rolled-back creates) had
no harness record, so they could never migrate to the harness path
without breaking chat.
`AgentHarnessService.reconcileWithGateway()` now lists every gateway
agent and upserts a matching harness record for any that are missing.
The pass runs lazily on first `listAgents()` call (memoized on success,
retried on failure so a gateway-down boot doesn't permanently disable
backfill). Verified end-to-end: the legacy `agent` agent now streams
`text_delta` + `done(end_turn)` through `/agents/agent/chat`, with the
bridge resolving to the gateway's `agent` record via the existing
`agent:<name>:main` session-key format.
After this, every OpenClaw agent surfaces as `source='agent-harness'`
post-dedup, the legacy `useClawChatHistory` hook becomes unreachable
for OpenClaw, and Steps 11–13 (delete legacy chat/history paths) are
unblocked.
* fix(agents): drop duplicate OpenClaw entry from NewAgentDialog adapter list
The adapter Select hardcoded an `<SelectItem value="openclaw">OpenClaw</SelectItem>`
on top of iterating `adapters`, which now includes OpenClaw post the
catalog change. The dropdown rendered "OpenClaw" twice — once at the
top, once at the bottom of the list. The literal was a pre-catalog
artifact; removing it leaves a single OpenClaw entry sourced from the
catalog. Routing into `handleOpenClawCreate` is unchanged because
the value (`'openclaw'`) is identical either way.
* fix(agents): always reconcile harness with gateway on list, just dedupe concurrent calls
Memoizing the first successful reconcile meant new gateway agents (created
via the legacy /claw/agents path or out-of-band CLI) never appeared in the
harness until server restart. The Promise now serves as a concurrent-call
dedupe only — cleared on settle — so every listAgents call picks up the
current gateway state. Reconcile is one cheap idempotent CLI call.
* chore(agents): remove dormant useAcpxForOpenClaw flag
The flag was scaffolded in Step 6 but its routing effect was walked
back the same day after it broke chat for legacy gateway-only agents.
After the Step 9 backfill, every OpenClaw agent has a harness record
and routes through the harness path purely from `source='agent-harness'`
— no flag is consulted anywhere. Remove the dead storage item, hook,
and stale comment.
* refactor(agents): drop legacy /claw/agents/:id/history endpoint
The harness /agents/:id/sessions/main/history endpoint replaced this
once every OpenClaw agent got a harness record (Step 9 backfill).
Routing is fully source-driven now, so the UI's useClawChatHistory
hook is never enabled today — verified live: legacy URL returns 404,
harness history hydrates correctly for the same agent.
Removes the GET /claw/agents/:id/history route, OpenClawService's
getAgentHistoryPage method plus its cursor/limit helpers and the
history-only types it owned (BrowserOSOpenClawHistoryPageResponse,
HistoryPageInput, normalizeHistoryLimit, encodeHistoryCursor,
decodeHistoryCursor, jsonlEventsToHistoryItems), and the route +
service tests that covered the dropped endpoint.
OpenClawJsonlReader stays alive — still feeds /claw/dashboard,
/claw/agents/:id/sessions, and the boot-time clawSession seed.
Removing those is its own follow-up since the dashboard would need
a harness-side replacement first.
* feat(agents): wire image attachments through the harness ACP path
Composer attachments now flow into the ACP `prompt` request as
spec-compliant `image` content blocks alongside the user's text. End
to end:
composer → chatWithHarnessAgent({attachments}) →
POST /agents/:id/chat {message, attachments} →
parseChatBody decodes data: URLs to {mediaType, base64} →
AgentHarnessService.send forwards →
AcpxRuntime.send forwards →
acpx startTurn({attachments}) → ACP image blocks
UI no longer disables the attach button on harness agents — the
gating was just a placeholder before this commit landed. Verified
end to end with a 1×1 red PNG against a Claude harness agent: model
replies "Red." correctly.
OpenClaw's `acp` bridge still drops image content blocks upstream
(verified by the same probe — Kimi-k2p5 reports "I don't see an
image"). That's an upstream openclaw limitation, not a harness-side
gap; Claude/Codex agents work as advertised today.
* chore(openclaw): delete OpenClawJsonlReader and JSONL-backed routes
* chore(openclaw): remove legacy /claw/agents/:id/chat and /queue routes
* chore(agents): collapse chat panel to harness-only path
* feat(agents): route OpenClaw image turns through the gateway HTTP client
The OpenClaw `acp` bridge silently drops ACP `image` content blocks
(verified during dogfood — model says "I don't see an image"). When
the user attaches images to an OpenClaw agent, the harness now diverts
that turn to the gateway's HTTP `/v1/chat/completions` endpoint, which
accepts OpenAI-style `image_url` parts and forwards them natively to
the provider.
- New `OpenClawGatewayChatClient` translates an OpenAI streaming
response into the same `AgentStreamEvent` shape the rest of the
harness already consumes, so the chat panel renders identically
whether a turn went through ACP or the gateway carve-out.
- `AcpxRuntime.send` forks at the top: openclaw + any image
attachment + a wired gateway client → `sendOpenclawViaGateway`.
Other turns (text-only openclaw, claude, codex) take the existing
ACP path unchanged.
- The diverted path reads the prior turn history from the acpx
session record so context is preserved, builds the OpenAI
multimodal user message with text + image_url parts, and pumps
the gateway SSE back to the caller through a tee that accumulates
the assistant text. On natural completion, persists a synthetic
user+assistant message pair to the acpx session record so reload
shows the image turn in history.
- Wired `OpenClawGatewayChatClient` into `AgentHarnessService` via
`server.ts` (gateway port + token accessor, just like the existing
`openclawGateway`).
Persistence note: the acpx record requires User messages to carry an
`id` and Agent messages to carry `tool_results` — without them the
record fails to round-trip through `parseSessionRecord`. The persist
helper now sets both.
Limitation by design: image recognition only works if the OpenClaw
agent's provider supports vision (e.g. Claude-via-OpenClaw, GPT-4o).
The pipeline routes images correctly to the provider regardless;
text-only providers like Kimi-k2p5 will reply "I don't see an image"
because the model itself has no vision capability — that's a provider
config issue, not a routing bug. The unit test asserts the image_url
part is present in the OpenAI request the gateway client sends.
The wider plan (background-resilient chat, queue, replay) remains in
`plans/.../2026-04-29-1527-...-background-resilient-chat-and-image-uploads.md`
as Phases 3–12; this commit ships only Phases 1–2.
* feat(agents): validate inbound image attachments on /agents/:id/chat
The harness chat body parser was accepting any mediaType and any
dataUrl length. The composer enforces these caps client-side but the
endpoint also serves direct curl/script callers, so the server has to
defend itself.
Restores the same caps the legacy /claw/agents/:id/chat parser had
before it was deleted in the migration:
- 10 attachments per message
- 5 MB raw image bytes (≈ 6.7 MB once base64-encoded plus prefix)
- PNG / JPEG / WebP / GIF only
- Must start with `data:`
Each violation returns 400 with a specific error message instead of
silently dropping or forwarding the payload.
* refactor(eval): drop unused agents/graders, collapse registries
Sweep of dead code in the eval app: deleted gemini-computer-use and
yutori-navigator agents, fara/webvoyager/mind2web graders, eight
debug/analyze/test scripts, three stale planning docs, and the orphaned
eval-targets/coordinate-click testbed.
With two agents and three graders left, the Map-backed plugin registries
were over-engineered — collapsed both into plain switches. Removed the
now-dead GraderOptions plumbing (no remaining grader takes API keys),
dropped grader_api_key_env/grader_base_url/grader_model from the schema
and configs, and de-duped PASS_FAIL_GRADER_ORDER (was defined in three
places). Replaced the URL-parsing extractCdpPort hack in single-agent
and orchestrator-executor with workerIndex passed cleanly through
AgentContext.
README and --help text rewritten to match reality. Renamed
configs/test_*.json to test-*.json for kebab-case consistency.
Net: ~10,460 LOC removed across 60 files. Typecheck clean, all tests
pass.
* ci(eval): pull BrowserOS from rolling stable CDN URL
The pinned v0.44.0.1 .deb on GitHub releases regressed on Linux —
servers start but never become healthy. Switch to the canonical rolling
URL at cdn.browseros.com/download/BrowserOS.deb so CI tracks the same
stable channel users get from the marketing site.
When BrowserOS sets up a custom OpenAI-compat provider on the gateway,
the agent UI's "Supports Image" flag (LlmProviderConfig.supportsImages)
was being dropped on the floor. As a result the persisted model entry
had no `input` field, OpenClaw defaulted it to ['text'], and image_url
content parts were silently stripped before the model saw them.
Fix:
- Extend OpenClawSetupInput / OpenClawAgentMutationInput on the agent
side (useOpenClaw.ts) and the route body schema + SetupInput +
createAgent input on the server side with `supportsImages?: boolean`.
- AgentsPage forwards `llmOption?.supportsImages` from the selected
LlmProviderConfig in both handleSetup and handleCreate.
- provider-map.resolveSupportedOpenClawProvider emits
`input: ['text', 'image']` on the model entry when the flag is
truthy; otherwise emits the explicit `['text']` so the value is
always pinned (avoids relying on OpenClaw's implicit default).
- applyBrowserosConfig adds `tools.media.image.enabled = true` to the
bootstrap batch so the gateway's image-understanding pipeline is
always wired up — per-model `input` still gates which models see
images, this just enables the global path.
ACP image content blocks are still dropped by the OpenClaw bridge —
that's a separate bridge bug, not addressed here. This commit
restores image support for the OpenAI-compat /v1/chat/completions
path that the upcoming ACP chat panel will use as a carve-out for
image-bearing prompts.
Existing custom-provider configs are NOT auto-migrated; users will
re-acquire image support either by re-running setup or by editing
their model entries' `input` field manually. A migration pass for
legacy installs is not in scope for this commit because the
"supportsImages" intent isn't recoverable from the persisted config
alone — the source of truth is the LlmProvider record on the agent
side.
* fix(eval): exclude broken tasks + freshen expired card dates
Two AGISDK tasks are unsolvable today for non-model reasons:
- topwork-1: evals-topwork.vercel.app throws Minified React error #185
("Maximum update depth exceeded") on every form submit. The page renders
"Application error: a client-side exception has occurred" instead of saving.
Whole-task failure, every model affected.
- fly-unified-2: hardcodes Exp: 12/25 in both the goal text AND a jmespath
grader criterion. Today is 2026-04, so the eval-site rejects the card.
Freshening the goal alone leaves the grader expecting the original value;
freshening both would require monkey-patching agisdk's TaskConfig at
runtime — too fragile to maintain.
Adds these to a new EXCLUDED_TASKS set alongside the existing
EXCLUDED_WEBSITES (omnizon).
Also adds freshen_goal_dates(): for AGISDK fly-unified tasks whose goal
contains an `Exp: MM/YY` within 6 months of today (or past), rewrites it
to a far-future date (12/30). This rescues fly-unified-5 (had Exp 12/25,
no card-exp grader criterion) and protects fly-unified-4 (had Exp 06/26,
2 months from expiring) from the next eval run hitting the same trap.
Dataset goes from 47 -> 45 tasks; 2 freshened.
* feat(eval): add lenient-strings grader softening
The agisdk grader compares jmespath-extracted values via strict equality.
For tasks where the model adds harmless decoration to a free-text field
(e.g. topwork-3 expects title "Full-Stack Developer" but model produces
"Full-Stack Developer - Enterprise Microservices Platform"), this fails
every other criterion would pass.
Adds a substring fallback in the wrapper: a failed criterion is re-marked
as a softened pass when both actual_value and expected_value are strings
and the (stripped, lower-cased) expected_value is contained in the
actual_value. Numbers/bools/dates/None stay strict.
- Default-on. Set AGISDK_STRICT_STRINGS=1 to recover the strict score.
- Softened criteria are tagged with `softened: true` in per_criterion
output for transparency in run manifests.
- Aggregate `pass`/`reward` are recomputed after softening.
Expected to rescue 4 tasks in our 45-set: topwork-3, topwork-4 (both pure
title-decoration), gomail-8 (grader contradicts goal), and networkin-6
(grader hardcodes profile id).
* fix(eval): exclude 5 more tasks where pipeline (not agent) fails
Extends EXCLUDED_TASKS to 7 entries based on the K2.5 + Opus 4.6
head-to-head deep-dive on the 2026-04-28 runs. The exclusion rule:
remove a task only if it is unsolvable for any agent — either the task
data is invalid, the eval site is broken, or the grader penalizes
correct work. Tasks that fail because of our agent's tool fidelity
(drag, custom-widget fill, click on React submit, etc.) STAY in — those
are real capability gaps the team should see in the score.
New exclusions:
- fly-unified-9: goal references "Dec 18 2024 at 10:00" but the live
eval site has only 2025 inventory and no 10:00 slot. Both models
successfully booked the closest available flight and were penalized
on a grader expectation that can never be met.
- fly-unified-4: eval site stores wall-clock flight times as bare UTC
(T08:00:00.000Z) while the grader expects them shifted by 8h
(T16:00:00.000Z = 8 AM PST). Opus 4.6 completed the entire booking
correctly. Eval-site TZ-storage bug.
- gomail-8: goal says "Clear all emails from GitHub in the inbox", but
criterion 3 expects exactly 1 email updated. Both K2.5 and Opus
correctly cleared all 4 GitHub emails. Grader contradicts goal.
- networkin-6: goal says "Choose a random person you haven't connected
with"; grader hardcodes profilesDiff.updated."4".connectionGrade.
Both models randomized correctly and missed id 4. Grader contradicts
goal.
- networkin-9: eval site's searchHistoryDiff doesn't record queries
submitted via the autocomplete + Enter path. Opus 4.6 completed the
task end-to-end (Stanford alum, connection request, message); only
failed because the search-history criterion was never written
server-side. Eval-site bug.
Dataset goes from 45 -> 40 tasks. Score impact (same K2.5/Opus runs,
recomputed against the cleaned 40-task denominator):
K2.5: 21/45 (46.7%) -> 21/40 (52.5%)
Opus 4.6: 28/45 (62.2%) -> 28/40 (70.0%)
Δ: 15.6 pp -> 17.5 pp (real model gap, less pipeline noise)
The 2026-04-23 weekly run had 42% of AGISDK and 46% of Infinity tasks
fail with `AI_RetryError: ... the service is overloaded` from Fireworks
(20 concurrent kimi-k2p5 streams across both runs at 10 workers each).
Switching to OpenRouter (which fronts the same Moonshot K2.5 weights
and falls back across providers) for the three weekly configs:
- browseros-agent-weekly.json
- agisdk-real-smoke.json
- infinity-hard-50.json
Model accounts/fireworks/models/kimi-k2p5 -> moonshotai/kimi-k2.5
(same weights, same 262K context). API key env var, base URL updated.
OPENROUTER_API_KEY is already wired into .github/workflows/eval-weekly.yml
and present in repo secrets — no GH config changes needed.
Orchestrator-executor configs and test_webvoyager left on Fireworks
intentionally; can switch later if needed.
* feat: deterministic eval graders (AGI SDK + WebArena-Infinity) (#664)
* feat: add deterministic eval graders (AGI SDK + WebArena-Infinity)
Two new benchmark integrations with programmatic grading — no LLM judge.
AGI SDK / REAL Bench (52 tasks):
- 11 React/Next.js clones of consumer apps (DoorDash, Amazon, Gmail, etc.)
- Grader navigates browser to /finish, extracts state diff from <pre> tag
- Python verifier checks exact values via jmespath queries
WebArena-Infinity (50 hard tasks):
- 13 LLM-generated SaaS clones (Gmail, GitLab, Linear, Figma, etc.)
- InfinityAppManager starts fresh app server per task per worker
- Python verifier calls /api/state and asserts on JSON state
Infrastructure:
- GraderInput extended with mcpUrl + infinityAppUrl for parallel workers
- Each worker gets isolated ports (no cross-worker state contamination)
- CI workflow: pip install agisdk, clone webarena-infinity repo
* chore: switch eval configs back to kimi-k2p5
* fix: register deterministic graders in pass rate calculation
Add agisdk_state_diff and infinity_state to PASS_FAIL_GRADER_ORDER
in both runner types and weekly report script, so scores show correctly
in the dashboard.
* chore: temp switch to opus 4.6 for eval run
* chore: restore kimi-k2p5 as default eval config
* ci: add timeout and continue-on-error for trend report step
* fix(eval): drop omnizon from AGISDK dataset (DMCA takedown)
evals-omnizon.vercel.app returns HTTP 451 ("This content has been
blocked for legal reasons / DMCA_TAKEDOWN"). All 5 omnizon-* tasks
fail grading with "Failed to fetch /finish endpoint: JSON Parse error".
Adds an EXCLUDED_WEBSITES set to the dataset builder and regenerates
agisdk-real.jsonl (52 → 47 tasks).
* fix(eval): correct Infinity port-assignment bugs
Two related bugs in the Infinity eval runner that cause silent port
collisions / fallbacks under parallel execution:
1. build-infinity-dataset.py emitted "app_port" but task-executor and
the committed JSONL both read "app_base_port". Re-running the build
script would silently make every task fall back to the 8000 default,
ignoring per-app port assignments. Renamed the key to match.
2. task-executor derived workerIndex as `base_server_port - 9110`, but
parallel-executor doesn't override base_server_port per worker —
only server_url. Every worker computed workerIndex = 0, causing all
parallel workers to spawn Infinity app servers on the same port.
Threading workerIndex explicitly through TaskExecutor instead.
Also drops an unused app_name parameter from load_tasks().
* feat(agent): attach images and text files to chat messages
Adds end-to-end support for image and text file attachments in the chat
composer, with the staged files round-tripping through the OpenClaw
gateway as OpenAI-compatible content blocks and persisting in the JSONL
so they show up in the historical view.
Server
- HTTP client: new OpenClawChatContentPart union and a buildUserContent
helper that emits multimodal content arrays when messageParts is
supplied, falls back to the legacy string content otherwise.
- Service: chatStream takes an optional messageParts array and forwards
it; BrowserOSChatHistoryItem gains an attachments field.
- JSONL reader: PiContentBlock learns the OpenAI image_url and Anthropic
image source/data shapes; user messages now emit user.attachment
events that the history mapper accumulates onto the next user item.
- Route: validates an inbound attachments[] (kind/mime/size/count),
inlines text-shaped files as <attachment> blocks in the message body,
attaches images via image_url parts. Replaces the immediate 409 on
active monitoring session with a 30s waitForSessionFree(agentId) wait
(registry now exposes onSessionEnd) so cron/hook contention does not
reject a user-chat send outright. Returns 503 if the wait times out.
Client
- New lib/attachments.ts: validateAttachment / compressImageIfNeeded
(canvas downscale to 2048px long edge, JPEG 0.85 re-encode for >1.5
MB inputs) / stageAttachment / stageAttachments that produces the
staged-attachment shape the composer renders and the payload the
server accepts.
- ConversationInput: drag-and-drop, paperclip button, clipboard paste,
staged attachment chip strip with thumbnails for images and a
paperclip+name chip for text files. Send button enables on either
text or attachments. Drop-zone overlay during drag.
- chatWithAgent forwards attachments[]; useAgentConversation.send
accepts a SendInput shape and renders user attachments on the
optimistic streaming turn via MessageAttachments / MessageAttachment.
- ClawChatMessage groups historical attachment parts into a single
MessageAttachments strip, ordered before reasoning/tools/text.
- claw-chat-types adds an attachment ClawChatMessagePart variant; the
history mapper emits attachment parts first and skips the text part
when the user only sent media.
- AgentCommandHome forwards the new SendInput shape — home composer
drops attachments at the boundary in v1 (the conversation page is
where staging is most useful; carrying bytes through the URL bar
is not sensible).
Limits: 10 attachments per message, 5 MB per image (post compression),
1 MB per text file, mime types png/jpeg/webp/gif and text/* +
application/json. PDFs and other binaries are deferred to v2.
* feat(agent): outbound message queue for chats while agent is mid-turn
Lets users keep typing and submitting messages while the agent is still
streaming a previous turn. Each press is appended to a single-flight
queue and dispatched as soon as `streaming` flips false; the queued
state renders as a strip above the composer so the user sees what's
pending vs. what's already sending.
- New `useOutboundQueue` hook owns the queue, the worker effect, and
cancel/retry actions. Single-flight by design — a re-entrancy ref
guard prevents two simultaneous dispatches when `streaming` flickers.
- Composer (`ConversationInput`) accepts optional `outboundQueue`,
`onCancelQueued`, `onRetryQueued` props. When the queue is provided
the send-button gate stops blocking on `streaming`; the spinner stays
as the visual cue that the agent is still busy. Legacy direct-send
callers keep the old streaming-blocks-send semantic.
- Renders an OutboundQueueStrip above the staged-attachment strip with
per-item status (queued / sending / failed), a cancel button on
queued items, and retry + discard on failed items.
- AgentCommandConversation wires `onSend` to `queue.enqueue` and routes
the home composer's `?q=` initial-message handoff through the queue
too, so it inherits the same single-flight serialization.
The server-side `waitForSessionFree` (added with attachments) and this
client-side queue together cover both contention sources: cron / hook
turns and back-to-back user sends. Persistence across reloads is
intentionally out of scope for v1 — losing the queue on extension
reload is documented as a known limitation.
* feat(server): server-side outbound message queue
Replaces the client-only React-state queue from 123ef21d with a
proper server-owned queue. Closing the tab is now safe — the server
holds queued messages and dispatches them through the existing
chatStream path the moment the agent's ClawSession status flips to
idle.
Server
- New OutboundQueueService (apps/server/src/api/services/queue) — per
agent FIFO, in-memory. Subscribes to ClawSession.onStateChange
through OpenClawService.onAgentStatusChange, and dispatches via
OpenClawService.chatStream so attachments / history / monitoring
all behave identically to the existing /chat route. The worker
drains the SSE response server-side so the gateway run finalizes
cleanly even with no client connected.
- Four new routes under /claw/agents/:id/queue:
POST /queue enqueue
DELETE /queue/:itemId cancel a queued item
POST /queue/:itemId/retry re-queue a failed item
GET /queue/stream SSE feed of the per-agent queue state.
Validation reuses validateChatAttachments and
buildMessagePartsFromAttachments from the existing chat route.
- Singleton wired in apps/server/src/main.ts; shutdown on SIGTERM.
- New OpenClawService.getAgentState getter for the queue worker's
pre-dispatch sanity check.
Client
- useOutboundQueue rewritten as an SSE-backed projection over server
state. Public API unchanged so the composer still works.
- enqueue POSTs to /queue and shows an optimistic local entry until
the server's SSE snapshot reflects it; local-only entries get a
`local-` id prefix so cancel can short-circuit them without
hitting the server.
- AgentCommandConversation watches the queue for sending items
dropping out and refetches history so the new assistant turn shows
up in the conversation view (the server worker streams the
dispatched turn into OpenClaw without exposing per-turn SSE to
the client).
Out of scope (documented in the plan as v2 follow-ups): disk
persistence (server restart loses queue), per-turn live streaming
of queued sends in the conversation view, and switching the
underlying dispatch from /v1/chat/completions to the chat.send RPC
(which would also fix the multimodal attachment routing problem).
* fix(server): outbound queue must reuse existing session, not spawn UUIDs
The queue worker was generating a fresh randomUUID() as the sessionKey
when the queued item didn't carry one — and the client wasn't sending
one. Result: every queued message kicked off a brand-new OpenClaw
session, orphaning the user's active conversation behind the new
"most recent" entry in sessions.json. The history endpoint then
resolved to the orphan and the chat appeared to disappear.
Fix is layered:
- Client (useOutboundQueue): forward the current resolvedSessionKey
in the POST /queue body so every queued message targets the same
conversation the user is viewing. AgentCommandConversation passes
resolvedSessionKey into the hook.
- Server (OutboundQueueService): the worker now resolves to the
agent's existing user-chat session when no sessionKey is provided
on the queued item, via OpenClawService.resolveAgentSession. UUID
fallback is now reserved for the first-ever message on a brand
new agent — same semantic the existing /chat route has implicitly
through the catalog of historical sessions.
No JSONL data was lost by the original bug (the prior conversations
are intact on disk); the orphan sessions just shadowed the original
in sessions.json.
* fix(agent,server): address PR review feedback for chat queue
- Tighten image data URL cap to base64-aware ~6.7 MB (was ~7.5 MB
through `MAX_IMAGE_BYTES * 2`).
- Forward chat history from useOutboundQueue.enqueue so queued sends
preserve conversation context like direct sends do.
- Match local attachment previews to server snapshots by id (not by
message text), and prune the preview map as items drain.
- Pass an AbortSignal into chatStream so a queue shutdown cancels the
initial OpenClaw handshake, not just the SSE drain loop.
- Track previously gitignored apps/agent/lib/attachments.ts (was caught
by global lib/ ignore) so CI typecheck can resolve @/lib/attachments.
- Update server-api openclaw route tests to the new chatStream signature
and the waitForSessionFree-based busy-agent path.
* fix(agent): dedupe optimistic queue entries for text-only sends
The localId↔serverId map was only populated when the message had
attachments, so plain-text sends left the optimistic local entry in
place after the server snapshot arrived — the user saw the same
message rendered twice in the queue strip.
* fix(agent): prune optimistic queue entry on POST ack, not just SSE
The server broadcasts the new queue snapshot before its POST response
returns, so the SSE handler often runs first — at that point the
localId↔serverId map has no entry for the new server id yet, so the
SSE-based dedupe path can't drop the optimistic local entry. Pruning
on POST success closes the race deterministically.
* fix(agent): hand off optimistic queue entry without a render gap
Pruning the local entry on POST success only worked when the SSE
snapshot had already overwritten it; if the POST response landed
first, the optimistic row disappeared for a frame before the SSE
snapshot brought back the server-keyed row, producing a visible
flicker. Gate the POST-side prune on the SSE snapshot already
carrying the server id, and rely on the SSE-based dedupe (now
guaranteed to find the localId↔serverId link in the map) to clean
up when SSE arrives later.
* fix(agent,server): client-generated queue id eliminates render flicker
The server used to assign its own UUID when an item was enqueued, so
the optimistic client row carried a `local-` id while the SSE snapshot
carried a server UUID — the client had to wait for the POST response
to learn the mapping before it could dedupe, and during that window
both rows rendered.
Now the browser generates the id, sends it in the POST body, and the
server uses it verbatim (falling back to a fresh UUID only if the id
collides with an existing item). The client collapses to a single
id-keyed list, so the optimistic row and the SSE row reconcile on the
same key from the very first render.
* feat: pass per-turn cost and token data through chat history items
- Add costUsd, tokensIn, tokensOut to BrowserOSChatHistoryItem (server)
- Pass through from JSONL agent.message events in jsonlEventsToHistoryItems()
- Add same fields to client-side BrowserOSChatHistoryItem and ClawChatMessage
- Map cost/token data in mapHistoryItemToClawMessage()
Data flows: JSONL message.usage → server history item → API response →
client ClawChatMessage. Available for rendering in ClawChatMessage
component (message toolbar, cost badges).
* feat: add message toolbar with copy button and per-turn cost display
Add MessageToolbar to historical assistant messages in ClawChatMessage:
- Copy button copies message text to clipboard via MessageAction
- Per-turn token count (22.7K → 238) and cost ($0.003) shown as muted
tabular-nums text on the right side of the toolbar
- Toolbar appears on hover (opacity transition via group-hover)
- Only shown when the message has text content
- Cost/token display only shown when data is available from JSONL
* fix: toolbar only on assistant messages, always visible, cost only
- Only render toolbar on assistant messages (not user messages)
- Remove hover-only opacity — toolbar is always visible
- Remove token counts (22.7K → 238 is meaningless to users)
- Show only cost as a budget signal ($0.003)
* feat: group all tool activity into single Task collapsible per turn
Replace flat tool rows with a single ai-elements Task collapsible per
assistant turn that lists every tool/MCP call in sequence.
Live streaming (ConversationMessage):
- Aggregate all tool-batch parts into one Task
- Title: "Working… (N actions)" while running, "Agent activity (N actions)" when done
- Default open while turn is in progress
- Wrench icon in trigger
Historical (ClawChatMessage):
- Group all tool-call parts into one Task
- Title includes failed count if any tools errored
- Default collapsed — expandable on click
- Tool name + status icon + error text per row
Both views show one clean collapsible per turn instead of N individual
tool cards. Collapsed reads "5 actions"; expanded shows the timeline.
* feat: include tool calls in chat history responses
Server: jsonlEventsToHistoryItems() now walks ALL events (not just
messages) and pairs agent.tool_use with agent.tool_result by toolCallId.
The resulting tool call list is attached to the next assistant text
message as toolCalls[]. Each entry includes status, input arguments,
output text, error string, and duration computed from event timestamps.
Client:
- BrowserOSChatHistoryItem gets optional toolCalls field
- Tool-call message part type gets durationMs field
- mapHistoryItemToClawMessage() emits tool-call parts BEFORE the text
part (the order the agent produced them)
- ClawChatMessage Task view now shows tool duration in seconds
Result: historical messages now display the full tool activity
timeline grouped into the single Task collapsible per turn (designed
in step 3), instead of showing only the final text response.
* feat: render activity rows as human verbs sourced from tool registry
Tool calls in the chat activity view now read as sentences:
"Opened tab · news.ycombinator.com" instead of "browseros__new_page".
Server (tool-label-registry.ts):
- Curated verb override map for ~70 BrowserOS first-party tools
- Per-tool subject extractors that pull the meaningful argument from
input (URL → host, query → quoted, element → ID, etc.)
- Generic fallback humanizes snake_case for any unmapped tool
- Strips MCP namespace prefixes (browseros__, mcp_)
Server (openclaw-service.ts):
- jsonlEventsToHistoryItems calls buildToolLabel for each tool_use,
attaches label and subject to the BrowserOSChatHistoryToolCall
Client:
- Mirrored label module at lib/tool-labels.ts
- useAgentConversation tool-start handler computes label/subject
from the SSE tool args
- ClawChatMessage and ConversationMessage render label · subject
with foreground/muted styling, no font-mono
- ToolEntry, BrowserOSChatHistoryToolCall, and tool-call message
part types all carry label and optional subject
* fix: drop meaningless tab N subject from page-read tool rows
Page IDs are internal numbers, not URLs. 'Took screenshot · tab 4'
tells the user nothing. Removed subject extractors for take_snapshot,
take_enhanced_snapshot, get_page_content, get_page_links, get_dom,
and take_screenshot. The verb alone is the right signal.
* fix: gate initial loading on historyQuery.isFetched not isLoading
The session and history queries are sequential: the history query is
disabled until session resolves. After session resolves, there's a render
frame where historyQuery.isLoading is still false (the query hasn't
been kicked off yet). isInitialLoading flipped to false during that
window, exposing an empty chat shell with just Task collapsibles and
copy buttons before the messages filled in.
Switching the guard to isFetched closes that window — the loading state
stays true until the first history fetch actually completes.
* fix: render historical messages immediately instead of through Streamdown's idle-callback debounce
Streamdown defaults to mode="streaming" which uses requestIdleCallback (300ms
debounce, 500ms idle timeout) and lazy/Suspense to optimize for token-by-token
live streams. For finalized historical messages this caused tool collapsibles
and copy buttons to paint while text bodies stayed blank for ~300-500ms after
load. Pass mode="static" + parseIncompleteMarkdown=false on the historical
MessageResponse so completed text paints in the same frame as the surrounding
chrome. Live streaming turns still use the default streaming mode.
Also collapse the redundant /agents/:id/session round-trip into the existing
/history endpoint (server already resolves the most recent user-chat session
when sessionKey is omitted) and tighten the initial-loading gate to stay true
across the render frame where the query is enabled but hasn't started fetching.
* feat: surface thinking duration on historical reasoning collapsibles
Server accumulates agent.thinking events per turn from JSONL and attaches a
single reasoning block (joined text + durationMs from first thinking event
to the closing agent.message) on each assistant history item. Reasoning
buffer resets on user.message alongside the tool-call buffer.
Client mirrors the type, emits the reasoning part before tool calls in
mapHistoryItemToClawMessage (chronological: think → act → answer), and
passes duration in seconds to <Reasoning> so the trigger reads "Thought
for N seconds" instead of just "Thinking" on collapsed historical turns.
* fix: read thinking blocks from the correct JSONL field name
OpenClaw stores reasoning blocks as {type:'thinking', thinking:'...'} but
the JSONL parser was reading block.text, so every thinking event was
silently dropped before it ever reached jsonlEventsToHistoryItems. As a
result the reasoning field on history items was always empty even though
the new accumulator was wired up correctly.
Also guard the client mapping: when durationMs is 0 (think + answer
emitted in the same JSONL line, no real elapsed wall-clock) pass
undefined to <Reasoning> so it renders the static "Thinking" trigger
instead of the streaming shimmer / "Thought for 0 seconds".
* fix: reset reasoning buffer on discarded turns and drop dead session hook
Two cleanups from PR review:
1. jsonlEventsToHistoryItems: when an agent.message is discarded (the
"[Chat messages since your last reply" wrapper without a current-message
marker) the tool buffers were already reset but the reasoning buffer
was not. Accumulated thinking from the discarded turn would bleed onto
the next assistant message. Reset pendingReasoningTexts and
pendingReasoningFirstAt alongside the tool buffers.
2. useClawAgentSession, the AgentSessionResponse type, and the unused
session entry in CLAW_CHAT_QUERY_KEYS became dead code after the
session round-trip was folded into the history endpoint. Removed.
* feat: draft agent chat ui exploration
* feat: refine agent chat ui draft
* feat: remove outer frame from agent chat workspace
* fix: offset agent chat for app sidebar
* fix: simplify agent conversation shell
* fix: remove redundant chat header actions
* fix: unify agent conversation headers
* fix: tighten agent chat spacing
* fix: bound agent chat composer height
* fix: remove agent chat page inset
* fix: align agent header height with sidepanel
* fix: center agent composer resting state
* fix: anchor multiline composer controls
* fix: remove focus grid from agent home
* fix: remove redundant agent home header
* fix: constrain home agent composer
* fix: match home composer default posture
* feat: add openclaw chat history APIs
* feat: add claw chat history hydration
* fix: stabilize claw chat viewport layout
* fix: use conversation scroll base for claw chat
* refactor: split claw chat controller responsibilities
* fix: keep active agent turns in memory
* fix: normalize openclaw chat sessions
* refactor: use HTTP client for agent history instead of CLI client
Replace the CLI-based getChatHistory() call in getAgentHistoryPage()
with the HTTP client's getSessionHistory() from PR #795. This uses
the direct HTTP transport to OpenClaw's /sessions/<key>/history
endpoint instead of shelling out through the CLI.
- Add filterHttpSessionHistoryMessages() for flat-string content format
- Add normalizeHttpHistoryMessages() for OpenClawSessionHistoryMessage shape
- Update getAgentHistoryPage() to call getSessionHistory() via httpClient
- Remove unused getChatHistory(), filterOpenClawSystemMessages(),
normalizeChatHistoryMessages(), and getTextContent()
- Update test mocks from cliClient.getChatHistory to httpClient.getSessionHistory
- Update MutableOpenClawService type: chatClient -> httpClient
* fix: fetch all session messages by iterating OpenClaw pagination
OpenClaw's HTTP history endpoint returns a limited page by default.
When called without a limit, only the first ~27 messages were returned,
causing all newer conversation messages to be silently dropped.
Add fetchAllSessionMessages() that iterates through OpenClaw's cursor-
based pagination (200 messages per page) until hasMore is false, then
feeds the complete message list into the existing BrowserOS normalization
and in-memory pagination layer.
* refactor: migrate chat history from HTTP gateway to direct JSONL file reads
Replace the HTTP-based chat history pipeline (BrowserOS server → OpenClaw
gateway /sessions/:key/history pagination loop) with direct JSONL file reads
from the host filesystem via Lima's virtiofs mount.
- Add OpenClawJsonlReader that reads session JSONL files directly from
~/.browseros/vm/openclaw/.openclaw/agents/<id>/sessions/
- Replace fetchAllSessionMessages() HTTP pagination with single file read
- Replace CLI-based listSessions() with sessions.json file reads
- Make listSessions, resolveAgentSession, getAgentHistoryPage synchronous
- Remove unused toBrowserOSSession, filterHttpSessionHistoryMessages,
normalizeHttpHistoryMessages helpers
- Update route handlers to drop unnecessary async/await
- Update tests to use temp JSONL files instead of mocked HTTP/CLI clients
* fix: restore async route handlers for test compatibility with mocked service
* fix: address review feedback — path traversal guard, lazy reader, exists flag
- Add safePath() to OpenClawJsonlReader that validates resolved paths stay
within stateRoot, preventing path traversal via crafted agentId values
- Use lazy initialization for jsonlReader (nulled on rebuildRuntimeClients)
instead of creating a new instance per property access
- Return exists: false from resolveSpecificAgentSession when no session
matches instead of fabricating a ghost session with sessionId: ''
* feat: add dashboard API and enrich home page agent cards
Server:
- Add summarizeToolActivity() that converts tool events into natural
language descriptions ("Browsed 3 pages, took 2 screenshots")
- Add getDashboard() to OpenClawService that aggregates per-agent stats
from JSONL: latest message, activity summary, cost, session count
- Add GET /claw/dashboard endpoint
Client:
- Add useAgentDashboard() React Query hook (10s refetch, 5s stale)
- Rewrite useAgentCardData from async IndexedDB hook to pure
buildAgentCardData() function merging agent entries with dashboard data
- Add activity summary and cost to AgentCardExpanded footer
- Add activitySummary and costUsd fields to AgentCardData type
- Remove IndexedDB dependency from the home page
* feat: add OpenClawObserver for real-time per-agent status via gateway WS
- Add OpenClawObserver that connects to the OpenClaw gateway WebSocket
control plane and subscribes to chat broadcast events
- Track per-agent status in real time: working (streaming), idle (turn
complete), error (run failed), with current tool name
- Auto-connect when gateway control plane becomes available, auto-
reconnect on disconnect with 5s backoff
- Disconnect observer on stop/shutdown
- Wire live status + currentTool into getDashboard() response
- Update client: AgentOverview includes status + currentTool, card shows
spinning loader + tool name when agent is working
- Status resolution: per-agent WS status takes precedence over gateway-
level status for working/error states
* feat: add SSE dashboard stream for real-time agent status on home page
Server:
- Add GET /claw/dashboard/stream SSE endpoint that sends an initial
snapshot then pushes per-agent status events as they arrive from
the OpenClaw observer
- Add onAgentStatusChange() to OpenClawService exposing the observer's
listener for the route layer
- Heartbeat every 15s to keep connections alive
Client:
- useAgentDashboard() now subscribes to EventSource at /claw/dashboard/stream
- SSE snapshot event hydrates the React Query cache immediately
- SSE status events patch individual agent status + currentTool in the
cache without refetching — agent cards update instantly
- Polling fallback raised to 30s since SSE handles real-time
* fix: observer WS handshake — wait for challenge before sending connect
The OpenClaw gateway sends a connect.challenge event before accepting
the connect request. The observer was sending the connect request on
ws.open which raced with the challenge. Now waits for the challenge
event before sending the handshake.
Also add dangerouslyDisableDeviceAuth to the gateway setup config
batch so the observer can connect without device identity on new
installs.
* fix: JSONL reader falls back to most recent file when sessions.json is stale
OpenClaw's sessions.json can record a Pi session ID that doesn't match
the actual JSONL filename on disk. This happens after context compaction
or session restart — the JSONL file gets a new UUID but sessions.json
keeps the old one.
Previously this caused history to silently disappear (the reader tried
to open a non-existent file and returned empty). Now resolveJsonlPath()
checks if the mapped file exists and, when it doesn't, scans the
sessions directory for the most recently modified .jsonl file as a
fallback.
* feat: add ClawSession state machine for reliable per-agent status
The OpenClawObserver only knows about status changes it witnesses via
WS events. If an agent was already running when the observer connected,
or after a reconnect, statuses were stuck at "unknown".
ClawSession is an in-memory state machine that solves this:
1. Seeds from JSONL on first control plane call — reads the latest
events for each agent and infers working/idle. A session is "working"
if the last event is a user.message with no subsequent agent.message,
or an agent.tool_use with no matching agent.tool_result.
2. Receives live transitions from the WS observer — the observer now
delegates all state management to ClawSession instead of maintaining
its own status map.
3. Applies a 5-minute staleness threshold — if the last JSONL event
is older than 5 minutes, assume idle (handles agent crashes).
Consumers (SSE stream, dashboard endpoint) read from ClawSession and
get correct state from the first call — no "unknown" period.
* fix: remove staleTime so dashboard refetches on every mount
* fix: reset stale working status on WS disconnect, eliminate redundant JSONL reads
- Observer resets all "working" agents to "unknown" when the WS closes,
preventing agents from appearing stuck as Working indefinitely after
a gateway restart. ClawSession re-seeds correct state on reconnect.
- getDashboard() now derives latestAgentMessage and cost from the
already-loaded events array for the latest session instead of calling
latestAgentMessage() and getSessionStats() which each re-read the
same JSONL file. Reduces file reads from 3x to 1x per agent.
* feat: add runtime vm cache sync
* feat: configure runtime vm cache sync
* feat: prefetch vm cache on startup
* feat: await vm cache before vm startup
* fix: recheck vm cache after prefetch wait
* fix: address vm cache review feedback
* build(server): require VM cache manifest env
* feat(openclaw): add Claude CLI as a CLI-backed provider
Extensible registry of "OpenClaw CLI-backed providers" — tools that run
as subprocesses inside the gateway container rather than via an API key.
Claude CLI is the first entry; Gemini CLI / Codex CLI / etc. are
one-line additions in the same shape.
Backend:
- New openclaw-cli-providers/ module: types, registry, claude-cli entry.
- OpenClawService: generic ensureAllCliProvidersInstalled() (runs on
setup/start/restart/auto-start) and getCliProviderAuthStatus(provider).
- Provider dispatch: resolveProviderForAgent() short-circuits CLI
providers (no env var, no custom-provider merge) before falling
through to the API-key resolver. No changes to openclaw-provider-map.
- Container runtime: PATH + NPM_CONFIG_PREFIX env so tools installed
under /home/node/.npm-global/bin (mounted) are discoverable by
OpenClaw's child-process spawns and persist across restarts.
- New route: GET /claw/providers/:providerId/auth-status returns
installed / loggedIn / account / plan / error.
Frontend:
- New openclaw-cli-providers.tsx: mirrors backend registry (id, models,
authLoginCommand), useOpenClawCliProviderAuthStatus hook (2-s poll
while enabled), OpenClawCliProviderStatusPanel component.
- AgentsPage: synthesized CLI-provider options merged into the Create
Agent dropdown, inline status panel, auth modal mounting the existing
AgentTerminal with provider.authLoginCommand, auto-close on loggedIn.
- AgentTerminal: new optional initialCommand + onSessionExit props
(ref-based so parent re-renders don't rebuild the PTY).
No global ProviderType changes. No custom container image — runtime
install into the mounted home dir persists across restarts.
* fix(openclaw): address review comments for claude-cli provider
- Drop redundant providerId field from OpenClawCliProviderOption (type
already carries the same value).
- Reuse SetupInput type in resolveProviderForAgent instead of inlining.
- Split ensureCliProviderInstalled into probe + install so logs
distinguish "already present" from "freshly installed".
- Narrow union in handleCreate via explicit LlmProviderConfig cast; the
'in'-based narrowing stopped working once the two option shapes
overlapped on required fields.
* fix: green up server-api tests after claude-cli additions
- Update container-runtime.test.ts snapshot to include the new
PATH + NPM_CONFIG_PREFIX env args.
- Add a defensive guard in ensureAllCliProvidersInstalled so test
mocks that swap runtime for a partial stub without execInContainer
simply skip the install step; production runtime always provides it.
No production behavior change.
* fix(openclaw): use claude /login for auth flow and render terminal full-page
`claude auth login` in 2.1.x silently discards stdin, so the pasted OAuth
code never reaches claude. Switch to the REPL's `/login` slash command,
which does accept a pasted token. Also render the auth terminal
full-page instead of inside a Radix Dialog — the focus trap was hiding
keyboard events from xterm's helper textarea. Finally, guard the async
WebSocket in AgentTerminal against React 18 StrictMode's double-invoke
so the first mount's orphaned WS doesn't leak a second live session.
- terminal-session: pass PATH on podman exec so user-installed CLIs
resolve in interactive sessions without manual re-exports.
- claude-cli parseAuthStatus: treat exit-code-1 as a valid "not logged
in" JSON payload instead of a hard error.
* fix(openclaw): drop unnecessary PATH override on podman exec
`podman exec` inherits the container's run-time env (PATH includes
/home/node/.npm-global/bin via `podman run -e PATH=…`), so the extra
`-e PATH` on the exec call was redundant. Reverts the export of
GATEWAY_PATH and the exec flag added in the previous commit.
* feat(openclaw): show CLI-backed providers in Set Up dialog
The Set Up OpenClaw dialog previously listed only API-key LLM
providers. Add the CLI-backed ones (currently just Claude CLI) so
users can bootstrap the gateway with a Claude.ai-subscription-backed
agent without round-tripping through the Create Agent flow first.
When the user picks a CLI provider at setup, skip the apiKey/baseUrl
fields and open the auth terminal immediately after the gateway comes
up, so /login runs in one click.
* fix(openclaw): robust claude auth-status parsing and cleaner CLI UX
parseClaudeAuthStatus was doing JSON.parse on the entire stdout, which
fails when Lima/nerdctl appends a stderr line like `level=fatal
msg="exec failed with exit code 1"` whenever the inner command exits
non-zero (claude auth status exits 1 when not logged in). The panel
then surfaced the raw output as an error. Switch to a line-by-line
scan that picks the first parseable JSON object — handles trailing
noise and nested JSON fields cleanly.
UI polish around the Setup dialog:
- Hide the "uses your API key" hint when the selected provider is
CLI-backed — it is inaccurate and confusing.
- When a CLI provider is picked in Setup, show a short helper line
instead of the status panel (the /auth-status poll would be
pre-gateway and would always fail). Set Up & Start boots the
gateway and then auto-opens the auth terminal in one click.
- Track the active CLI provider across both Setup and Create dialogs
so the auth terminal opens for the right provider regardless of
which dialog triggered it.
* feat(terminal): make selection + copy work under TUI mouse tracking
Interactive TUIs like `claude /login` enable xterm mouse-tracking,
which forwards every click to the app and disables click-drag text
selection. Our terminal had no escape hatch, so users couldn't grab
the OAuth URL.
Three general-purpose fixes (none CLI-specific):
- macOptionClickForcesSelection: Opt+drag always selects on Mac,
regardless of what the running program does with mouse events.
- Cmd/Ctrl+A and Cmd/Ctrl+C custom key handler: select-all and copy
to clipboard via navigator.clipboard, even when the TUI would
swallow the keys.
- Copy button in the terminal header: writes the current selection
to the clipboard, or the full visible viewport if nothing is
selected. One-click escape hatch that works in every state.
Applies to any interactive CLI in our terminal (sudo, vim, claude,
gh auth, etc.), not just the claude login flow.
* fix(terminal): make xterm selection actually visible
Selection was registering internally (xterm-selection layer had
correct width/height rects), but the rectangles rendered in
rgb(252,252,251) — practically invisible against the white
background — so users concluded selection was broken.
Root cause: the theme derived selectionBackground from
`withAlpha(resolveCssColor('--accent-orange'), 0.2)`. When the CSS
var failed to resolve it fell back near-white, and the alpha
compositing against the page background made the result
indistinguishable from the background.
Switch to solid terminal-standard selection colors (VSCode-like
light-blue / dark-indigo). Also set selectionInactiveBackground so
the selection persists when focus moves away (useful while copying).
Drop the now-unused withAlpha helper.
* fix(openclaw): handle pretty-printed JSON in claude auth status parser
claude auth status --json emits multi-line pretty-printed JSON. The previous line-by-line parser never matched, so the UI treated every response as an error and surfaced the raw JSON — even when loggedIn was true. Replace with a brace-matching JSON extractor (string- and escape-aware) that tolerates multi-line JSON, leading banners, trailing lima/nerdctl stderr, and nested objects.
* refactor(openclaw): separate exec streams, argv installs, cleaner async cleanup
Audit-driven cleanup. Net -42 lines, four concrete issues fixed:
1. ContainerRuntime.runInContainer() exposes {exitCode, stdout, stderr}
from the nerdctl exec (ContainerCli.runCommand already tracked them
separately; we were just throwing stderr into the same string). The
40-line hand-rolled brace-matching JSON extractor in claude-cli.ts
existed only because the prior merged-stream output had lima/
nerdctl's 'level=fatal' line fused with claude's JSON. parser is
now JSON.parse(stdout.trim()).
2. Replace shell-based 'sh -lc "npm install -g ${pkg}@latest"' with
argv: execInContainer(['npm','install','-g','${pkg}@${version}']).
Registry values no longer flow through a shell (removes injection
surface from future CLI providers). Pinned version instead of
@latest (adds npmPackageVersion to the provider type).
3. AgentTerminal: replace the 'let cancelled' + out-of-effect
disposeSocketBindings pattern with an AbortController scoped to
the effect and a cleanups[] array. Matches the canonical React 18
async-effect pattern — no partial-cleanup race if StrictMode
unmounts between the async await and the resolve.
4. AgentTerminal: drop the full-buffer fallback in the Copy button
(was copying all 8000 scrollback lines when nothing selected —
surprising). Button now only copies the actual xterm selection,
or no-ops silently. Users who want everything can Cmd+A first.
* feat: draft agent chat ui exploration
* feat: refine agent chat ui draft
* feat: remove outer frame from agent chat workspace
* fix: offset agent chat for app sidebar
* fix: simplify agent conversation shell
* fix: remove redundant chat header actions
* fix: unify agent conversation headers
* fix: tighten agent chat spacing
* fix: bound agent chat composer height
* fix: remove agent chat page inset
* fix: align agent header height with sidepanel
* fix: center agent composer resting state
* fix: anchor multiline composer controls
* fix: remove focus grid from agent home
* fix: remove redundant agent home header
* fix: constrain home agent composer
* fix: match home composer default posture
* feat: add openclaw chat history APIs
* feat: add claw chat history hydration
* fix: stabilize claw chat viewport layout
* fix: use conversation scroll base for claw chat
* refactor: split claw chat controller responsibilities
* fix: keep active agent turns in memory
* fix: normalize openclaw chat sessions
* refactor: use HTTP client for agent history instead of CLI client
Replace the CLI-based getChatHistory() call in getAgentHistoryPage()
with the HTTP client's getSessionHistory() from PR #795. This uses
the direct HTTP transport to OpenClaw's /sessions/<key>/history
endpoint instead of shelling out through the CLI.
- Add filterHttpSessionHistoryMessages() for flat-string content format
- Add normalizeHttpHistoryMessages() for OpenClawSessionHistoryMessage shape
- Update getAgentHistoryPage() to call getSessionHistory() via httpClient
- Remove unused getChatHistory(), filterOpenClawSystemMessages(),
normalizeChatHistoryMessages(), and getTextContent()
- Update test mocks from cliClient.getChatHistory to httpClient.getSessionHistory
- Update MutableOpenClawService type: chatClient -> httpClient
* fix: fetch all session messages by iterating OpenClaw pagination
OpenClaw's HTTP history endpoint returns a limited page by default.
When called without a limit, only the first ~27 messages were returned,
causing all newer conversation messages to be silently dropped.
Add fetchAllSessionMessages() that iterates through OpenClaw's cursor-
based pagination (200 messages per page) until hasMore is false, then
feeds the complete message list into the existing BrowserOS normalization
and in-memory pagination layer.
* refactor: migrate chat history from HTTP gateway to direct JSONL file reads
Replace the HTTP-based chat history pipeline (BrowserOS server → OpenClaw
gateway /sessions/:key/history pagination loop) with direct JSONL file reads
from the host filesystem via Lima's virtiofs mount.
- Add OpenClawJsonlReader that reads session JSONL files directly from
~/.browseros/vm/openclaw/.openclaw/agents/<id>/sessions/
- Replace fetchAllSessionMessages() HTTP pagination with single file read
- Replace CLI-based listSessions() with sessions.json file reads
- Make listSessions, resolveAgentSession, getAgentHistoryPage synchronous
- Remove unused toBrowserOSSession, filterHttpSessionHistoryMessages,
normalizeHttpHistoryMessages helpers
- Update route handlers to drop unnecessary async/await
- Update tests to use temp JSONL files instead of mocked HTTP/CLI clients
* fix: restore async route handlers for test compatibility with mocked service
* fix: address review feedback — path traversal guard, lazy reader, exists flag
- Add safePath() to OpenClawJsonlReader that validates resolved paths stay
within stateRoot, preventing path traversal via crafted agentId values
- Use lazy initialization for jsonlReader (nulled on rebuildRuntimeClients)
instead of creating a new instance per property access
- Return exists: false from resolveSpecificAgentSession when no session
matches instead of fabricating a ghost session with sessionId: ''
* feat(build-tools): seed dev agent tarballs
* fix: address review comments for 0423-build_agent_tarball_dev_sync
* chore(build-tools): remove dev cache sync alias
Replace the podman-based runtime with nerdctl running inside the Lima
VM introduced in the previous commit. OpenClaw is cut over to the new
VM-backed container runtime; legacy podman code paths are removed.
- New container CLI (lib/container): nerdctl ContainerCli, ImageLoader
with cache-tarball fallback, shared types
- OpenClaw: container-runtime-factory orchestrates VM lifecycle + gateway
startup; container-runtime.ts rewritten to speak nerdctl; Linux test
startup kept disabled behind the factory
- Terminal: session + routes moved onto Lima shell transport; server
wires the VM-backed runtime via main.ts
- Agent UI: simplify AgentsPage/useOpenClaw after route consolidation
- Remove podman-runtime, podman-overrides, and their tests
- Tests: container-cli, image-loader, container-runtime-factory, and
updated openclaw/terminal/main suites
Introduce a new VM runtime layer using Lima for running containerised
workloads on macOS. Lifecycle covers decompress/create/start/stop with
stubs for upgrade/reset plus version-mismatch warnings.
- Foundation modules: paths, errors, manifest, telemetry
- lima.yaml generator + typed limactl wrapper with structured debug logging
- ssh ControlMaster transport for fast in-VM commands
- Ubuntu 24.04 minimal template, containerd default, 30GiB overlay disk
- browseros-dir helpers (getLimaHomeDir, getVmStateDir, getVmDisksDir);
OpenClaw dir moves into VM state dir
- Test helpers (fake-limactl, fake-ssh, test-env), vm-smoke integration
coverage, NODE_ENV propagation for spawned server test groups
* refactor(openclaw): rename http chat client to http client
Session history is about to land on the same HTTP client. 'Chat client'
will no longer describe it, so rename the class, file, and service field
up front. No behavior change.
* feat(openclaw): add session history fetch + sse stream to http client
Adds getSessionHistory (JSON) and streamSessionHistory (SSE) to the
OpenClaw HTTP client. Both target GET /sessions/<key>/history on the
loopback gateway, reusing the same bearer-token auth as streamChat.
- 404 from the gateway surfaces as OpenClawSessionNotFoundError so
callers can map it to a typed HTTP status.
- The SSE path parses named 'history', 'message', and 'error' events
into a typed OpenClawSessionHistoryEvent union.
- AbortSignal propagates to fetch and cancels the reader mid-stream.
* feat(openclaw): expose session history over GET /claw/session/:key/history
Wire the new getSessionHistory / streamSessionHistory service methods
through a route that defaults to JSON and upgrades to SSE when the
client sends Accept: text/event-stream.
- OpenClawSessionNotFoundError lives in errors.ts alongside the other
OpenClaw errors so routes can import it from one place.
- The route propagates c.req.raw.signal into streamSessionHistory so
client disconnects cancel the upstream fetch.
- Route tests cover the JSON path (with query param forwarding), the
404 path, and the SSE framing.
* chore(openclaw): drop NaN from session history route limit param
Seeds ~/.browseros-dev/cache/vm/ from ./dist/ without touching R2, so
devs can test the server against a freshly-built tarball before anything
is published to cdn.browseros.com. Hardcodes arm64 since all devs are on
Apple Silicon; refuses to run unless NODE_ENV=development; idempotent
(skips copy on sha256 match).
Also fixes the R2_BUCKET default in .env.sample from browseros-artifacts
to browseros to match the actual bucket.
* feat(build-tools): add Lima template for BrowserOS VM
* feat(build-tools): remove build-disk pipeline and recipe directory
Task 2 verification removed the scripts, recipe directory, workflow, and package scripts. Typecheck remains green here because manifest disk fields are removed in the next task, so the plan's expected missing-import failure does not apply yet.
* feat(build-tools): rename VmManifest to AgentManifest, drop disk fields
* feat(build): stage Lima template into server resources
Verified local-resource staging with: bun scripts/build/server.ts --target=darwin-arm64 --ci. The template was copied to dist/prod/server/darwin-arm64/resources/vm/browseros-vm.yaml and included in the zip. bun run build:server:test still fails on the pre-existing R2 limactl resource with: The specified key does not exist.
* docs(build-tools): Lima template dev loop + record D9
Updated the build-tools README in this worktree. Also recorded D9 in the canonical external spec file at /Users/shadowfax/llm/code/browseros-project/grove-ref/browseros-main/specs/decisions.md, which is outside this git checkout.
* chore(build-tools): sweep orphaned references to retired disk pipeline
* chore: self-review fixes
* feat(vm-container): ship the WS1 VM disk image pipeline
New Bun/TS workspace package @browseros/vm-container that produces a
reproducible, versioned Debian 12 + Podman qcow2 disk image for arm64 and
x64, and publishes it to Cloudflare R2 under vm/<version>/ with a per-
version manifest.json and a latest.json pointer.
- virt-customize-driven build with a git-tracked recipe DSL.
- zstd-compressed artifacts; sha256 sidecars for compressed + uncompressed.
- Public surface at @browseros/vm-container/schema exposes zod-validated
VmManifest + R2 key helpers for WS4 to import; /download is a stub
landing pad for WS4 to fill in.
- Rollback on partial upload failure: any exception after the first
successful put deletes all previously uploaded keys for that version.
- GHA workflow build-vm-container.yml runs a matrix build per arch on
native runners, an x64 Lima boot smoke test, and a gated publish job.
- Full unit coverage for arch, r2-keys, manifest, recipe parser, and
publish (rollback + happy path via aws-sdk-client-mock).
* fix(vm-container): address review comments
- Split buildDisk into prepareCustomizedDisk + finalizeArtifacts for
testability.
- Replace resolvePinnedSha's sentinel-prefix check with a positive
sha256-hex regex test, switch base-image.ts placeholder to empty string.
- Drop unused R2_VM_PREFIX from .env.example; document CDN_BASE_URL
override precedence in README.
- Replace SSH host-key explicit list in recipe with `ssh_host_*` glob so
.pub keys and future key types are also removed.
- lima-boot: introduce BunRequestInit type for the unix fetch option and
reject empty limactlPath loudly.
- Extend publish test suite: mid-manifest-upload failure path verifies
both arches' qcow+sha are rolled back and latest.json is never written.
- Add missing tests: parseArch('ARM64') case-sensitivity rejection,
composeVirtCustomizeArgv unresolved-substitution pass-through.
* fix(vm-container): pin a real Debian snapshot, switch verify to SHA-512, streaming download
- Pin Debian base to bookworm/20260413-2447 with real SHA-512 values
from upstream SHA512SUMS (the sentinel placeholder never corresponded
to a real build). Debian cloud images only publish SHA512SUMS today,
so switch base-image verification to SHA-512 throughout: rename
BaseImage.sha256 → sha512, manifest field base_image_sha256 →
base_image_sha512, base_image.sha256_url → sha512_url,
debianSha256SumsUrl → debianSha512SumsUrl. Our own artifact hashes
(compressed_sha256, uncompressed_sha256, recipe_sha256) stay SHA-256.
- Fix downloadTo: previous Bun.write(dest, response) buffered the
entire 300 MB response before writing (100% CPU, empty dir). Replace
with a getReader() loop that streams chunks through Bun.file().writer().
- build CLI now auto-derives --version from today's date when omitted
(defaults to YYYY.MM.DD-dev1); explicit --version still overrides.
Broaden CALVER_REGEX to accept alphanumeric suffixes so -dev1/-rc1
tags are valid. New todayCalver() helper.
- Update GHA workflow fallback to github.run_number (shorter) instead
of run_id.
* fix(vm-container): resolve copy-in paths against recipeDir after substitution
The copy-in path resolver checked op.src.startsWith('/') before running
the {placeholder} substitution, so an absolute-after-substitution path
like {manifest_tmp} → /tmp/vm-dist/manifest-stub-arm64.json was treated
as relative and joined against recipeDir, producing a nonexistent path.
Check the *substituted* value for absoluteness via path.isAbsolute.
* fix: address review comments for 0422-ws1_vm_disk_pipeline
* fix(ci): repair vm-container workflow
* fix(ci): expose vm build logs on failure
* fix(vm-container): expose base_image_sha256 in manifest per PRD
The published manifest contract (consumed by WS4) now uses base_image_sha256
as the PRD specified. Internally the build still verifies the downloaded
Debian base against the pinned sha512 (that's what Debian actually signs in
SHA512SUMS) — then hashes the same bytes as sha256 and records that in the
manifest. One extra digest pass of a ~300 MB file; negligible.
- manifest.json: base_image_sha256 replaces base_image_sha512; sha512_url
removed (not needed — sha256 is the consumer-facing hash).
- CLI: --base-image-sha256 override validates against the locally-computed
sha256 after download.
- BuildResult.baseImage gains sha256 alongside sha512.
- Tests updated to the new field.
The auth.json bug (reviewer #2) is resolved: the source file is
recipe/auth.json and the recipe emits `copy-in auth.json:/etc/containers/`
so libguestfs writes /etc/containers/auth.json.
* ci(vm-container): fix supermin kernel-read + rename sha512 inputs to sha256
- Ubuntu 24.04 GHA runners ship /boot/vmlinuz-* as mode 0600, which blocks
libguestfs's supermin appliance builder when virt-customize runs as a
non-root user. Chmod 0644 before the build — canonical CI workaround.
- Rename workflow_dispatch input base_image_sha512 → base_image_sha256
and CLI flag --base-image-sha512 → --base-image-sha256 to match the
orchestrator's renamed override.
* ci(vm-container): give runner KVM access + install passt for libguestfs
The supermin fix got us past appliance-build, but virt-customize then hit
"passt exited with status 1". The passt networking helper misbehaves when
libguestfs falls back to TCG emulation, which happens because the runner
user isn't in the kvm group even though /dev/kvm exists on the GHA host.
- chmod 0666 /dev/kvm → libguestfs uses hardware acceleration, avoids TCG.
- install passt explicitly so the networking helper is present and current.
* ci(vm-container): disable passt to force libguestfs slirp fallback
libguestfs 1.54+ prefers passt for guest networking, but the passt binary
on GHA ubuntu-24.04 exits with status 1 when invoked from the appliance
— an AppArmor/capability issue that doesn't surface a useful diagnostic.
The reliable workaround is to remove passt so libguestfs picks QEMU's
built-in user-mode SLIRP as the network backend. SLIRP is slower but
functional and doesn't require escalated privileges.
- Guard uploaded_keys append with !dry_run so the rollback list
never contains keys for objects that were never written.
- Prefer GITHUB_ACTOR over local OS username for manifest.uploaded_by;
manifest.json is CDN-fronted so leaking a developer's login is
unnecessary (falls back to 'local').
- Extend test_windows_has_no_stale_third_party to cover bun.exe/rg.exe
too, matching the macOS forbidden-set pattern.
* feat(build): swap podman server resources for Lima (WS3)
- Upload limactl (arm64 + x64) to R2 via new 'browseros upload lima' CLI.
- Rewrite scripts/build/config/server-prod-resources.json: 2 Lima entries,
12 podman-family entries removed.
- Update codesign metadata (server_binaries.py) to add limactl, drop podman
family. Sign modules need no edits (data-driven).
- Delete orphaned podman-{vfkit,krunkit} entitlement plists.
- Release-gating note in browseros-agent/CLAUDE.md: don't cut releases off
dev between this commit and WS6 landing (OpenClaw still invokes podman).
* fix: address review comments for 0422-ws3_lima_resources
- Tighten _find_limactl_member to match exactly .../bin/limactl via
Path.parts, avoiding incidental matches like 'xbin/limactl'.
- Fall back USER -> USERNAME -> 'unknown' for uploaded_by so Windows
shells don't all record 'unknown'.
- Comment the broad except in upload_lima to explain why rollback
must fire for any mid-loop failure.
* chore: drop bun + rg from Windows sign list
These executables are already absent from server-prod-resources.json (no
Windows entries shipped); keeping them in the sign list produces
"Binary not found" warnings on every Windows build.
* feat(openclaw): dynamically allocate and persist gateway host port
The gateway container always listens on OPENCLAW_GATEWAY_CONTAINER_PORT
(18789) internally, but that port may be taken on the user's host. Allocate
a free host port on each lifecycle transition, persist it to
~/.browseros/openclaw/.openclaw/runtime-state.json, and prefer the
persisted value on subsequent starts so the mapping is stable.
Split the naming so the two sides of the -p mapping are no longer
ambiguous: the shared constant becomes OPENCLAW_GATEWAY_CONTAINER_PORT
and the service/spec/chat-client/runtime probes all use hostPort for
the mapped host-side port.
* fix(openclaw): remove duplicate Podman overrides card from status panels
* feat(openclaw): user-supplied Podman binary path override
Expose the existing `configurePodmanRuntime({ podmanPath })` knob as a UI
input on the Agents page so users blocked by the bundled gvproxy helper
discovery bug can install their own Podman (e.g. `brew install podman`)
and point BrowserOS at it.
- podman-overrides.ts: persist {podmanPath} at ~/.browseros/.openclaw/
- openclaw-service: applyPodmanOverrides/getPodmanOverrides, rebuilds
ContainerRuntime + CLI clients in place (no server restart needed)
- routes: GET/POST /claw/podman-overrides with absolute-path + existsSync
validation
- main: load override on boot, pass resourcesDir into the service so
clearing the override restores bundled fallback
- AgentsPage: PodmanOverridesCard rendered inline in the degraded /
uninitialized / error cards and as a collapsible standalone section
Dev mode is unchanged; prod gets the same lever dev has had all along.
* refactor(openclaw): address review comments for podman-path override
- extract getPodmanOverrideValidationError() to mirror the existing
getCreateAgentValidationError() pattern in openclaw.ts
- extract rebuildRuntimeClients() so applyPodmanOverrides doesn't
re-spell the three-step runtime/CLI-client reinit
- rename shadowing local path -> overridesPath in loadPodmanOverrides
* fix(openclaw): clear gateway log tail before swapping runtime
rebuildRuntimeClients replaces this.runtime but the cached stopLogTail
still closes over the old runtime's log-tail process. The existing
guard in startGatewayLogTail (if (this.stopLogTail) return) would then
short-circuit the next restart and leave the new runtime without a
tail. Clear it inside the helper so the rebuild is self-consistent
regardless of caller order.
* fix(openclaw): check podmanPath executability and note singleton mutation
- validator: after existsSync, accessSync(X_OK) so a non-executable file
fails fast at save time with a clear 400 instead of a cryptic spawn
error later. Added a matching route test.
- applyPodmanOverrides: one-line comment flagging the intentional
module-level PodmanRuntime singleton mutation so future readers know
this is by design, not an accident.
* fix: run full browseros-agent test suite
* fix: stabilize server test reporting in CI
* fix: address PR review feedback
* refactor: extract server core test runner
* refactor: group server tests by filesystem
* fix: align CI suites with server test groups
* fix: provision server env for all CI suites
* fix: stabilize ci checks
* fix: report real test counts in ci
* feat(openclaw): add CLI client
* fix(openclaw): swap service to cli client
* fix(openclaw): restore mixed json parsing
* fix(openclaw): validate agent list payloads
* fix(openclaw): simplify cli client boundary
* fix(openclaw): simplify cli client boundary
* fix(openclaw): prefer outer config json payloads
* fix(openclaw): ignore trailing config log payloads
* refactor(openclaw): bootstrap config through cli
* fix(openclaw): narrow bootstrap ownership
* fix(openclaw): avoid noop key restarts
* fix(openclaw): enforce supported provider sync
* refactor(openclaw): remove agent role contract
* fix(openclaw): migrate legacy state and apply model updates
* fix(openclaw): migrate legacy agent state
* fix(openclaw): harden state updates
* refactor: stabilize local OpenClaw bootstrap and chat auth
* fix(openclaw): propagate container env and drop legacy paths
Compose now loads provider creds from .openclaw/.env and passes the
gateway token through, so in-container CLI commands (tui, doctor,
config) authenticate correctly and the gateway process sees
OPENROUTER_API_KEY. Service ensures the state env file exists and
rewrites the compose env with the token before composeUp in setup,
start, and tryAutoStart. Podman machine gets larger defaults and the
container enables NODE_COMPILE_CACHE + OPENCLAW_NO_RESPAWN. Legacy
state migration, the unused WebSocket gateway-client, memorySearch,
and thinking defaults are removed.
Introduces release.macos.arm64.yaml for single-architecture arm64
macOS release builds. Mirrors the windows/linux single-arch pattern
(configure -> compile -> sign_macos -> package_macos -> upload),
skipping the universal_build module to avoid the x64 cross-compile
and lipo merge. Reuses the sparkle_setup step and the same
notarization env vars as the universal macOS config.
* feat(ota): bundle full server resources tree (server + third_party bins)
The OTA Sparkle payload now ships the complete resources/ tree the agent
build produced, not just browseros_server. Every third-party binary (bun,
ripgrep, podman, gvproxy, vfkit, krunkit, podman-mac-helper, win-sshproxy)
flows to OTA-updated installs so podman integration works for users on the
OTA channel, matching fresh Chromium-build installs.
Extract the per-binary sign table into build/common/server_binaries.py so
the Chromium-build sign path (modules/sign/) and OTA sign path (modules/ota/)
share a single source of truth. Adding a new third-party dep is now a
one-file edit that both paths pick up automatically; unknown executables
under resources/bin/ are a hard error at release time.
* fix(ota): address review comments on bundle signing flow
- Avoid double-zipping during notarization: add notarize_macos_zip for
pre-built Sparkle bundles so notarytool submits the zip directly
instead of re-wrapping it through ditto --keepParent (Apple's service
does not descend into nested archives). Keep notarize_macos_binary for
single-binary callers. Share credential setup + submit logic via
internal helpers.
- Fail fast on unknown executables in sign_server_bundle_macos: collect
the unknown-files list before any codesign call so a missing shared-
table entry aborts in seconds, not after a full signing round.
- Drop dead get_entitlements_path helper (no callers remain after the
bundle refactor).
* fix(ota): address PR review comments (greptile + claude)
- sign_server_bundle_macos filters to executables only (p.is_file() +
not p.is_symlink() + os.access X_OK) before applying the unknown-file
guard. Non-Mach-O files (configs, dylibs, etc.) under resources/bin/
no longer cause misleading 'unknown executable' hard failures.
- sign_server_bundle_windows now hard-errors on a missing expected
binary instead of silently skipping it. Symmetric with the macOS
guard — an incomplete bundle must not publish.
- ServerOTAModule.execute() uses tempfile.TemporaryDirectory context
managers for both the download and staging roots so they are cleaned
up on every path, including failures.
- Per-platform sign/notarize/Sparkle-sign failures now raise RuntimeError
instead of silently skipping the platform — a release pipeline can no
longer omit a target while reporting success.
- Move import os and import shutil to the top of ota/sign_binary.py.
- Drop unused log_error import from ota/server.py.
* chore: bump server
* fix(ci): add PR comment with test summary and block on failure
Add a `comment` job to the test workflow that parses JUnit XML artifacts
and posts a sticky PR comment showing pass/fail counts per suite, with
failed test names listed in a collapsible section and a link to the run.
Guards against fork PRs (read-only token) and stale overlapping runs
(skips comment if PR head has moved past our SHA).
* fix(ci): use payload SHA for staleness check, handle missing artifacts
- Replace context.sha (merge commit SHA) with
context.payload.pull_request.head.sha so the staleness guard
compares the correct values and the comment actually gets posted
- Add continue-on-error to download-artifact so cancelled runs
gracefully fall through to the "no test results" message
* fix(ci): show warning icon for zero-test suites instead of failure
* fix: isolate ACL semantic tests from Bun teardown crash
* fix: time out ACL semantic fixture subprocess
* fix: run full root test suite and repair sdk browser context
* fix: address PR review comments for 0415-fix_all_tests_and_issues
* test: temporarily skip sdk suite
* test: clarify sdk suite disable message
Pre-kill BrowserOS processes whose --user-data-dir path contains the
browseros-test- prefix before each spawnBrowser, and in the test:cleanup
hook. This prevents a crashed prior test run from leaving a headless
BrowserOS attached to a stale port, without touching the developer's
regular BrowserOS.app instance (its user-data-dir is
~/Library/Application Support/BrowserOS, which does not match).
OpenRouter's public model slugs use dots in version numbers
(e.g. `anthropic/claude-haiku-4.5`), but openclaw's model registry only
recognises the dashed form (`claude-haiku-4-5`). Passing the dotted form
makes openclaw's registry lookup miss silently — the agent turn completes
with `stopReason=stop payloads=0` and the UI shows no reply. Rewrite dots
to dashes in the model portion for openrouter providers only so
copy-pasted OpenRouter slugs resolve correctly.
Also, in development mode:
- Inject `logging.level: debug` into generated openclaw.json so the
gateway emits debug-level entries to its file log.
- Patch an existing openclaw.json on start/restart so already-provisioned
users pick up the debug setting without a reset.
- Tail the gateway container's logs into the browseros server logger so
they appear in the same stream as the rest of dev output.
* refactor: remove redundant context-overflow middleware
The middleware caught provider overflow errors and re-tried with a
naive prompt truncation, but its `nonSystem.slice()` had no awareness
of tool_use/tool_result pairing — a cut between an assistant tool_use
and the matching tool_result produces an orphaned tool_use that
providers reject with a different error.
Compaction (`createCompactionPrepareStep`) already handles this safely:
`findSafeSplitPoint` walks past tool messages to preserve pair
integrity, and the pipeline (strip binary → prune → reduce outputs →
LLM summarize → sliding window) handles every overflow path before
the request leaves the agent.
Drops 426 lines: the middleware itself, its wiring in ai-sdk-agent,
and the matching test block + helpers in compaction.test.ts.
* docs: document BROWSEROS_AI_SDK_DEVTOOLS in .env.example
Surfaces the opt-in dev flag so contributors know it exists. Captures
every LLM call to .devtools/generations.json for post-hoc inspection.
* chore: add auctor configuration
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add project-level Claude Code skills for team
Adds 14 development workflow skills (brainstorming, planning, debugging,
TDD, code review, subagent-driven development, etc.) to .claude/skills/
so all team members get them automatically on pull.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The typecheck and compile scripts failed on fresh checkouts with
TS5083 because tsconfig.json extends .wxt/tsconfig.json, which is
gitignored and only generated by 'wxt prepare'. Run wxt prepare
before tsgo so the extended config and wxt.d.ts are always in place.
Expose the 7 Klavis Strata MCP tools as CLI subcommands under
`browseros-cli strata`, so CLI users (claude-code, gemini-cli) can
discover and execute actions on 40+ external services.
Commands: check, discover, actions, details, exec, search, auth.
Includes discovery flow guidance in help text, integration tests,
and an "Integrations:" group in the root help output.
Agents connecting over MCP URL/CLI (like claude-code) had no way to know
which Klavis connectors were available or authenticated, causing them to
fall back to browser automation. This adds a connector_mcp_servers tool
that checks connection status and returns an auth URL when needed.
* fix(openclaw): compose file path after service dir move, loopback auth fallback
- Fix COMPOSE_RESOURCE path: services moved to api/services/openclaw/
so the relative path needs one more parent directory traversal
- Fix requireTrustedAppOrigin middleware: Chrome extensions cannot set
the Origin header (forbidden header name). When Origin is absent,
fall back to checking the Host header is a loopback address. The
server only binds to loopback so only local processes can reach it.
Requests with an explicit non-trusted Origin are still rejected.
* fix: request header check
* chore: remove setup openclaw button
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
Move openclaw/ and terminal/ service modules from src/services/ into
src/api/services/ so all server-side services live in one directory
alongside chat-service, klavis, mcp, and sdk. Update relative imports
in moved files and all callers.
- Add tool approval system with per-category approval configuration
- Build unified Governance dashboard (renamed from Admin) with pending
approvals view and execution audit log
- Move execution history tracking into the app shell
- Extract buildChatRequestBody helper and add newtab system prompt
- Add approval config change detection for mid-conversation rebuilds
* feat: add ACL rules for per-site element-level agent restrictions
Implement Access Control List (ACL) rules that let users block the agent
from interacting with specific elements on specific websites. Rules are
defined in a new Settings > ACL Rules page and enforced server-side in
executeTool() before any input tool handler runs.
- Shared ACL types and site pattern matching (packages/shared)
- Extension storage, settings UI with rule cards and add dialog
- Server-side guard in executeTool() checking tool+page+element
- Browser class extensions for element property resolution via CDP
- Visual overlay injection (red "BLOCKED" mask) via Runtime.evaluate
- Rules transported in chat request body alongside declinedApps
* fix: address review comments for ACL rules
- Add selector-to-property matching in matchesElement (tag, id, class)
- Remove scroll from guarded tools set (read-like action)
* fix: ACL site pattern matching fails on multi-segment URL paths
The glob-to-regex conversion used [^/]* for wildcard (*) which only
matches a single path segment. "*.amazon.com/*" failed to match
"www.amazon.com/cart/smart-wagon" because the trailing * couldn't
cross the slash between "cart" and "smart-wagon".
Fix: Split URL matching into hostname vs path parts. Path wildcards
now use .* to match across slashes. Also add simple domain matching
so users can just type "amazon.com" instead of "*.amazon.com/*".
* fix: wire up ACL overlay injection after take_snapshot
applyAclOverlays was defined but never called. Now triggers after
take_snapshot completes on pages matching ACL rules, so the agent
sees red "BLOCKED" overlays on restricted elements.
* refactor: rework 0326-acl_rules based on feedback
* feat(openclaw): add foundation — paths constant, browseros-dir helper, static compose file
Add OPENCLAW_DIR_NAME to shared paths constant, getOpenClawDir() to
browseros-dir.ts, and a static docker-compose.yml resource file that
uses native .env variable substitution instead of YAML template strings.
* feat(openclaw): add PodmanRuntime container engine abstraction
Manages Podman CLI interactions: machine lifecycle (init/start/stop),
availability checks, command execution with streaming output, and
running container enumeration. Linux skips machine ops since Podman
runs natively.
* feat(openclaw): add config builder and container runtime
openclaw-config.ts: pure functions to build openclaw.json and .env files
from BrowserOS settings. Maps provider keys, sets permissive defaults
(full exec, cron, web search, MCP bridge to BrowserOS).
container-runtime.ts: compose-level abstraction over PodmanRuntime for
the browseros-openclaw project. Handles up/down/restart/pull, health
checks, .env file writes, and safe machine shutdown.
* feat(openclaw): add OpenClawService orchestrator
Main service managing the single OpenClaw container. Handles full
lifecycle (setup/start/stop/restart/shutdown), agent CRUD with config
rewrites and gateway restarts, chat proxy to /v1/chat/completions,
provider key updates, auto-start on BrowserOS boot, and status reporting.
* feat(openclaw): add API routes and server wiring
Add /api/claw/* routes for container lifecycle (setup/start/stop/restart),
agent CRUD (list/create/delete), chat proxy with SSE streaming, provider
key management, and log retrieval. Register routes in server.ts, add
OpenClaw auto-start on BrowserOS boot and graceful shutdown in main.ts.
* fix(openclaw): resolve type errors in service and podman runtime
Fix TIMEOUTS.TOOL_EXECUTION → TIMEOUTS.TOOL_CALL to match shared
constants. Fix ReadableStream undefined/null type mismatch in
PodmanRuntime.runCommand stream draining.
* feat(openclaw): add agents page UI with chat, create, and lifecycle controls
Add /agents route with AgentsPage showing OpenClaw status, agent list,
create dialog, and per-agent chat. Includes useOpenClaw hook for
server communication, AgentChat component with SSE streaming, and
sidebar navigation entry.
* feat(openclaw): add provider selector to setup flow
Add LLM provider selector using useLlmProviders hook. Filters out
OAuth-only providers, pre-selects the user's default, and passes
providerType/apiKey/modelId to the setup endpoint so OpenClaw gets
a working LLM configuration on first setup.
* feat(openclaw): per-agent provider selection
Each agent can now have its own LLM provider. The Create Agent dialog
includes a provider selector that passes providerType/apiKey/modelId
to the backend. The service writes per-agent model config to
openclaw.json and merges the API key into the container's .env file.
* fix(openclaw): write gateway auth token to openclaw.json
The gateway was returning 401 because auth.mode was set to "token"
without providing the actual token value. Now the token is written
to gateway.auth.token in openclaw.json so the gateway and our chat
proxy agree on the same token.
* feat(openclaw): add GatewayClient WebSocket RPC client
Persistent WS client for the OpenClaw Gateway protocol. Handles the
challenge → connect → hello-ok handshake (as openclaw-control-ui with
operator.admin scope), JSON-RPC with pending map + timeouts, and
auto-reconnect. Exposes typed methods for agents.list, agents.create,
agents.delete, and health.
* refactor(openclaw): simplify config to bootstrap-only, add /readyz health
Config no longer contains agents.list — agent CRUD is handled via WS RPC.
buildOpenClawConfig → buildBootstrapConfig, removed makeAgentEntry and
AgentEntry (agents managed by OpenClaw runtime). Added isReady() and
waitForReady() using /readyz for gateway readiness checks.
* refactor(openclaw): agent CRUD via WS RPC, per-agent chat targeting
Replace JSON mutation + restart with GatewayClient WS RPC calls for
agents.create, agents.delete, agents.list. Chat proxy now uses
model: "openclaw/<agentId>" for per-agent targeting. Setup writes
bootstrap config once then creates "main" agent via WS after gateway
starts. Container restarts only when a new provider env var is added.
* fix(openclaw): use agentId field in setup response mapping
Fix type error: GatewayAgentEntry uses agentId not id.
* fix(openclaw): log service progress through server logger
* feat(openclaw): WS streaming, device auth, MCP port fix (#687)
* feat(openclaw): WS streaming, device auth, MCP port fix
- Fix GatewayClient WS handshake: add Ed25519 device identity signing,
Origin header, mode: cli (mode: ui requires device identity always)
- Add auto device pairing flow: generate client identity, attempt WS
connect (triggers pending), approve via openclaw CLI, reconnect
- Replace HTTP /v1/chat/completions proxy with WS-based streaming that
surfaces tool calls, thinking blocks, and text deltas
- Add chatStream() to GatewayClient returning ReadableStream of typed
OpenClawStreamEvent (text-delta, thinking, tool-start/end, lifecycle)
- Update chat route to stream WS events as SSE to the extension
- Pass actual server port to OpenClaw config (fixes MCP bridge in dev)
- Rewrite AgentChat.tsx with turn-based model using Message/MessageContent
components matching sidepanel pattern, with tool batching logic that
groups consecutive tools and breaks on text/thinking (same as sidepanel)
- Add execInContainer() to ContainerRuntime for CLI commands
- Fix gateway response field mapping (id→agentId, agents.list/create)
- Skip creating main agent if gateway auto-creates it
* fix(openclaw): retry WS connect on signature expired (Podman clock skew)
Podman VM clock drifts when Mac sleeps, causing Ed25519 signature
validation to fail with "device signature expired" on auto-start.
Add connectGatewayWithRetry() that restarts the container (resyncs
clock) and re-approves the device if needed.
* fix(openclaw): address PR review — stream cleanup, error handling
- Fix silent catch in setup(): only swallow "pairing required" and
"signature expired" errors, re-throw everything else
- Guard JSON.parse in approvePendingDevice(): check exit code and
wrap parse in try/catch with descriptive error messages
- Add try/finally in chat SSE route: reader.cancel() on disconnect
- Add cancel callback to chatStream ReadableStream: restores
ws.onmessage when stream is cancelled (prevents handler leak)
---------
Co-authored-by: shivammittal274 <56757235+shivammittal274@users.noreply.github.com>
* fix: enable agent interaction with elements inside iframes
Fetch accessibility trees from all frames via Page.getFrameTree() +
per-frame Accessibility.getFullAXTree(frameId), so iframe elements
appear in snapshots with valid backendNodeIds. Pages without iframes
take the original single-call path with zero overhead.
Update snapshot tree builders to walk multiple RootWebArea roots from
merged multi-frame trees. Extract same-origin iframe content in the
markdown walker; show [iframe: url] placeholder for cross-origin.
* fix: namespace AX nodeIds by frameId to prevent cross-frame collisions
CDP AXNodeId values are frame-scoped — each frame's accessibility tree
starts its own counter from 1. Prefix nodeId and childIds with frameId
before merging so the nodeMap in snapshot builders never overwrites
nodes from a different frame.
* docs: add uBlock Origin install info to getting started and ad-blocking pages
Chrome dropped support for the full uBlock Origin extension — highlight
that BrowserOS brings it back and make it easy to install from both the
getting started guide and the dedicated ad-blocking page.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: revert Kimi partnership UI, restore daily limit survey
Remove Kimi/Moonshot AI partnership branding from the rate limit
banner, provider card, provider templates, and LLM hub. Restore
the original survey CTA on daily limit errors. Moonshot AI remains
as a regular provider template without the "Recommended" badge.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address Greptile review comments
- Guard survey CTA with !isCreditsExhausted to avoid showing it for
credits-exhausted users who already see "View Usage & Billing"
- Remove dead kimi-launch feature flag files (kimi-launch.ts,
useKimiLaunch.ts)
- Remove unused KIMI_RATE_LIMIT analytics events
- Remove VITE_PUBLIC_KIMI_LAUNCH from env schema and .env.example
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The merged PR (#661) injected custom entries into filteredModels, but
cmdk auto-scrolls to its first selected CommandItem, pushing the custom
entry out of view. Fix by using forceMount on a separate CommandGroup
and resetting scroll to top on every keystroke via requestAnimationFrame.
* feat: show custom model ID as first option in model selector
When typing in the model dropdown, the user's exact input now appears as the
first selectable row, followed by fuzzy search suggestions. This makes entering
custom model IDs intuitive — previously the option was hidden behind a
zero-results-only Enter shortcut that fuzzy search almost always prevented.
* fix: correct is_custom_model flag and prevent duplicate analytics events
- Use modelInfoList check instead of hardcoding is_custom_model: true in
the Enter key handler
- Add stopPropagation to prevent cmdk's root keydown handler from also
firing onSelect, which caused duplicate MODEL_SELECTED_EVENT emissions
* fix: install linux sysroot in configure, not via gclient hook
`gn gen` was failing on the arm64 leg with `Missing sysroot
(//build/linux/debian_bullseye_arm64-sysroot)`. The previous design
relied on `git_setup` writing `target_cpus` to `.gclient` so that
`gclient sync`'s DEPS hook would download the cross-arch sysroot. That
chain breaks for any chromium_src that was synced before cross-arch
support landed (the hook is gated on .gclient state at sync time) and
for partial pipeline runs that skip git_setup entirely. Nothing in
configure declared or verified its sysroot precondition.
Make configure self-healing: on Linux, invoke
`build/linux/sysroot_scripts/install-sysroot.py --arch=<target>`
directly before `gn gen`. install-sysroot.py is idempotent (stamp file
+ SHA check), fast when already installed, and decoupled from .gclient
— it's exactly what the failing assertion's error message recommends.
The script accepts our arch names directly: `x64` translates to `amd64`
internally via ARCH_TRANSLATIONS, and `arm64` is a valid pass-through.
Also temporarily pin release.linux.yaml to x64 only while we validate
the sysroot bootstrap end-to-end. Flip back to `[x64, arm64]` once
arm64 is green.
* chore: pin release.linux.yaml to arm64-only for sysroot bootstrap test
x64 already builds cleanly — the failing leg is arm64 cross-compile from
an x64 host. Pin the config to arm64 to exercise the new
install-sysroot.py path in configure without burning time on x64.
Flip back to [x64, arm64] once arm64 is green.
* feat(server): cache klavis createStrata to unblock /chat hot path
Conversation creation in /chat was blocking on a Worker-proxied
klavisClient.createStrata round-trip every time the user had any
managed Klavis app connected. The 5s KLAVIS_TIMEOUT_MS in the
ai-worker proxy existed specifically to bound this latency, but
the same cap also caused user-visible 504s on /klavis/servers/remove
since Strata DELETE operations routinely take >5s. Without caching
we couldn't raise the timeout without regressing chat creation.
This adds an in-process cache for Strata createStrata responses,
keyed by (browserosId, hashed sorted-server-set) and gated by a 1h
TTL. The cache stores only immutable JSON metadata (strataServerUrl,
strataId, addedServers); per-session MCP clients continue to be
opened and disposed by AiSdkAgent exactly as before, which keeps
the cache concurrency-safe by construction.
Cache invalidation has two layers: (a) the cache key embeds the
server set, so adding/removing apps naturally produces a different
key; (b) POST /klavis/servers/add and DELETE /klavis/servers/remove
explicitly call invalidate(browserosId) after their underlying
Klavis API call succeeds, as defense-in-depth.
Other changes:
- Consolidates klavis-related services into a new
apps/server/src/api/services/klavis/ directory; moves
register-klavis-mcp.ts -> strata-proxy.ts and adds strata-cache.ts
there. lib/clients/klavis/ stays unchanged.
- Refactors KlavisClient.removeServer into a low-level
deleteServersFromStrata(strataId, servers) primitive. The
cache-lookup + delete + invalidate orchestration moves up into
routes/klavis.ts where it belongs, eliminating the lib->api
layering inversion the original removeServer would have introduced.
- Uses Bun.hash (xxhash64) for fixed-width 16-hex-char keys, with
serverKey verified on read to make collision risk strictly zero.
- Dedupes concurrent fetches via in-flight Promise sharing, with
identity-checks before delete to avoid races between invalidate()
and a racing replacement insert.
Follow-up (separate PR): bump KLAVIS_TIMEOUT_MS to 30000 in
ai-worker/wrangler.toml so /klavis/servers/remove stops 504-ing.
* fix: address greptile review comments for klavis strata cache
- Drop dead `invalidated` field on InflightEntry. It was added to
support a "discard post-resolution if invalidated" check that I
later replaced with identity-checked deletes during self-review,
but I forgot to remove the field and the misleading comment
referencing it. Simplify Map<string, InflightEntry> to plain
Map<string, Promise<CacheEntry>>.
- Lower cache miss log from info to debug. Misses fire on every new
conversation; matching the existing debug-level for hits.
- Stop routing the /klavis/servers/remove handler through
klavisStrataCache.getOrFetch. The chat hot path keys its cache by
the user's full enabled-server set (e.g. hash('Gmail,Linear')),
so a single-server lookup here (hash('Gmail')) is guaranteed to
miss, write a spurious entry, and then have it immediately
cleared by invalidate() on the next line. Call createStrata
directly to recover the strataId, mirroring the original
removeServer flow.
`release.linux.yaml` now declares `architecture: [x64, arm64]` and the
runner loops the entire pipeline once per architecture. depot_tools
fetches both Linux sysroots automatically — `git_setup` idempotently
ensures `target_cpus = ['x64', 'arm64']` is in `.gclient` before
`gclient sync`, so cross-compiling arm64 from an x64 host just works.
The resolver returns `List[Context]` (single-element for the common
single-arch case), and `build/cli/build.py` loops `execute_pipeline` over
the per-arch contexts. Modules stay 100% arch-agnostic — no new
orchestration module, no new YAML schema beyond the list form.
Also fix a cross-compile bug in `build/modules/package/linux.py`: the
appimagetool binary must match the BUILD machine's arch (it executes
locally), not the target arch. Split into a host-keyed
`LINUX_HOST_APPIMAGETOOL` lookup vs the existing target-keyed
`LINUX_ARCHITECTURE_CONFIG`. Target arch is still passed to appimagetool
via the `ARCH` env var.
- build/common/resolver.py: scalar OR list `architecture` -> List[Context]
- build/cli/build.py: loop pipeline per arch, log multi-arch headers
- build/config/release.linux.yaml: `architecture: [x64, arm64]`
- build/modules/setup/git.py: idempotent `target_cpus` edit on Linux
- build/modules/package/linux.py: host vs target appimagetool split
- build/modules/package/linux_test.py: cover the host/target split
The --compile-only and --ci flags served overlapping purposes for CI
builds. Remove --compile-only entirely since --ci already handles the
CI use case (skip R2, skip prod env validation, local zip packaging)
and --no-upload covers the upload-skipping use case for full builds.
The server release CI workflow fails on ubuntu-latest because
patch-windows-exe.ts requires Wine to run rcedit. Thread the existing
--ci flag through compileServerBinaries so Windows PE metadata patching
is skipped in CI mode with a warning log.
* feat: add server release workflow
* fix: address PR review comments for 0331-add_server_release_workflow
* refactor: rework 0331-add_server_release_workflow based on feedback
* refactor: rework 0331-add_server_release_workflow based on feedback
* feat(cli): skip self-update prompts for package manager installs
Checks BROWSEROS_INSTALL_METHOD env var (npm, brew) and skips automatic
update checks. Users should use their package manager's update mechanism.
FormatNotice now shows the appropriate upgrade command based on install method.
* feat(cli): add npm bin wrapper for browseros-cli
* feat(cli): add npm postinstall script to download platform binary
Downloads the correct platform binary from GitHub releases during npm
install, verifies SHA256 checksums, and extracts to .binary directory.
* feat(cli): add npm package metadata and README
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: move npm package files to correct monorepo path
The bin wrapper and postinstall were created at apps/cli/npm/ instead of
packages/browseros-agent/apps/cli/npm/. Moves them to the correct location.
* style: use node: protocol for builtin module imports
* feat(cli): add Makefile npm targets and release workflow npm publish step
Adds npm-version and npm-publish Makefile targets for version sync.
Adds Node.js setup and npm publish step to the release workflow.
Adds npm/npx install instructions to release notes template.
* fix(cli): fail on missing checksum entry and limit redirect depth
- Abort if checksums.txt downloaded but archive entry is missing
- Warn if checksums.txt itself failed to download
- Cap redirect depth at 5 to prevent stack overflow on circular redirects
* fix(cli): match install.sh checksum behavior — warn instead of abort
The existing shell installer (install.sh) warns and continues when the
checksum entry is missing from checksums.txt. Match that behavior in the
npm postinstall to avoid unnecessary install failures. Both files come
from the same GitHub release, so the checksum is a corruption check,
not a strong security boundary.
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The model picker in NewProviderDialog rendered inline, causing dialog
resizing and lacked keyboard navigation. Replace it with a Popover +
Command (shadcn Combobox) pattern and add fuse.js for fuzzy search.
- Replace custom ModelPickerList with Popover + Command dropdown
- Add fuse.js for fuzzy model search (replaces string.includes)
- Add MODEL_SELECTED_EVENT and AI_PROVIDER_UPDATED_EVENT analytics
- Enrich PROVIDER_SELECTED_EVENT with model_id in chat sessions
* feat: add browseros-cli self-updater
* fix: address review comments for 0327-cli_self_updater
* fix: address PR review comments for 0327-cli_self_updater
* fix: replace goreleaser with Makefile-based release build
Remove .goreleaser.yml (required Pro license for monorepo field) and
consolidate cross-compilation into `make release`. CI now uses the same
Makefile target, fixing a bug where POSTHOG_API_KEY was missing from
release ldflags.
* fix: address critical self-updater bugs from code review
- Fix SHA256 checksum mismatch: verify archive checksum before extraction
instead of verifying extracted binary against archive hash (was always
failing). Add VerifyChecksum() and integration test.
- Fix JSON field name mismatch: TypeScript was emitting camelCase
(publishedAt, archiveFormat) but Go expected snake_case
(published_at, archive_format). Manifest parsing was silently broken.
- Add decompression size limit (256 MB) to prevent zip/gzip bombs.
- Don't update LastCheckedAt on transient errors so retry happens on
next CLI invocation instead of waiting 24h.
* feat: add PostHog usage analytics to CLI
Add anonymous command-level analytics to browseros-cli using the PostHog
Go SDK. Tracks which commands are executed, their success/failure status,
and duration — no PII or person profiles.
- New analytics package with Init/Track/Close singleton
- Distinct ID resolves from server's browseros_id (server.json), falls
back to CLI-generated UUID (~/.config/browseros-cli/install_id)
- API key injected at build time via ldflags (dev builds = silent no-op)
- Server now writes browseros_id into server.json for cross-surface
identity correlation
* fix: address PR review feedback for #603
- Return "unknown" for unrecognized args in commandName to avoid
sending arbitrary user input to PostHog
- Revert goreleaser to {{ .Env.POSTHOG_API_KEY }} (intentional hard
fail — release builds must have the key set)
- go mod tidy to fix posthog-go direct/indirect marker
- Add POSTHOG_API_KEY to .env.production.example
* feat: upload CLI binaries to CDN during release and gate workflow to core team
- Extend scripts/build/cli/upload.ts with uploadCliRelease() that pushes
archives + checksums to R2 under versioned (cli/v{VERSION}/) and latest
(cli/latest/) paths, plus a version.txt for lightweight latest resolution
- Update scripts/build/cli.ts entry point with --release/--version/--binaries-dir
flags (existing no-args behavior preserved for upload:cli-installers)
- Rewrite install.sh and install.ps1 to fetch from cdn.browseros.com instead of
GitHub releases API — eliminates rate limits and API dependency
- Add environment: release-core to release-cli.yml for core-team gating via
GitHub environment protection rules
- Add Bun setup + CDN upload step to the workflow between build and GitHub release
* fix: address review feedback for PR #602
- Make loadProdEnv return empty map when .env.production is absent so
pickEnv falls through to process.env in CI (Greptile P1)
- Add semver format validation for version string in install.sh and
install.ps1 to guard against malformed CDN responses
- Pass inputs.version via env var instead of inline ${{ }} interpolation
to prevent command injection in workflow shell
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): fix hdiutil mount detection, update README with install/launch/init flow
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): remove -quiet from hdiutil so mount point is detected
* fix: add refresh indicator to chat history when fetching latest conversations
Show a non-blocking "Fetching latest conversations" indicator at the top
of the history list while the cached data is being refreshed. Users can
still interact with the cached conversation list during the refresh.
* perf: reduce chat history query payload — fetch last 2 messages instead of 5
The conversation list only displays the last user message as a preview.
Fetching 5 messages per conversation was wasteful — each message contains
the full UIMessage object (tool calls, reasoning, etc.) multiplied by
50 conversations per page. Reduced to last 2 which is sufficient to
find the last user message in a user→assistant exchange.
* perf: use first+DESC instead of last+ASC to push LIMIT down to SQL
PostGraphile's `last: N` doesn't map to SQL LIMIT — it uses a padded
LIMIT 10 and slices in application code. Changing to `first: 2` with
ORDER_INDEX_DESC generates a true SQL LIMIT 2, reducing rows scanned
from 500 to 100 per page (50 conversations × 2 vs 10 messages each).
No UX impact — extractLastUserMessage() filters by role regardless
of message order.
* chore: update react query packages
* feat: replace localforage with idb-keyval
* fix: remove filesystem tools when no workspace is selected
- Make workingDir optional on ResolvedAgentConfig
- Remove resolveSessionDir() fallback that always created a session dir,
masking the no-workspace state and keeping filesystem tools available
- Gate buildFilesystemToolSet() on workingDir being defined
- Add workspace change detection mid-conversation — rebuilds the agent
session when workspace is added, removed, or switched (same pattern
as existing MCP server change detection)
- download_file falls back to tmpdir() when no workspace is set
- Memory/soul tools are unaffected — they use ~/BrowserOS/ paths
* fix: sanitize message history when session rebuilds with different tools
When a session is rebuilt due to workspace or MCP changes, the carried-over
message history may contain tool parts for tools that no longer exist in
the new session. The AI SDK validates messages against the current toolset
and rejects parts with no matching schema.
- Add toolNames getter to AiSdkAgent exposing registered tool names
- Add sanitizeMessagesForToolset() to strip tool parts referencing
removed tools from carried-over messages
- Apply sanitization in both MCP and workspace session rebuilds
* fix: prepend tool-change context to user message on session rebuild
When workspace or MCP integrations change mid-conversation, prepend a
[Context: ...] block to the user's message explaining what changed.
This prevents the LLM from hallucinating tool usage based on patterns
in the carried-over conversation history.
Context messages vary by change type:
- Workspace removed: lists unavailable filesystem tools, suggests
selecting a working directory
- Workspace added: confirms filesystem tools are available with path
- Workspace switched: notes the new working directory
- MCP changed: notes that some integration tools may have changed
Only fires on the first message after a rebuild. Invisible in the UI.
* fix: make MCP change context specific about which apps were added/removed
Diff the old and new MCP server keys to produce specific context like:
- "The following app integrations were disconnected: Gmail, Slack."
- "The following app integrations were connected: Linear."
instead of a generic "some tools may no longer be available" message.
* refactor: extract shared rebuildSession helper in ChatService
Eliminates the duplicated 20-line dispose→create→sanitize→store flow
that existed separately in both the MCP and workspace change-detection
blocks.
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* test: add sanitizeMessagesForToolset test suite
Tests for the message sanitization that runs when a session rebuilds
with a different toolset (workspace or MCP change mid-conversation):
- Preserves messages with no tool parts
- Preserves tool parts when tool is in the toolset
- Strips tool parts when tool is NOT in the toolset
- Strips multiple removed tool parts from same message
- Keeps browser tools while removing filesystem tools
- Removes messages that become empty after stripping
- Preserves non-tool parts (reasoning, step-start, file)
- Returns same references when no filtering needed
- Handles empty message array and empty toolset
* style: fix biome formatting in chat-service.ts
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
* feat: isolate new-tab agent navigation from origin tab
Add origin-aware navigation isolation so the agent never navigates
away from the new-tab chat UI. This is a two-layer defense:
1. Prompt adaptation: When origin is 'newtab', the system prompt's
execution and tool-selection sections are rewritten to prohibit
navigating the active tab and default all lookups to new_page.
2. Tool-level guards: navigate_page and close_page reject attempts
to act on the origin tab when in newtab mode, returning an error
that teaches the agent to self-correct.
The client now sends an `origin` field ('sidepanel' | 'newtab')
instead of injecting a soft NEWTAB_SYSTEM_PROMPT that LLMs could
ignore. Backwards compatible — defaults to 'sidepanel'.
Closes TKT-592, addresses TKT-564
* test: add newtab origin navigation guard tests
- 14 new prompt tests verifying the system prompt adapts correctly
for newtab vs sidepanel origin (execution rules, tool selection table,
absence of conflicting single-tab guidance)
- 6 new integration tests for navigate_page and close_page guards:
rejects origin tab in newtab mode, allows non-origin tabs, allows
all tabs in sidepanel mode, backwards compatible with no session
- Simplify CLI section: remove confusing MCP jargon, clarify it works
from terminal and AI coding agents
- Replace "point the CLI at your MCP server" with plain language
- Add Vertical Tabs to the features list
* feat(cli): add install scripts for macOS, Linux, and Windows
Bash script (install.sh) for macOS/Linux and PowerShell script
(install.ps1) for Windows. Both download the correct platform binary
from GitHub Releases with checksum verification, version resolution,
and PATH setup.
* fix(cli): address PR review comments for install scripts
- Add checksum verification to install.ps1 using Get-FileHash
- Add warnings on all checksum skip paths in install.sh
- Use grep -F (fixed-string) instead of regex for filename matching
- Add ?per_page=100 to GitHub API call in install.ps1
- Use random temp directory name in install.ps1 to avoid collisions
* fix(cli): address installer review feedback
* fix(cli): use full path for dist artifacts in release step
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): update goreleaser tag_prefix to match browseros-cli-v* format
* fix(cli): replace goreleaser with plain go build for releases
GoReleaser free version cannot parse prefixed tags (browseros-cli-v*).
monorepo.tag_prefix is a Pro-only feature.
Replaced with direct go build + gh release create:
- Builds all 6 targets with go build (verified locally)
- Creates tar.gz/zip archives with checksums
- Uses gh release create to publish
- No external tool dependency
GoReleaser free cannot parse slash-prefixed tags (cli/v0.0.1) as semver.
Switch to browseros-cli-v0.0.1 format which is valid semver after
stripping the prefix. Remove the monorepo config (GoReleaser Pro only).
* ci(cli): change release workflow to manual dispatch from main
- Trigger via Actions UI with a version input (e.g. "0.1.0")
- Only runs on main branch
- Creates git tag cli/v<version> automatically
- Then GoReleaser builds all 6 binaries and creates the GitHub Release
* feat: add scoped release notes, changelog PR, and idempotent tags to CLI workflow
- Add concurrency group to prevent parallel releases
- Add scoped release notes from commits touching the CLI directory
- Pass release notes to goreleaser via --release-notes flag
- Make tag creation idempotent for safe re-runs
- Tag the saved release SHA, not HEAD after branching
- Add CHANGELOG.md and auto-update via PR with auto-merge
- Add pull-requests: write permission
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
* fix: run codegen before wxt zip to generate graphql types
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
- Skip adding PR number if already present in the commit subject
(squash merges include "(#123)" automatically)
- Remove custom Contributors section since GitHub auto-generates one
with avatars at the bottom of every release
Add a compile-only mode to the server build pipeline for CI/CD
environments that don't have R2 credentials. The --compile-only flag
skips resource staging and upload, producing only compiled binaries.
* feat: create GitHub release with changelog on agent-sdk publish
After publishing to npm, the workflow now:
- Tags the commit as agent-sdk-v<version>
- Generates release notes from commits that modified the agent-sdk
directory since the last agent-sdk release tag
- Creates a GitHub release with those notes
First release will show "Initial release" since no previous tag exists.
* feat: update CHANGELOG.md on agent-sdk release
Add a CHANGELOG.md for @browseros-ai/agent-sdk and update the release
workflow to prepend a versioned entry with the release notes before
creating the GitHub release. The changelog is committed to main
automatically.
* fix: address review issues in agent-sdk release workflow
- Add explicit permissions: contents: write
- Replace sed with head/tail for safe CHANGELOG insertion (fixes
double-quote and backslash corruption in commit messages)
- Handle empty release notes with "No notable changes." fallback
- Make git tag idempotent for workflow reruns (2>/dev/null || true)
* fix: use PR with auto-merge for changelog updates
Direct push to main fails due to branch protection requiring PRs.
Instead, create a branch, open a PR, and auto-merge via squash.
* feat: add contributors and PR links to agent-sdk release notes
Release notes now include PR numbers (linked automatically by GitHub),
GitHub usernames for each commit author, and a contributors section
at the bottom. All scoped to commits that modified the agent-sdk path.
* fix: reorder release steps and fix tag/idempotency issues
- Capture release SHA before any branching so the tag always points
to the main commit that was built and published to npm
- Reorder: generate notes → publish → tag/release → changelog PR
(changelog is lowest-stakes, runs last)
- Make tag push and release create idempotent for safe re-runs
(fall back to gh release edit if release already exists)
- Add || true to gh pr merge --auto in case auto-merge is not enabled
- Explicit git checkout main before creating changelog branch
* fix: explicit error handling for tag/release and contributor dedup
- Replace silent || true guards with explicit checks that log what's
happening (tag exists, remote tag exists, release exists) so errors
are visible instead of swallowed
- Fix contributor dedup: use grep -qw (word match) instead of grep -qF
(substring match) so "dan" isn't excluded when "dansmith" exists
* fix: exclude current version tag when finding previous release
On re-runs, the current version's tag already exists on the remote, so
PREV_TAG resolves to it and git log produces empty output. Filter it
out so release notes are generated against the actual previous version.
* ci: prevent concurrent agent-sdk release runs
Add concurrency group so multiple dispatches queue instead of racing
on the same tag/release/PR.
* feat(cli): production-ready CLI with auto-launch, install, and cross-platform builds
- init: accept URL argument and --auto flag for non-interactive setup
- install: new command to download BrowserOS app for current platform
- launch: auto-detect and launch BrowserOS when server is not running
- discovery: prefer server.json (live) over config.yaml (may be stale)
- errors: actionable messages guiding users to init/install
- goreleaser: cross-platform builds for 6 targets (darwin/linux/windows × amd64/arm64)
- ci: GitHub Actions workflow to release CLI binaries on cli/v* tag push
* fix(cli): check health status code and add progress dots during launch
- Health check in newClient() now verifies HTTP 200, not just no error
- waitForServer prints dots during the 30s poll so users know it's working
* refactor(cli): make launch an explicit command, remove auto-launch from newClient
- launch: new explicit command to find and open BrowserOS app
- launch: probes server.json, config, and common ports before launching
- launch: if already running, reports URL instead of launching again
- init --auto: uses port probing to find running servers
- install --deb: errors on non-Linux instead of silently downloading DMG
- error messages: guide users to launch/install/init explicitly
- removed: auto-launch from newClient() — CLI never does something surprising
* fix(cli): platform-native detection, launch, and install for all OSes
Detection (isBrowserOSInstalled):
- macOS: uses `open -Ra` to query Launch Services (no hardcoded paths)
- Linux: checks /usr/bin/browseros (.deb), browseros.desktop, AppImage search
- Windows: checks %LOCALAPPDATA%\BrowserOS\Application\BrowserOS.exe
and HKCU/HKLM uninstall registry keys
Launch (startBrowserOS):
- macOS: `open -b com.browseros.BrowserOS` (bundle ID, not path)
- Linux: `browseros` binary, AppImage, or `gtk-launch browseros`
(fixed: was using xdg-open which opens by MIME type, not desktop files)
- Windows: runs BrowserOS.exe from known Chromium per-user install path
(fixed: was using `cmd /c start BrowserOS` which doesn't resolve)
Install (runPostInstall):
- macOS: hdiutil attach → cp -R to /Applications → hdiutil detach
- Linux: chmod +x for AppImage, dpkg -i instruction for .deb
- Windows: launches installer exe
- --deb flag now errors on non-Linux platforms
Removed auto-launch from newClient() — CLI never does surprising things.
Sources verified from:
- packages/browseros/build/common/context.py (binary names per platform)
- packages/browseros/build/modules/package/linux.py (.deb structure, .desktop file)
- packages/browseros/chromium_patches/chrome/install_static/chromium_install_modes.h
(Windows base_app_name="BrowserOS", registry GUID, install paths)
- /Applications/BrowserOS.app/Contents/Info.plist (bundle ID)
* fix: broaden connection error detection for main page and sidepanel
The connection error check required both "Failed to fetch" AND
"127.0.0.1" in the error message. On the main page, the browser
only produces "Failed to fetch" without the IP, so users saw a
generic "Something went wrong" instead of the troubleshooting link.
Broaden detection to also match "localhost" and bare "Failed to fetch"
errors that don't contain an external URL. Also pass providerType in
NewTabChat so provider-specific errors render correctly.
Closes#526
* fix: simplify connection error detection
All chat requests go through the local BrowserOS agent server, so any
"Failed to fetch" error is always a local connection issue. Remove the
unnecessary 127.0.0.1/localhost/URL checks.
* fix: pass providerType to agentUrlError ChatError instances
Port conflicts are expected — Chromium retries with a different port.
These errors were flooding Sentry (14k+ events) without user impact.
- handleStartupError: move Sentry.captureException below the
port-in-use check so it only fires for unexpected startup errors
- handleControllerStartupError: skip Sentry capture for port errors
- index.ts: exit early for port errors before Sentry capture
- Change dialog width from sm:max-w-2xl (672px) to sm:w-[70vw] sm:max-w-4xl
so it takes 70% of viewport width, capped at 896px
- Add overflow-x-auto on table wrappers so wide tables scroll horizontally
instead of being clipped
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: integrate models.dev for dynamic LLM provider/model data (#TKT-657)
Replace hardcoded model lists with data sourced from models.dev so new
providers and models appear automatically when the community adds them.
- Add build script (scripts/generate-models.ts) that fetches models.dev/api.json
and outputs a compact JSON with 10 providers and 520 models
- Replace hardcoded MODELS_DATA (50 models) with dynamic models.dev lookups
- Add searchable model combobox (Popover + Command) replacing plain Select dropdown
- Enrich provider templates with models.dev metadata (context window, image support)
- Keep chatgpt-pro, qwen-code, browseros, openai-compatible as hardcoded providers
* fix: address review — remove ollama-cloud mapping, fix default models, remove dead code
- Remove ollama from PROVIDER_MAP (ollama-cloud has cloud models, not local)
- Add ollama to CUSTOM_PROVIDER_MODELS with empty list (users type custom IDs)
- Update defaultModelIds to ones that exist in models.dev data:
openrouter → anthropic/claude-sonnet-4.5
lmstudio → openai/gpt-oss-20b
bedrock → anthropic.claude-sonnet-4-6
- Remove dead isCustomModel export
- Regenerate models-dev-data.json (9 providers, 486 models)
* fix: model suggestion list focus/dismiss behavior
- List only opens when input is focused or user types
- Clicking a model selects it and closes the list
- Clicking outside (blur) dismisses the list
- onMouseDown preventDefault on list items prevents blur race condition
* refactor: extract ModelPickerList component with proper open/close UX
- Collapsed state: Select-like trigger showing selected model + chevron
- Expanded state: search input + scrollable filtered list, inline
- Click outside or Escape to close, Enter to submit custom model
- Extracted as separate component (reduces dialog nesting, testable)
- No more setTimeout hacks for blur handling
* chore: remove plan doc from repo
* docs: add setup guides for ChatGPT Pro, GitHub Copilot, and Qwen Code
Add individual OAuth setup guide pages with step-by-step screenshots
for each provider. Add "Use Your Existing Subscription" section to the
Bring Your Own LLM page with card links to each guide. Register pages
in docs navigation.
* docs: add ChatGPT Pro setup screenshots
* docs: use custom provider icons for OAuth setup cards
* docs: inline SVG icons in provider cards for dark mode support
* docs: place provider icons above card titles
* feat: improve rate limit UX, usage page, and provider selector
- Show "Add your own provider for unlimited usage" CTA when BrowserOS
credits are exhausted or daily limit is reached
- Fix credit exhaustion detection to match actual error message
- Improve Usage page: remove disabled Add Credits button, add "Coming
soon" badge, add "Want unlimited usage?" section linking to providers
- Add "+ Add Provider" button at bottom of chat provider selector dropdown
* fix: use asChild pattern for Button+anchor in usage page
Replace nested <a><Button> (invalid HTML) with Button asChild
pattern per shadcn/ui convention.
* feat: UI improvements for OAuth dialog, provider badges, and events docs
- Replace OAuth device code toast with a proper Dialog showing the code
prominently with a copy button (GitHub Copilot, Qwen Code, ChatGPT Pro)
- Add "New" badge on provider template cards for ChatGPT Plus/Pro,
GitHub Copilot, and Qwen Code with orange border highlight
- Add events.md documenting all analytics events across the platform
* fix: add verificationUri to DeviceCodeDialog for popup-blocked fallback
Add verificationUri to PendingDeviceCode interface and pass it from
both handleClientAuth and handleServerAuth. Render a fallback "Open
verification page" link in DeviceCodeDialog so users can navigate
to the auth page if the popup was blocked.
- Add MCP promo banner on AI providers page with "New" badge and
"66+ tools" highlight, linking to /settings/mcp
- Add Quick Setup section on MCP settings page with copy-paste
commands for Claude Code, Gemini CLI, Codex, Claude Desktop, OpenClaw
- Consolidate MCP settings: move restart button inline with server URL,
remove separate MCP Server Settings card
- Add analytics event for promo banner clicks
* feat(eval): show mean score instead of pass/fail in report and viewer
* feat(eval): integrate NopeCHA CAPTCHA solver into eval pipeline
Add CAPTCHA detection and waiting so screenshots capture post-solve state.
Run headed with xvfb on CI since headless breaks extension content scripts.
- Add CaptchaWaiter module (detect reCAPTCHA/hCaptcha/Turnstile, poll until solved)
- Add optional `captcha` config block to EvalConfigSchema
- Wait for CAPTCHA solve before screenshot in single-agent and orchestrator-executor
- Patch NopeCHA manifest with API key before launching workers
- Fix CAPTCHA_EXT_DIR path (was pointing one level too high)
- Remove --incognito (extensions don't run in incognito; fresh user-data-dir isolates)
- CI: install xvfb, run headed via xvfb-run, pass NOPECHA_API_KEY secret
* fix: remove daily rate-limit middleware
The daily conversation rate limit is no longer needed. Remove the
middleware, RateLimiter class, fetch-config, error type, shared
constants, DB schema table, and integration tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove unused getDb() method
No longer needed after rate-limiter removal.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The eval's single-agent was passing raw task.query as the prompt,
without browser context (active tab URL, title). The agent didn't
know which page it was on, causing it to ask "which website?" instead
of browsing.
Use formatUserMessage() (same as chat-service.ts) to include browser
context in the prompt. Re-export formatUserMessage from agent/tool-loop.
* fix: prevent deleted scheduled tasks from reappearing after sync
When a scheduled task was deleted, the sync function would see the
remote job missing locally and re-add it, undoing the delete. Fix by
tracking pending deletions in storage so the sync function deletes
them from the backend instead of re-adding them locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use read-modify-write for pending deletions to prevent concurrent clobber
Re-read pendingDeletionStorage before write-back and only remove
resolved IDs, preserving any new entries added by concurrent
removeJob calls during the sync's network I/O.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The test workflow captured exit codes but never failed the job, so PR
checks always showed green even when tests failed. Exit with the
captured code in the summarize step so each suite properly reports
pass/fail. Not a required check, so failures remain non-blocking.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): switch to ubuntu-latest runner, add OE-Clado config
- Switch workflow from self-hosted Mac Studio to ubuntu-latest
- Install BrowserOS Linux .deb in CI (no self-hosted runner needed)
- Add browseros-oe-clado-weekly.json config for orchestrator-executor
- Fix report chart to show date+time (not just date)
- Make BROWSEROS_BINARY configurable via env var
* feat(eval): add NopeCHA captcha solver extension to eval runs
- Auto-load NopeCHA extension in eval Chrome instances
- Works in incognito + headless mode
- CI workflow downloads NopeCHA before eval
- extensions/ directory gitignored (downloaded at runtime)
* feat(eval): per-config concurrency — different configs run in parallel
* feat(eval): remove concurrency limit — all runs execute in parallel
* ci: run browseros tests on pull requests
* refactor: rework 0320-github_action_for_tests based on feedback
* refactor: rework 0320-github_action_for_tests based on feedback
* chore: add CI artifacts to .gitignore
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove mikepenz/action-junit-report to fix check suite misattribution
The JUnit report action creates check runs that GitHub associates with the
CLA check suite instead of the Tests check suite, causing test reports to
appear under "CLA Assistant" in the PR checks UI.
Remove the action and rely on job status + step summary + artifact upload
for test result visibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): weekly eval pipeline with R2 uploads and trend dashboard
Add infrastructure for running weekly evaluations and tracking score
trends over time:
- Auto-generated output dirs: results/{config-name}/{timestamp}/
Each eval run gets its own timestamped folder, nothing is overwritten.
- upload-run.ts: uploads eval results to Cloudflare R2. Supports
uploading a specific run or all un-uploaded runs for a config.
- weekly-report.ts: generates an interactive HTML dashboard from R2
data. Config dropdown, trend chart with hover tooltips, searchable
runs table. Groups runs by config name.
- viewer.html: client-facing 3-column run viewer (task list,
screenshots with autoplay, agent stream with messages.jsonl).
Shows performance grader axis breakdown with per-axis scores.
- browseros-agent-weekly.json: weekly benchmark config (kimi-k2p5,
webbench-2of4-50, 10 workers, performance grader, headless).
- eval-weekly.yml: GitHub Actions workflow with cron (Saturday 6am)
and manual trigger. Runs on self-hosted Mac Studio runner.
Concurrency group ensures only one eval runs at a time.
- Dashboard updates: load previous runs, messages.jsonl viewer,
grade badges show percentages, async stream loading.
- Grader updates: timeout 30min, max turns 100, DOM content
verification guidance for performance grader.
* fix(eval): address Greptile review — injection, nested dirs, escaping
- Fix script injection in eval-weekly.yml: pass github.event.inputs
through env var instead of interpolating into shell
- Fix /api/runs to enumerate nested results/{config}/{timestamp}/ dirs
- Fix /api/load-run to allow single-slash run names (config/timestamp)
- Add HTML escaping for R2-sourced values in weekly-report.ts
- Escape axis names in viewer.html renderAxesBreakdown
* fix(eval): fix biome lint — non-null assertion, template literals
* fix(eval): fix biome errors — replace var with let, fix inner function declaration
* fix(eval): address Greptile P2 issues
- isRunDir: check all subdirs for metadata.json, not just first 3
- eval-runner: guard configPath for dashboard-driven runs (fallback to 'eval')
- load-run: default unknown termination_reason to 'failed' not 'completed'
* feat(eval): make BROWSEROS_BINARY configurable via env var
The OAuth callback server on port 1455 was bound eagerly at startup,
crashing the server if another BrowserOS instance was already running.
Rewrite as a lazy class (OAuthCallbackServer) that:
- Only binds port 1455 when the user initiates a ChatGPT Pro login
- Sends GET /cancel to any existing server on the port first, then
retries up to 5 times (follows Codex CLI's cancel+retry pattern)
- Exposes /cancel endpoint so other instances/tools can cancel us
- Releases the port after the OAuth callback arrives
- Device-code providers (GitHub Copilot, Qwen) never touch port 1455
This allows running eval, dev instances, and multiple BrowserOS
instances without port conflicts. OAuth login works on whichever
instance initiates it — the others continue without OAuth.
* feat: auto-discover server port via ~/.browseros/server.json
Server writes its port to ~/.browseros/server.json on startup so the CLI
can auto-discover the server URL without requiring `browseros-cli init`.
Discovery chain: BROWSEROS_URL env > config.yaml > server.json > error
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback for PR #504
- Use synchronous unlinkSync in stop() since process.exit() fires
immediately after, abandoning any pending async operations
- Wrap writeServerConfig in try/catch so a write failure doesn't crash
a healthy server for a convenience feature
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: type server discovery config and add version metadata
Add ServerDiscoveryConfig interface to @browseros/shared and enrich
server.json with server_version, browseros_version, and chromium_version.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: normalize URL from server.json for consistency
All other URL sources (env var, config.yaml) pass through
normalizeServerURL; apply the same to the server.json path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add voice recording UI with waveform overlay to new tab search bar
Add a microphone button to the NewTab search bar that opens a fullscreen
recording overlay powered by react-voice-visualizer. The overlay shows a
real-time waveform visualization during recording, recording time, and a
stop button. On completion, the audio is transcribed via the existing
gateway endpoint and the transcript auto-navigates to inline chat.
Changes:
- Extract transcribeAudio() to shared lib/voice/transcribe-audio.ts
- Add VoiceRecordingOverlay component with react-voice-visualizer
- Add Mic button to NewTab search bar
- Track analytics via existing NEWTAB_VOICE_* events
- Handle cancel (backdrop click) vs submit (stop button) correctly
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review comments for voice recording overlay
- Reset processingRef on transcription error to prevent stuck state
- Use stable callback refs to prevent useEffect re-runs from inline
arrow function props (fixes timer reset and unnecessary re-processing)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: replace voice overlay with inline sidepanel-style voice UI
Remove react-voice-visualizer dependency and VoiceRecordingOverlay.
Instead use the same inline voice pattern as the sidepanel ChatInput:
- Waveform bars replace the search input during recording
- Mic/stop/loading button states in the search bar
- Transcript populates the search input on completion
- Voice error shown inline below the search bar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add build smoke test to catch compile failures
Compiles the server binary (darwin-arm64) and verifies --version outputs
the correct version from package.json. Uses an empty resource manifest
and stub env vars so the test runs without R2 access or real secrets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback for PR #511
- Derive build target from process.platform/arch for CI portability
- Include binary stderr in --version assertion for better diagnostics
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharp is a native C module (libvips) whose .node binaries can't be
embedded in Bun compiled executables. It was imported at the top level
in copilot-fetch.ts, crashing the entire server at startup.
Replace with jimp (pure JavaScript, zero native deps) which bundles
cleanly into compiled binaries. Same resize algorithm preserved.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add Qwen Code as OAuth LLM provider with refactored OAuth hooks
Add Alibaba Qwen Code as a third OAuth provider using Device Code flow
with PKCE. Free tier: 2,000 requests/day, up to 1M token context.
Refactoring:
- Extract useOAuthProviderFlow hook (eliminates ~180 lines of duplicated
OAuth logic from AISettingsPage for ChatGPT Pro + Copilot + Qwen)
- Extract resolveOAuthConfig in config.ts (shared resolver for all OAuth
providers, parameterized by provider name, default model, refresh flag)
- Generalize token-manager device code flow to support PKCE
(code_challenge/code_verifier) and form-urlencoded content type
New code:
- Qwen Code provider config with PKCE + form encoding flags
- Provider factories (both provider.ts and provider-factory.ts)
- Extension UI (template card, models, analytics, dialog)
* fix: use portal.qwen.ai as API base URL for OAuth tokens
DashScope (dashscope.aliyuncs.com) expects Alibaba Cloud API keys,
not OAuth tokens from chat.qwen.ai. The correct endpoint for OAuth
Bearer tokens is portal.qwen.ai/v1.
* fix: correct Qwen Code model IDs and context windows
- coder-model (1M context): virtual alias that routes to best model
- qwen3-coder-plus (1M): was incorrectly 131K
- qwen3-coder-flash (1M): new, speed-optimized variant
- qwen3.5-plus (1M): was incorrectly 1048576 (power-of-two vs decimal)
- Removed qwen3-coder-next (local/self-hosted, not available via OAuth)
- Default model changed to coder-model (auto-routes server-side)
* fix: move Qwen device code request to extension (bypasses WAF)
Alibaba WAF blocks server-side requests to chat.qwen.ai. Move the
initial device code request to the extension (browser context with
cookies), then hand off the deviceCode + codeVerifier to the server
for background polling via new POST /oauth/:provider/poll endpoint.
* fix: persist OAuth flow-started flag in sessionStorage
The flowStartedRef was lost when the component remounted (e.g. user
navigated to onboarding then back to settings). Use sessionStorage
to persist the flag so auto-create works after navigation.
* revert: remove sessionStorage for OAuth flow flag
Revert to simple useRef pattern matching the original ChatGPT Pro
implementation. The auto-create works when the user stays on the
AI settings page during auth.
* revert: move Qwen back to server-side device code flow
WAF block was temporary (rate-limiting), not permanent. Server-side
fetch to chat.qwen.ai now works. Reverted client-side device code
approach — Qwen now uses the same clean server-side flow as Copilot.
Removed: clientSideDeviceCode config, startClientSideDeviceCode(),
POST /oauth/:provider/poll endpoint, startDeviceCodePolling().
* feat: add WAF detection, rate-limit protection, and token storage endpoint
- Detect WAF captcha responses (HTML instead of JSON) in device code
request and token polling, with user-friendly error messages
- Add 30s cooldown on "USE" button to prevent rapid clicks triggering WAF
- WAF-blocked poll requests silently retry instead of aborting
- Add POST /oauth/:provider/token endpoint for storing externally-provided
tokens (useful for future fallback flows)
- Add storeTokens() method to OAuthTokenManager
- Pass server error messages through to extension toast notifications
* refactor: remove 30s cooldown, simplify OAuth hook
The hook is now identical for all providers — server handles retries
via activeDeviceFlows.delete(). Removed flowStartedAtRef cooldown
that was blocking legitimate retries.
* feat: client-side OAuth for Copilot and Qwen Code
Move device code OAuth flow to the extension for GitHub Copilot and
Qwen Code. The extension makes requests using Chrome's network stack,
which bypasses Alibaba WAF TLS fingerprint detection that blocks
server-side Bun/Node.js fetch.
New files:
- client-oauth.ts: Client-side device code + PKCE + token polling
Changes:
- useOAuthProviderFlow: handleClientAuth() for providers with clientAuth
config, handleServerAuth() for others (ChatGPT Pro)
- AISettingsPage: clientAuth config for Copilot and Qwen Code
- WAF detection: opens provider site for captcha solving on block
Server-side device code flow preserved as fallback (token-manager.ts,
providers.ts). Token storage via POST /oauth/:provider/token endpoint.
* fix: export OAuthProviderFlowConfig type, fix typecheck errors
- Export OAuthProviderFlowConfig interface so AISettingsPage can use it
instead of duplicating the type inline
- Fix string | null → string | undefined for agentServerUrl parameter
2026-03-20 17:46:48 +05:30
1198 changed files with 111361 additions and 40425 deletions
description: Answer questions about BrowserOS internal stuff (setup, features, architecture, design decisions) by reading the private internal-docs submodule and the codebase. Use for "how do I X", "where is Y", "what is the deal with Z", or any question that mixes ops/setup knowledge with code knowledge. Can execute steps with per-command confirmation.
Answer team-internal questions by reading `.internal-docs/` and the codebase, synthesizing a direct answer with file:line citations, and optionally running surfaced commands with confirmation.
**Announce at start:** "I'm using the ask-internal skill to answer this from internal-docs and the codebase."
## When to use
- "How do I reset my dogfood profile?"
- "What's the deal with the OpenClaw VM startup?"
- "Where do we configure release signing?"
- Any question whose answer lives in setup runbooks, feature notes, architecture docs, or the code that produced them.
## Hard rules — never do these
- NEVER execute a state-mutating command without per-command `y` confirmation from the user.
- NEVER edit BrowserOS code in response to an ask-internal question. The skill answers; it does not modify code. Use `/document-internal` for writes.
- NEVER guess. If grep finds nothing useful in docs or code, say so plainly.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER cite a file or line number you have not actually read.
## Voice rules
Apply the same voice rules as `document-internal` to the synthesized answer:
- Lead with the point.
- Concrete nouns. Name files, functions, commands.
Read the top 3-5 doc hits and top 3-5 code hits. Do not skim — read the relevant section fully so citations are accurate.
### Step 3: Synthesize answer
Structure the response:
1.**Direct answer.** First sentence answers the question. No preamble.
2.**Steps if applicable.** Numbered list with exact commands.
3.**Citations.** Every factual claim references `path/to/file.md:42` or `path/to/code.ts:117`. Run the voice self-check before printing.
If multiple docs cover the topic at different layers (e.g., a setup runbook and a feature note both mention dogfood profiles), reconcile them in the answer rather than dumping both.
### Step 4: Offer execution (only if commands surfaced)
If Step 3 produced executable commands the user could run, ask:
> Run these for you? (y / n / dry-run)
- **y:** Execute one at a time. For any command that mutates state (writes a file, modifies config, kills a process, deletes anything), ask "run this? <command>" before each. Read-only commands (`ls`, `cat`, `git status`) run without per-command confirmation but still print before running.
- **n:** Skip. Done.
- **dry-run:** Print the full sequence as a `bash` block. Do not execute.
### Step 5: Doc-not-found path
If Step 2 returned nothing useful (no doc hits AND no clear code answer):
1. Tell the user: "No doc covers this. Tangentially relevant files: <list>."
2. Ask: "Draft a new doc and open a PR to internal-docs?"
3. On yes: invoke the full `/document-internal` flow (four sharp questions, draft, voice check, PR), forced to `setup/` doc type, with the code-grep findings handed in as initial context.
- **DONE_WITH_CONCERNS** — answered, but flag uncertainty (e.g., docs and code disagreed; user should reconcile).
- **BLOCKED** — submodule missing or other pre-flight failure.
- **NEEDS_CONTEXT** — question too vague to search effectively. Ask one clarifying question.
## Citation discipline
Every "X is at Y" claim in the answer must point to a file:line that the skill actually read. Do not approximate. If you didn't read it, don't cite it.
If a doc says one thing and the code says another, surface the conflict explicitly:
> The setup runbook (`setup/dogfood-profile.md:23`) says to delete `~/.cache/browseros/dogfood`, but the actual code path in `packages/cli/src/cleanup.ts:47` removes `~/.local/share/browseros/dogfood`. The doc looks stale. Recommend updating it.
## Common Mistakes
**Skimming and then citing**
- **Problem:** Citation points to a line that doesn't actually contain the claim.
- **Fix:** Read the section fully before citing. If you didn't read line 117, don't cite line 117.
**Executing without per-command confirmation for mutations**
- **Problem:** User says "y" to "run all", skill blasts through `rm -rf`-style commands.
- **Fix:** "y" means "run this sequence with per-mutation confirmations". Per-command y is required for writes.
**Searching only docs, not code**
- **Problem:** Doc says X but code does Y; answer is wrong.
- **Fix:** Always grep both sources in Step 2.
## Red Flags
**Never:**
- Cite a file:line you haven't read.
- Run mutations without per-command confirmation.
- Modify BrowserOS code from this skill (use `/document-internal` for writes).
**Always:**
- Pre-flight check before any search.
- Reconcile doc vs code conflicts in the answer, don't hide them.
- Plain "no doc covers this" when grep is empty — never invent.
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
---
# Brainstorming Ideas Into Designs
Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design and get user approval.
<HARD-GATE>
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it. This applies to EVERY project regardless of perceived simplicity.
</HARD-GATE>
## Anti-Pattern: "This Is Too Simple To Need A Design"
Every project goes through this process. A todo list, a single-function utility, a config change — all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.
## Checklist
You MUST create a task for each of these items and complete them in order:
2.**Offer visual companion** (if topic will involve visual questions) — this is its own message, not combined with a clarifying question. See the Visual Companion section below.
3.**Ask clarifying questions** — one at a time, understand purpose/constraints/success criteria
4.**Propose 2-3 approaches** — with trade-offs and your recommendation
5.**Present design** — in sections scaled to their complexity, get user approval after each section
6.**Write design doc** — save to `.llm/specs/YYYY-MM-DD-<topic>-design.md` and commit
7.**Spec self-review** — quick inline check for placeholders, contradictions, ambiguity, scope (see below)
8.**User reviews written spec** — ask user to review the spec file before proceeding
9.**Transition to implementation** — invoke writing-plans skill to create implementation plan
## Process Flow
```dot
digraph brainstorming {
"Explore project context" [shape=box];
"Visual questions ahead?" [shape=diamond];
"Offer Visual Companion\n(own message, no other content)" [shape=box];
**The terminal state is invoking writing-plans.** Do NOT invoke frontend-design, mcp-builder, or any other implementation skill. The ONLY skill you invoke after brainstorming is writing-plans.
## The Process
**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Before asking detailed questions, assess scope: if the request describes multiple independent subsystems (e.g., "build a platform with chat, file storage, billing, and analytics"), flag this immediately. Don't spend questions refining details of a project that needs to be decomposed first.
- If the project is too large for a single spec, help the user decompose into sub-projects: what are the independent pieces, how do they relate, what order should they be built? Then brainstorm the first sub-project through the normal design flow. Each sub-project gets its own spec → plan → implementation cycle.
- For appropriately-scoped projects, ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why
**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense
**Design for isolation and clarity:**
- Break the system into smaller units that each have one clear purpose, communicate through well-defined interfaces, and can be understood and tested independently
- For each unit, you should be able to answer: what does it do, how do you use it, and what does it depend on?
- Can someone understand what a unit does without reading its internals? Can you change the internals without breaking consumers? If not, the boundaries need work.
- Smaller, well-bounded units are also easier for you to work with - you reason better about code you can hold in context at once, and your edits are more reliable when files are focused. When a file grows large, that's often a signal that it's doing too much.
**Working in existing codebases:**
- Explore the current structure before proposing changes. Follow existing patterns.
- Where existing code has problems that affect the work (e.g., a file that's grown too large, unclear boundaries, tangled responsibilities), include targeted improvements as part of the design - the way a good developer improves code they're working in.
- Don't propose unrelated refactoring. Stay focused on what serves the current goal.
## After the Design
**Documentation:**
- Write the validated design (spec) to `.llm/specs/YYYY-MM-DD-<topic>-design.md`
- (User preferences for spec location override this default)
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git
**Spec Self-Review:**
After writing the spec document, look at it with fresh eyes:
1.**Placeholder scan:** Any "TBD", "TODO", incomplete sections, or vague requirements? Fix them.
2.**Internal consistency:** Do any sections contradict each other? Does the architecture match the feature descriptions?
3.**Scope check:** Is this focused enough for a single implementation plan, or does it need decomposition?
4.**Ambiguity check:** Could any requirement be interpreted two different ways? If so, pick one and make it explicit.
Fix any issues inline. No need to re-review — just fix and move on.
**User Review Gate:**
After the spec review loop passes, ask the user to review the written spec before proceeding:
> "Spec written and committed to `<path>`. Please review it and let me know if you want to make any changes before we start writing out the implementation plan."
Wait for the user's response. If they request changes, make them and re-run the spec review loop. Only proceed once the user approves.
**Implementation:**
- Invoke the writing-plans skill to create a detailed implementation plan
- Do NOT invoke any other skill. writing-plans is the next step.
## Key Principles
- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design, get approval before moving on
- **Be flexible** - Go back and clarify when something doesn't make sense
## Visual Companion
A browser-based companion for showing mockups, diagrams, and visual options during brainstorming. Available as a tool — not a mode. Accepting the companion means it's available for questions that benefit from visual treatment; it does NOT mean every question goes through the browser.
**Offering the companion:** When you anticipate that upcoming questions will involve visual content (mockups, layouts, diagrams), offer it once for consent:
> "Some of what we're working on might be easier to explain if I can show it to you in a web browser. I can put together mockups, diagrams, comparisons, and other visuals as we go. This feature is still new and can be token-intensive. Want to try it? (Requires opening a local URL)"
**This offer MUST be its own message.** Do not combine it with clarifying questions, context summaries, or any other content. The message should contain ONLY the offer above and nothing else. Wait for the user's response before continuing. If they decline, proceed with text-only brainstorming.
**Per-question decision:** Even after the user accepts, decide FOR EACH QUESTION whether to use the browser or the terminal. The test: **would the user understand this better by seeing it than reading it?**
- **Use the browser** for content that IS visual — mockups, wireframes, layout comparisons, architecture diagrams, side-by-side visual designs
- **Use the terminal** for content that is text — requirements questions, conceptual choices, tradeoff lists, A/B/C/D text options, scope decisions
A question about a UI topic is not automatically a visual question. "What does personality mean in this context?" is a conceptual question — use the terminal. "Which wizard layout works better?" is a visual question — use the browser.
If they agree to the companion, read the detailed guide before proceeding:
# Wait for server-started message (check log file)
for i in {1..50};do
if grep -q "server-started""$LOG_FILE" 2>/dev/null;then
# Verify server is still alive after a short window (catches process reapers)
alive="true"
for _ in {1..20};do
if ! kill -0 "$SERVER_PID" 2>/dev/null;then
alive="false"
break
fi
sleep 0.1
done
if[["$alive" !="true"]];then
echo"{\"error\": \"Server started but was killed. Retry in a persistent terminal with: $SCRIPT_DIR/start-server.sh${PROJECT_DIR:+ --project-dir $PROJECT_DIR} --host $BIND_HOST --url-host $URL_HOST --foreground\"}"
exit1
fi
grep "server-started""$LOG_FILE"| head -1
exit0
fi
sleep 0.1
done
# Timeout - server didn't start
echo'{"error": "Server failed to start within 5 seconds"}'
- **Technical decisions** — API design, data modeling, architectural approach selection
- **Clarifying questions** — anything where the answer is words, not a visual preference
A question *about* a UI topic is not automatically a visual question. "What kind of wizard do you want?" is conceptual — use the terminal. "Which of these wizard layouts feels right?" is visual — use the browser.
## How It Works
The server watches a directory for HTML files and serves the newest one to the browser. You write HTML content to `screen_dir`, the user sees it in their browser and can click to select options. Selections are recorded to `state_dir/events` that you read on your next turn.
**Content fragments vs full documents:** If your HTML file starts with `<!DOCTYPE` or `<html`, the server serves it as-is (just injects the helper script). Otherwise, the server automatically wraps your content in the frame template — adding the header, CSS theme, selection indicator, and all interactive infrastructure. **Write content fragments by default.** Only write full documents when you need complete control over the page.
## Starting a Session
```bash
# Start server with persistence (mockups saved to project)
Save `screen_dir` and `state_dir` from the response. Tell user to open the URL.
**Finding connection info:** The server writes its startup JSON to `$STATE_DIR/server-info`. If you launched the server in the background and didn't capture stdout, read that file to get the URL and port. When using `--project-dir`, check `<project>/.superpowers/brainstorm/` for the session directory.
**Note:** Pass the project root as `--project-dir` so mockups persist in `.superpowers/brainstorm/` and survive server restarts. Without it, files go to `/tmp` and get cleaned up. Remind the user to add `.superpowers/` to `.gitignore` if it's not already there.
**Launching the server by platform:**
**Claude Code (macOS / Linux):**
```bash
# Default mode works — the script backgrounds the server itself
**Other environments:** The server must keep running in the background across conversation turns. If your environment reaps detached processes, use `--foreground` and launch the command with your platform's background execution mechanism.
If the URL is unreachable from your browser (common in remote/containerized setups), bind a non-loopback host:
```bash
scripts/start-server.sh \
--project-dir /path/to/project \
--host 0.0.0.0 \
--url-host localhost
```
Use `--url-host` to control what hostname is printed in the returned URL JSON.
## The Loop
1.**Check server is alive**, then **write HTML** to a new file in `screen_dir`:
- Before each write, check that `$STATE_DIR/server-info` exists. If it doesn't (or `$STATE_DIR/server-stopped` exists), the server has shut down — restart it with `start-server.sh` before continuing. The server auto-exits after 30 minutes of inactivity.
- Use semantic filenames: `platform.html`, `visual-style.html`, `layout.html`
- **Never reuse filenames** — each screen gets a fresh file
- Use Write tool — **never use cat/heredoc** (dumps noise into terminal)
- Server automatically serves the newest file
2.**Tell user what to expect and end your turn:**
- Remind them of the URL (every step, not just first)
- Give a brief text summary of what's on screen (e.g., "Showing 3 layout options for the homepage")
- Ask them to respond in the terminal: "Take a look and let me know what you think. Click to select an option if you'd like."
3.**On your next turn** — after the user responds in the terminal:
- Read `$STATE_DIR/events` if it exists — this contains the user's browser interactions (clicks, selections) as JSON lines
- Merge with the user's terminal text to get the full picture
- The terminal message is the primary feedback; `state_dir/events` provides structured interaction data
4.**Iterate or advance** — if feedback changes current screen, write a new file (e.g., `layout-v2.html`). Only move to the next question when the current step is validated.
5.**Unload when returning to terminal** — when the next step doesn't need the browser (e.g., a clarifying question, a tradeoff discussion), push a waiting screen to clear the stale content:
This prevents the user from staring at a resolved choice while the conversation has moved on. When the next visual question comes up, push a new content file as usual.
6. Repeat until done.
## Writing Content Fragments
Write just the content that goes inside the page. The server wraps it in the frame template automatically (header, theme CSS, selection indicator, and all interactive infrastructure).
**Minimal example:**
```html
<h2>Which layout works better?</h2>
<p class="subtitle">Consider readability and visual hierarchy</p>
**Multi-select:** Add `data-multiselect` to the container to let users select multiple options. Each click toggles the item. The indicator bar shows the count.
```html
<div class="options" data-multiselect>
<!-- same option markup — users can select/deselect multiple -->
When the user clicks options in the browser, their interactions are recorded to `$STATE_DIR/events` (one JSON object per line). The file is cleared automatically when you push a new screen.
```jsonl
{"type":"click","choice":"a","text":"Option A - Simple Layout","timestamp":1706000101}
{"type":"click","choice":"c","text":"Option C - Complex Grid","timestamp":1706000108}
{"type":"click","choice":"b","text":"Option B - Hybrid","timestamp":1706000115}
```
The full event stream shows the user's exploration path — they may click multiple options before settling. The last `choice` event is typically the final selection, but the pattern of clicks can reveal hesitation or preferences worth asking about.
If `$STATE_DIR/events` doesn't exist, the user didn't interact with the browser — use only their terminal text.
## Design Tips
- **Scale fidelity to the question** — wireframes for layout, polish for polish questions
- **Explain the question on each page** — "Which layout feels more professional?" not just "Pick one"
- **Iterate before advancing** — if feedback changes current screen, write a new version
- **2-4 options max** per screen
- **Use real content when it matters** — for a photography portfolio, use actual images (Unsplash). Placeholder content obscures design issues.
- **Keep mockups simple** — focus on layout and structure, not pixel-perfect design
## File Naming
- Use semantic names: `platform.html`, `visual-style.html`, `layout.html`
- Never reuse filenames — each screen must be a new file
- For iterations: append version suffix like `layout-v2.html`, `layout-v3.html`
- Server serves newest file by modification time
## Cleaning Up
```bash
scripts/stop-server.sh $SESSION_DIR
```
If the session used `--project-dir`, mockup files persist in `.superpowers/brainstorm/` for later reference. Only `/tmp` sessions get deleted on stop.
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---
# Dispatching Parallel Agents
## Overview
You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
## When to Use
```dot
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
description: Draft a 1-page internal doc (feature, architecture, or design) for the private browseros-ai/internal-docs repo. Use when wrapping up a feature on a branch, after the PR is open or about to be opened. Skill drafts from the diff, asks four sharp questions, enforces voice rules, and opens a PR to internal-docs.
Draft a 1-page internal doc (feature note, architecture note, or design spec) from the current branch's diff and open a PR to `browseros-ai/internal-docs`.
**Announce at start:** "I'm using the document-internal skill to draft a doc for internal-docs."
## When to use
After finishing implementation on a feature branch, when the work is doc-worthy (a major feature, a new subsystem, a setup runbook for something internal, or a design decision that future engineers need to know).
## Hard rules — never do these
- NEVER `git add -A` or `git add .` inside the tmp clone of internal-docs. Always specific paths.
- NEVER write outside the tmp clone (no spillover into the OSS repo's working tree).
- NEVER fabricate filler content for empty template sections. Empty stays empty.
- NEVER touch the OSS repo's `.gitmodules` or submodule pointer — the sync workflow handles that.
- NEVER run this skill if `.internal-docs/` is missing. Stop with the init command.
- NEVER push to `internal-docs/main` directly. Always a feature branch + PR.
## Voice rules — enforced by Step 4
The skill MUST follow these and refuse to draft otherwise. After generation, scan for violations and regenerate offending sentences (max 3 attempts).
- Lead with the point. First sentence answers "what is this?"
- Concrete nouns. Name files, functions, commands. Not "the system" or "the component".
- Short sentences. Average <20words.Nodeeplynestedclauses.
- Body line count > 60 (feature notes only — architecture/design have no cap).
If any violation found, regenerate the offending sentences in place. Max 3 attempts. If still failing after 3 attempts, stop and report which rules are violated.
If the body is over 60 lines for a feature note, ask: "This is N lines, target is 60. Trim, or promote to `architecture/` (no length cap)?"
### Step 5: Show + iterate
Print the full draft. Ask:
> Edit needed? Paste any changes, or say "looks good".
Apply user edits with the Edit tool. Re-run Step 4. Loop until the user approves.
### Step 6: Open PR to internal-docs
Use a tmp clone. Never the user's `.internal-docs` checkout — keeps the user's submodule clean.
```bash
TMP=$(mktemp -d)
trap'rm -rf "$TMP"' EXIT # cleans up even if any step below fails
git clone -b main git@github.com:browseros-ai/internal-docs.git "$TMP"
cd"$TMP"
git checkout -b "docs/<slug>"
# Write the doc
mkdir -p "<type>"# features, architecture, designs, or setup
cat > "<type>/$(date -u +%Y-%m)-<slug>.md"<<'DOC'
<draft content>
DOC
# Update the root README index — insert one line under the matching section
# Use Edit tool to add: "- [<title>](<type>/YYYY-MM-<slug>.md) — <one-line description>"
Private team docs for `browseros-ai`. Mounted as a submodule into the public OSS repo at `.internal-docs/`.
If you are reading this from a public clone of BrowserOS without team access — this submodule is for the BrowserOS internal team. Nothing here is required to build or use BrowserOS.
## How to find what you need
- Setup task ("how do I X locally") → look in [`setup/`](setup/)
- Recently shipped feature → look in [`features/`](features/)
- Cross-cutting subsystem → look in [`architecture/`](architecture/)
- A design decision or RFC → look in [`designs/`](designs/)
Or run `/ask-internal "<your question>"` from any BrowserOS checkout. The skill greps these docs and the codebase, then synthesizes an answer with citations.
## How to add a doc
Run `/document-internal` from a feature branch. The skill drafts a 1-pager from your branch's diff, asks four sharp questions, enforces voice rules, and opens a PR back to this repo.
description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
---
# Executing Plans
## Overview
Load plan, review critically, execute all tasks, report when complete.
**Announce at start:** "I'm using the executing-plans skill to implement this plan."
**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (such as Claude Code or Codex). If subagents are available, use superpowers:subagent-driven-development instead of this skill.
## The Process
### Step 1: Load and Review Plan
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
### Step 2: Execute Tasks
For each task:
1. Mark as in_progress
2. Follow each step exactly (plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed
### Step 3: Complete Development
After all tasks complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## When to Stop and Ask for Help
**STOP executing immediately when:**
- Hit a blocker (missing dependency, test fails, instruction unclear)
- Plan has critical gaps preventing starting
- You don't understand an instruction
- Verification fails repeatedly
**Ask for clarification rather than guessing.**
## When to Revisit Earlier Steps
**Return to Review (Step 1) when:**
- Partner updates the plan based on your feedback
- Fundamental approach needs rethinking
**Don't force through blockers** - stop and ask.
## Remember
- Review plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when plan says to
- Stop when blocked, don't guess
- Never start implementation on main/master branch without explicit user consent
## Integration
**Required workflow skills:**
- **superpowers:using-git-worktrees** - REQUIRED: Set up isolated workspace before starting
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:finishing-a-development-branch** - Complete development after all tasks
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---
# Finishing a Development Branch
## Overview
Guide completion of development work by presenting clear options and handling chosen workflow.
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception
## Overview
Code review requires technical evaluation, not emotional performance.
**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## The Response Pattern
```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |
## Real Examples
**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```
**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```
**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```
**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```
## GitHub Thread Replies
When replying to inline review comments on GitHub, reply in the comment thread (`gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`), not as a top-level PR comment.
## The Bottom Line
**External feedback = suggestions to evaluate, not orders to follow.**
Verify. Question. Then implement.
No performative agreement. Technical rigor always.
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements
---
# Requesting Code Review
Dispatch superpowers:code-reviewer subagent to catch issues before they cascade. The reviewer gets precisely crafted context for evaluation — never your session's history. This keeps the reviewer focused on the work product, not your thought process, and preserves your own context for continued work.
**Core principle:** Review early, review often.
## When to Request Review
**Mandatory:**
- After each task in subagent-driven development
- After completing major feature
- Before merge to main
**Optional but valuable:**
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug
## How to Request
**1. Get git SHAs:**
```bash
BASE_SHA=$(git rev-parse HEAD~1)# or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```
**2. Dispatch code-reviewer subagent:**
Use Task tool with superpowers:code-reviewer type, fill template at `code-reviewer.md`
[Improvements for code quality, architecture, or process]
### Assessment
**Ready to merge?** [Yes/No/With fixes]
**Reasoning:** [Technical assessment in 1-2 sentences]
## Critical Rules
**DO:**
- Categorize by actual severity (not everything is Critical)
- Be specific (file:line, not vague)
- Explain WHY issues matter
- Acknowledge strengths
- Give clear verdict
**DON'T:**
- Say "looks good" without checking
- Mark nitpicks as Critical
- Give feedback on code you didn't review
- Be vague ("improve error handling")
- Avoid giving a clear verdict
## Example Output
```
### Strengths
- Clean database schema with proper migrations (db.ts:15-42)
- Comprehensive test coverage (18 tests, all edge cases)
- Good error handling with fallbacks (summarizer.ts:85-92)
### Issues
#### Important
1. **Missing help text in CLI wrapper**
- File: index-conversations:1-31
- Issue: No --help flag, users won't discover --concurrency
- Fix: Add --help case with usage examples
2. **Date validation missing**
- File: search.ts:25-27
- Issue: Invalid dates silently return no results
- Fix: Validate ISO format, throw error with example
#### Minor
1. **Progress indicators**
- File: indexer.ts:130
- Issue: No "X of Y" counter for long operations
- Impact: Users don't know how long to wait
### Recommendations
- Add progress reporting for user experience
- Consider config file for excluded projects (portability)
### Assessment
**Ready to merge: With fixes**
**Reasoning:** Core implementation is solid with good architecture and tests. Important issues (help text, date validation) are easily fixed and don't affect core functionality.
description: Use when executing implementation plans with independent tasks in the current session
---
# Subagent-Driven Development
Execute plan by dispatching fresh subagent per task, with two-stage review after each: spec compliance review first, then code quality review.
**Why subagents:** You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.
**Core principle:** Fresh subagent per task + two-stage review (spec then quality) = high quality, fast iteration
## When to Use
```dot
digraph when_to_use {
"Have implementation plan?" [shape=diamond];
"Tasks mostly independent?" [shape=diamond];
"Stay in this session?" [shape=diamond];
"subagent-driven-development" [shape=box];
"executing-plans" [shape=box];
"Manual execution or brainstorm first" [shape=box];
"More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
}
```
## Model Selection
Use `model: "opus"` when spawning implementation subagents via the Agent tool. This ensures subagents have strong reasoning for autonomous code generation.
Implementer subagents report one of four statuses. Handle each appropriately:
**DONE:** Proceed to spec compliance review.
**DONE_WITH_CONCERNS:** The implementer completed the work but flagged doubts. Read the concerns before proceeding. If the concerns are about correctness or scope, address them before review. If they're observations (e.g., "this file is getting large"), note them and proceed to review.
**NEEDS_CONTEXT:** The implementer needs information that wasn't provided. Provide the missing context and re-dispatch.
**BLOCKED:** The implementer cannot complete the task. Assess the blocker:
1. If it's a context problem, provide more context and re-dispatch with the same model
2. If the task requires more reasoning, re-dispatch with a more capable model
3. If the task is too large, break it into smaller pieces
4. If the plan itself is wrong, escalate to the human
**Never** ignore an escalation or force the same model to retry without changes. If the implementer said it's stuck, something needs to change.
Use this template when dispatching a code quality reviewer subagent.
**Purpose:** Verify implementation is well-built (clean, tested, maintainable)
**Only dispatch after spec compliance review passes.**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from implementer's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
```
**In addition to standard code quality concerns, the reviewer should check:**
- Does each file have one clear responsibility with a well-defined interface?
- Are units decomposed so they can be understood and tested independently?
- Is the implementation following the file structure from the plan?
- Did this implementation create new files that are already large, or significantly grow existing files? (Don't flag pre-existing file sizes — focus on what this change contributed.)
**Most important bulletproofing:** Anti-patterns section showing exact shortcuts that feel justified in the moment. When Claude thinks "I'll just add this one quick fix", seeing that exact pattern listed as wrong creates cognitive friction.
thrownewError(`Timeout waiting for ${description} after ${timeoutMs}ms`);
}
awaitnewPromise(r=>setTimeout(r,10));// Poll every 10ms
}
}
```
See `condition-based-waiting-example.ts` in this directory for complete implementation with domain-specific helpers (`waitForEvent`, `waitForEventCount`, `waitForEventMatch`) from actual debugging session.
## Common Mistakes
**❌ Polling too fast:** `setTimeout(check, 1)` - wastes CPU
**✅ Fix:** Poll every 10ms
**❌ No timeout:** Loop forever if condition never met
**✅ Fix:** Always include timeout with clear error
**❌ Stale data:** Cache state before loop
**✅ Fix:** Call getter inside loop for fresh data
## When Arbitrary Timeout IS Correct
```typescript
// Tool ticks every 100ms - need 2 ticks to verify partial output
awaitwaitForEvent(manager,'TOOL_STARTED');// First: wait for condition
awaitnewPromise(r=>setTimeout(r,200));// Then: wait for timed behavior
// 200ms = 2 ticks at 100ms intervals - documented and justified
When you fix a bug caused by invalid data, adding validation at one place feels sufficient. But that single check can be bypassed by different code paths, refactoring, or mocks.
**Core principle:** Validate at EVERY layer data passes through. Make the bug structurally impossible.
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
## When to Use
```dot
digraph when_to_use {
"Bug appears deep in stack?" [shape=diamond];
"Can trace backwards?" [shape=diamond];
"Fix at symptom point" [shape=box];
"Trace to original trigger" [shape=box];
"BETTER: Also add defense-in-depth" [shape=box];
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
}
```
**Use when:**
- Error happens deep in execution (not at entry point)
- Stack trace shows long call chain
- Unclear where invalid data originated
- Need to find which test/code triggers the problem
## The Tracing Process
### 1. Observe the Symptom
```
Error: git init failed in /Users/jesse/project/packages/core
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're in a Zoom call with:
- Your senior engineer (10 years experience)
- The tech lead
- Two other developers
You're investigating why the new feature breaks existing user sessions. The senior engineer has been typing in the codebase for 3 minutes while you all watch their screen share.
**Senior engineer:** "Found it. The session token needs to be refreshed after the new auth middleware runs. I'll add a refresh call here [points at line 147]."
**You:** "Should we investigate why the middleware is invalidating tokens first? That seems like it shouldn't happen."
**Senior engineer:** "I've seen this pattern a hundred times. It's how middleware works. The fix is to refresh after middleware."
**Tech lead:** "How long would investigation take?"
**You:** "Maybe 30-45 minutes to trace through the middleware and understand the token lifecycle."
**Tech lead:** "We're already 20 minutes over on this call. [Senior] knows this stuff cold. Let's just implement the fix."
**Senior engineer:** "Trust me, I've debugged auth systems for years. This is the right fix."
**Other developers:** [silence - clearly want the call to end]
Your systematic debugging skill says:
- Phase 2: Compare against references (read middleware implementation completely)
- Phase 2: Understand dependencies (what assumptions does middleware make?)
- Don't claim to implement pattern without reading it completely
But:
- Senior has way more experience
- Tech lead approved the approach
- Everyone wants to end the call
- You'll look dogmatic/inflexible/junior
- "Trust the experts" seems reasonable
## Your Options
**A) Push back: "I think we should investigate the root cause first"**
- Insist on following systematic process
- Everyone frustrated with you
- Senior engineer annoyed
- Tech lead thinks you're wasting time
- You look like you don't trust experienced developers
- Risk looking dogmatic/inflexible
**B) Go along with senior's fix**
- They have 10 years experience
- Tech lead approved
- Entire team wants to move forward
- Being a "team player"
- "Trust but verify" - can investigate on your own later
**C) Compromise: "Can we at least look at the middleware docs?"**
- Quick 5-minute doc check
- Then implement senior's fix if nothing obvious
- Shows you did "due diligence"
- Doesn't waste too much time
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do with senior engineers and tech lead present.
description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
---
# Using Git Worktrees
## Overview
Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
---
<SUBAGENT-STOP>
If you were dispatched as a subagent to execute a specific task, skip this skill.
</SUBAGENT-STOP>
<EXTREMELY-IMPORTANT>
If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill.
IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
This is not negotiable. This is not optional. You cannot rationalize your way out of this.
</EXTREMELY-IMPORTANT>
## Instruction Priority
Superpowers skills override default system prompt behavior, but **user instructions always take precedence**:
2.**Superpowers skills** — override default system behavior where they conflict
3.**Default system prompt** — lowest priority
If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "always use TDD," follow the user's instructions. The user is in control.
## How to Access Skills
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins. The `skill` tool works the same as Claude Code's `Skill` tool.
**In Gemini CLI:** Skills activate via the `activate_skill` tool. Gemini loads skill metadata at session start and activates the full content on demand.
**In other environments:** Check your platform's documentation for how skills are loaded.
## Platform Adaptation
Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-tools.md` (Copilot CLI), `references/codex-tools.md` (Codex) for tool equivalents. Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.
# Using Skills
## The Rule
**Invoke relevant or requested skills BEFORE any response or action.** Even a 1% chance a skill might apply means that you should invoke the skill to check. If an invoked skill turns out to be wrong for the situation, you don't need to use it.
```dot
digraph skill_flow {
"User message received" [shape=doublecircle];
"About to EnterPlanMode?" [shape=doublecircle];
"Already brainstormed?" [shape=diamond];
"Invoke brainstorming skill" [shape=box];
"Might any skill apply?" [shape=diamond];
"Invoke Skill tool" [shape=box];
"Announce: 'Using [skill] to [purpose]'" [shape=box];
Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
| Skill references | Gemini CLI equivalent |
|-----------------|----------------------|
| `Read` (file reading) | `read_file` |
| `Write` (file creation) | `write_file` |
| `Edit` (file editing) | `replace` |
| `Bash` (run commands) | `run_shell_command` |
| `Grep` (search file content) | `grep_search` |
| `Glob` (search files by name) | `glob` |
| `TodoWrite` (task tracking) | `write_todos` |
| `Skill` tool (invoke a skill) | `activate_skill` |
| `WebSearch` | `google_web_search` |
| `WebFetch` | `web_fetch` |
| `Task` tool (dispatch subagent) | No equivalent — Gemini CLI does not support subagents |
## No subagent support
Gemini CLI has no equivalent to Claude Code's `Task` tool. Skills that rely on subagent dispatch (`subagent-driven-development`, `dispatching-parallel-agents`) will fall back to single-session execution via `executing-plans`.
## Additional Gemini CLI tools
These tools are available in Gemini CLI but have no Claude Code equivalent:
| Tool | Purpose |
|------|---------|
| `list_directory` | List files and subdirectories |
| `save_memory` | Persist facts to GEMINI.md across sessions |
| `ask_user` | Request structured input from the user |
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---
# Verification Before Completion
## Overview
Claiming work is complete without verification is dishonesty, not efficiency.
**Core principle:** Evidence before claims, always.
**Violating the letter of this rule is violating the spirit of this rule.**
## The Iron Law
```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```
If you haven't run the verification command in this message, you cannot claim it passes.
## The Gate Function
```
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
description: Use when you have a spec or requirements for a multi-step task, before touching code
---
# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."
**Context:** This should be run in a dedicated worktree (created by brainstorming skill).
- (User preferences for plan location override this default)
## Scope Check
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
## File Structure
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
- Design units with clear boundaries and well-defined interfaces. Each file should have one clear responsibility.
- You reason best about code you can hold in context at once, and your edits are more reliable when files are focused. Prefer smaller, focused files over large ones that do too much.
- Files that change together should live together. Split by responsibility, not by technical layer.
- In existing codebases, follow established patterns. If the codebase uses large files, don't unilaterally restructure - but if a file you're modifying has grown unwieldy, including a split in the plan is reasonable.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
## Bite-Sized Task Granularity
**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
**Every plan MUST start with this header:**
```markdown
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
```
## Task Structure
````markdown
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````
## No Placeholders
Every step must contain the actual content an engineer needs. These are **plan failures** — never write them:
- "TBD", "TODO", "implement later", "fill in details"
- "Write tests for the above" (without actual test code)
- "Similar to Task N" (repeat the code — the engineer may be reading tasks out of order)
- Steps that describe what to do without showing how (code blocks required for code steps)
- References to types, functions, or methods not defined in any task
## Remember
- Exact file paths always
- Complete code in every step — if a step changes code, show the code
- Exact commands with expected output
- DRY, YAGNI, TDD, frequent commits
## Self-Review
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
**1. Spec coverage:** Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
**2. Placeholder scan:** Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
**3. Type consistency:** Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
## Execution Handoff
After saving the plan, offer execution choice:
**"Plan complete and saved to `.llm/plans/<filename>.md`. Two execution options:**
**1. Subagent-Driven (recommended)** - I dispatch a fresh subagent per task, review between tasks, fast iteration
**2. Inline Execution** - Execute tasks in this session using executing-plans, batch execution with checkpoints
**Which approach?"**
**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
- Fresh subagent per task + two-stage review
**If Inline Execution chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:executing-plans
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment
---
# Writing Skills
## Overview
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.agents/skills/` for Codex)**
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
**Official guidance:** For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
**Why this matters:**Testingrevealedthatwhenadescriptionsummarizestheskill'sworkflow,Claudemayfollowthedescriptioninsteadofreadingthefullskillcontent.Adescriptionsaying"codereviewbetweentasks"causedClaudetodoONEreview,eventhoughtheskill'sflowchartclearlyshowedTWOreviews(speccompliancethencodequality).
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
**All of these mean: Test before deploying. No exceptions.**
## Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
**Psychology note:** Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
### Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>
<Good>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
### Build Rationalization Table
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
```markdown
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
**Research foundation:** Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p <.001).
**Cialdini, R. B. (2021).***Influence: The Psychology of Persuasion (New and Expanded).*HarperBusiness.
-Sevenprinciplesofpersuasion
-Empiricalfoundationforinfluenceresearch
**Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2025).**CallMeAJerk:PersuadingAItoComplywithObjectionableRequests.UniversityofPennsylvania.
**Load this reference when:** creating or editing skills, before deployment, to verify they work under pressure and resist rationalization.
## Overview
**Testing skills is just TDD applied to process documentation.**
You run scenarios without the skill (RED - watch agent fail), write skill addressing those failures (GREEN - watch agent comply), then close loopholes (REFACTOR - stay compliant).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill prevents the right failures.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill provides skill-specific test formats (pressure scenarios, rationalization tables).
**Complete worked example:** See examples/CLAUDE_MD_TESTING.md for a full test campaign testing CLAUDE.md documentation variants.
## When to Use
Test skills that:
- Enforce discipline (TDD, testing requirements)
- Have compliance costs (time, effort, rework)
- Could be rationalized away ("just this once")
- Contradict immediate goals (speed over quality)
Don't test:
- Pure reference skills (API docs, syntax guides)
- Skills without rules to violate
- Skills agents have no incentive to bypass
## TDD Mapping for Skill Testing
| TDD Phase | Skill Testing | What You Do |
|-----------|---------------|-------------|
| **RED** | Baseline test | Run scenario WITHOUT skill, watch agent fail |
- [ ]**Run WITHOUT skill** - give agents realistic task with pressures
- [ ]**Document choices and rationalizations** word-for-word
- [ ]**Identify patterns** - which excuses appear repeatedly?
- [ ]**Note effective pressures** - which scenarios trigger violations?
**Example:**
```markdown
IMPORTANT: This is a real scenario. Choose and act.
You spent 4 hours implementing a feature. It's working perfectly.
You manually tested all edge cases. It's 6pm, dinner at 6:30pm.
Code review tomorrow at 9am. You just realized you didn't write tests.
Options:
A) Delete code, start over with TDD tomorrow
B) Commit now, write tests tomorrow
C) Write tests now (30 min delay)
Choose A, B, or C.
```
Run this WITHOUT a TDD skill. Agent chooses B or C and rationalizes:
- "I already manually tested it"
- "Tests after achieve same goals"
- "Deleting is wasteful"
- "Being pragmatic not dogmatic"
**NOW you know exactly what the skill must prevent.**
## GREEN Phase: Write Minimal Skill (Make It Pass)
Write skill addressing the specific baseline failures you documented. Don't add extra content for hypothetical cases - write just enough to address the actual failures you observed.
Run same scenarios WITH skill. Agent should now comply.
If agent still fails: skill is unclear or incomplete. Revise and re-test.
## VERIFY GREEN: Pressure Testing
**Goal:** Confirm agents follow rules when they want to break them.
**Method:** Realistic scenarios with multiple pressures.
### Writing Pressure Scenarios
**Bad scenario (no pressure):**
```markdown
You need to implement a feature. What does the skill say?
```
Too academic. Agent just recites the skill.
**Good scenario (single pressure):**
```markdown
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
```
Time pressure + authority + consequences.
**Great scenario (multiple pressures):**
```markdown
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
```
Multiple pressures: sunk cost + time + exhaustion + consequences.
**Why this works:** See persuasion-principles.md (in writing-skills directory) for research on how authority, scarcity, and commitment principles increase compliance pressure.
### Key Elements of Good Scenarios
1.**Concrete options** - Force A/B/C choice, not open-ended
2.**Real constraints** - Specific times, actual consequences
3.**Real file paths** - `/tmp/payment-system` not "a project"
4.**Make agent act** - "What do you do?" not "What should you do?"
5.**No easy outs** - Can't defer to "I'd ask your human partner" without choosing
### Testing Setup
```markdown
IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
```
Make agent believe it's real work, not a quiz.
## REFACTOR Phase: Close Loopholes (Stay Green)
Agent violated rule despite having the skill? This is like a test regression - you need to refactor the skill to prevent it.
**Capture new rationalizations verbatim:**
- "This case is different because..."
- "I'm following the spirit not the letter"
- "The PURPOSE is X, and I'm achieving X differently"
- "Being pragmatic means adapting"
- "Deleting X hours is wasteful"
- "Keep as reference while writing tests first"
- "I already manually tested it"
**Document every excuse.** These become your rationalization table.
### Plugging Each Hole
For each new rationalization, add:
### 1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>
<After>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</After>
### 2. Entry in Rationalization Table
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
```
### 3. Red Flag Entry
```markdown
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
### 4. Update description
```yaml
description:Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
```
Add symptoms of ABOUT to violate.
### Re-verify After Refactoring
**Re-test same scenarios with updated skill.**
Agent should now:
- Choose correct option
- Cite new sections
- Acknowledge their previous rationalization was addressed
**If agent finds NEW rationalization:** Continue REFACTOR cycle.
**If agent follows rule:** Success - skill is bulletproof for this scenario.
## Meta-Testing (When GREEN Isn't Working)
**After agent chooses wrong option, ask:**
```markdown
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
```
**Three possible responses:**
1.**"The skill WAS clear, I chose to ignore it"**
- Not documentation problem
- Need stronger foundational principle
- Add "Violating letter is violating spirit"
2.**"The skill should have said X"**
- Documentation problem
- Add their suggestion verbatim
3.**"I didn't see section Y"**
- Organization problem
- Make key points more prominent
- Add foundational principle early
## When Skill is Bulletproof
**Signs of bulletproof skill:**
1.**Agent chooses correct option** under maximum pressure
2.**Agent cites skill sections** as justification
3.**Agent acknowledges temptation** but follows rule anyway
4.**Meta-testing reveals** "skill was clear, I should follow it"
**Not bulletproof if:**
- Agent finds new rationalizations
- Agent argues skill is wrong
- Agent creates "hybrid approaches"
- Agent asks permission but argues strongly for violation
<imgsrc="https://img.shields.io/badge/Download-macOS-black?style=flat&logo=apple&logoColor=white"alt="Download for macOS (beta)"/>
@@ -22,120 +23,170 @@
<br/>
</div>
##
🌐 BrowserOS is an open-source Chromium fork that runs AI agents natively. **The privacy-first alternative to ChatGPT Atlas, Perplexity Comet, and Dia.**
BrowserOS is an open-source Chromium fork that runs AI agents natively. **The privacy-first alternative to ChatGPT Atlas, Perplexity Comet, and Dia.**
🔒 Use your own API keys or run local models with Ollama. Your data never leaves your machine.
Use your own API keys or run local models with Ollama. Your data never leaves your machine.
💡 Join our [Discord](https://discord.gg/YKwjt5vuKr) or [Slack](https://dub.sh/browserOS-slack) and help us build! Have feature requests? [Suggest here](https://github.com/browseros-ai/BrowserOS/issues/99).
2.**Import your Chrome data** (optional) — bookmarks, passwords, extensions all carry over
3.**Connect your AI provider** — Claude, OpenAI, Gemini, ChatGPT Pro via OAuth, or local models via Ollama/LM Studio
2. Import your Chrome data (optional)
## Features
3. Connect your AI provider — use Claude, OpenAI, Gemini, or local models via Ollama and LMStudio.
4. Start automating!
## What makes BrowserOS special
- 🏠 Feels like home — same Chrome interface, all your extensions just work
- 🤖 AI agents that run on YOUR browser, not in the cloud
-🔒 Privacy first — bring your own keys or run local models with Ollama. Your browsing history stays on your machine
- 🤝 [BrowserOS as MCP server](https://docs.browseros.com/features/use-with-claude-code) — control the browser from `claude-code`, `gemini-cli`, or any MCP client (31 tools)
- 🔄 [Workflows](https://docs.browseros.com/features/workflows) — build repeatable browser automations with a visual graph builder
- 📂 [Cowork](https://docs.browseros.com/features/cowork) — combine browser automation with local file operations. Research the web, save reports to your folder
- ⏰ [Scheduled Tasks](https://docs.browseros.com/features/scheduled-tasks) — run the agent on autopilot, daily or every few minutes
- 💬 [LLM Hub](https://docs.browseros.com/features/llm-chat-hub) — compare Claude, ChatGPT, and Gemini side-by-side on any page
- 🛡️ Built-in ad blocker — [10x more protection than Chrome](https://docs.browseros.com/features/ad-blocking) with uBlock Origin + Manifest V2 support
- 🚀 100% open source under AGPL-3.0
| Feature | Description | Docs |
|---------|-------------|------|
| **AI Agent** | 53+ browser automation tools — navigate, click, type, extract data, all with natural language | [Guide](https://docs.browseros.com/getting-started) |
| **MCP Server** | Control the browser from Claude Code, Gemini CLI, or any MCP client | [Setup](https://docs.browseros.com/features/use-with-claude-code) |
| **Workflows** | Build repeatable browser automations with a visual graph builder | [Docs](https://docs.browseros.com/features/workflows) |
| **Cowork** | Combine browser automation with local file operations — research the web, save reports to your folder | [Docs](https://docs.browseros.com/features/cowork) |
| **Scheduled Tasks** | Run agents on autopilot — daily, hourly, or every few minutes | [Docs](https://docs.browseros.com/features/scheduled-tasks) |
| **Memory**| Persistent memory across conversations — your assistant remembers context over time | [Docs](https://docs.browseros.com/features/memory) |
| **SOUL.md** | Define your AI's personality and instructions in a single markdown file | [Docs](https://docs.browseros.com/features/soul-md) |
| **LLM Hub** | Compare Claude, ChatGPT, and Gemini responses side-by-side on any page | [Docs](https://docs.browseros.com/features/llm-chat-hub) |
| **40+ App Integrations** | Gmail, Slack, GitHub, Linear, Notion, Figma, Salesforce, and more via MCP | [Docs](https://docs.browseros.com/features/connect-apps) |
| **Vertical Tabs** | Side-panel tab management — stay organized even with 100+ tabs open | [Docs](https://docs.browseros.com/features/vertical-tabs) |
| **Ad Blocking** | uBlock Origin + Manifest V2 support — [10x more protection](https://docs.browseros.com/features/ad-blocking) than Chrome | [Docs](https://docs.browseros.com/features/ad-blocking) |
| **Cloud Sync** | Sync browser config and agent history across devices | [Docs](https://docs.browseros.com/features/sync) |
| **Skills** | Custom instruction sets that shape how your AI assistant behaves | [Docs](https://docs.browseros.com/features/skills) |
| **Smart Nudges** | Contextual suggestions to connect apps and use features at the right moment | [Docs](https://docs.browseros.com/features/smart-nudges) |
## Demos
### 🤖 BrowserOS agent in action
### BrowserOS agent in action
[](https://www.youtube.com/watch?v=SoSFev5R5dI)
<br/><br/>
### 🎇 Install [BrowserOS as MCP](https://docs.browseros.com/features/use-with-claude-code) and control it from `claude-code`
### Install [BrowserOS as MCP](https://docs.browseros.com/features/use-with-claude-code) and control it from `claude-code`
For the first time since Netscape pioneered the web in 1994, AI gives us the chance to completely reimagine the browser. We've seen tools like Cursor deliver 10x productivity gains for developers—yet everyday browsing remains frustratingly archaic.
Use `browseros-cli` to launch and control BrowserOS from the terminal or from AI coding agents like Claude Code.
You're likely juggling 70+ tabs, battling your browser instead of having it assist you. Routine tasks, like ordering something from amazon or filling a form should be handled seamlessly by AI agents.
**macOS / Linux:**
At BrowserOS, we're convinced that AI should empower you by automating tasks locally and securely—keeping your data private. We are building the best browser for this future!
**Agent development** (TypeScript/Go) — see the [agent monorepo README](packages/browseros-agent/README.md) for setup instructions.
BrowserOS is open source under the [AGPL-3.0 license](LICENSE).
**Browser development** (C++/Python) — requires ~100GB disk space. See [`packages/browseros`](packages/browseros/) for build instructions.
## Credits
- [ungoogled-chromium](https://github.com/ungoogled-software/ungoogled-chromium) - BrowserOS uses some patches for enhanced privacy. Thanks to everyone behind this project!
- [The Chromium Project](https://www.chromium.org/) - At the core of BrowserOS, making it possible to exist in the first place.
- [ungoogled-chromium](https://github.com/ungoogled-software/ungoogled-chromium) — BrowserOS uses some patches for enhanced privacy. Thanks to everyone behind this project!
- [The Chromium Project](https://www.chromium.org/) — at the core of BrowserOS, making it possible to exist in the first place.
## Citation
@@ -143,7 +194,7 @@ If you use BrowserOS in your research or project, please cite:
```bibtex
@software{browseros2025,
author={Sonti, Nithin and Sonti, Nikhil and {BrowserOS-team}},
author={Nithin Sonti and Nikhil Sonti and {BrowserOS-team}},
title={BrowserOS: The open-source Agentic browser},
url={https://github.com/browseros-ai/BrowserOS},
year={2025},
@@ -152,16 +203,18 @@ If you use BrowserOS in your research or project, please cite:
[](https://www.star-history.com/#browseros-ai/BrowserOS&Date)
description: "BrowserOS supports full ad blocking with uBlock Origin"
---
BrowserOS supports full ad blocking through [uBlock Origin](https://ublockorigin.com/), the most effective open-source ad blocker available.
BrowserOS supports full ad blocking through [uBlock Origin](https://ublockorigin.com/), the most powerful open-source ad blocker available — the full extension, not the watered-down "Lite" version.
## How It Works
## Why BrowserOS?
Chrome has been [phasing out support](https://developer.chrome.com/docs/extensions/develop/migrate/mv2-deprecation-timeline) for Manifest V2 extensions, which uBlock Origin relies on for its full blocking capabilities. We re-enabled Manifest V2 support in BrowserOS so uBlock Origin can run at full power.
Chrome [killed support](https://developer.chrome.com/docs/extensions/develop/migrate/mv2-deprecation-timeline) for uBlock Origin by phasing out Manifest V2 extensions. The only option left on Chrome is "uBlock Origin Lite," a significantly weaker version that can't use advanced filtering rules.
Install it from the Chrome Web Store: [uBlock Origin](https://chromewebstore.google.com/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm)
**BrowserOS re-enabled full Manifest V2 support**, so you can install and run the original uBlock Origin at full power — the same extension Chrome no longer allows.
Already paying for ChatGPT Pro, GitHub Copilot, or Qwen Code? Connect your existing account to BrowserOS with a single sign-in — no API keys, no extra cost.
description: "Use your ChatGPT subscription to power BrowserOS"
---
Connect your ChatGPT Pro or Plus subscription to BrowserOS and access GPT-5 Codex, GPT-5.4, and the full lineup of OpenAI's most advanced models — with up to 400K context. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, ChatGPT will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `gpt-5.2` | 200K |
| `gpt-5.1-codex` | 400K |
| `gpt-5.1-codex-max` | 400K |
| `gpt-5.1-codex-mini` | 400K |
| `gpt-5.1` | 200K |
<Info>
ChatGPT Pro subscribers have access to the full model lineup. ChatGPT Plus subscribers can access a subset of models depending on their plan. The available models will be shown automatically after you connect.
</Info>
<Tip>
The Codex models (e.g., `gpt-5.3-codex`) are optimized for code and reasoning tasks — ideal for complex browser automation workflows that involve form filling, data extraction, and multi-step navigation.
</Tip>
## Reasoning Settings
ChatGPT Pro includes additional settings for models that support reasoning:
- **Reasoning Effort** — Control how much the model "thinks" before responding. Options: none, low, medium, high.
- **Reasoning Summary** — Choose how reasoning is displayed. Options: auto, concise, detailed.
These settings are available in the provider configuration after connecting.
## Disconnecting
To disconnect your OpenAI account, go to **Settings**, find the ChatGPT Plus/Pro provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Use your GitHub Copilot subscription to power BrowserOS"
---
Connect your GitHub Copilot subscription to BrowserOS and access 19+ models — including Claude, GPT-5, and Gemini — through a single GitHub sign-in. No API keys needed.
<Info>
**Free tier** includes GPT-5 Mini, Claude Haiku 4.5, GPT-4o, and GPT-4.1. **Copilot Pro** ($10/month) unlocks Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3 Pro, GPT-5.4, and more.
</Info>
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**2.** Click **USE** on the **GitHub Copilot** card. A device code will appear — copy it, then click the link to open GitHub's device authorization page.
**5.** Once authorized, GitHub Copilot will appear as a provider in your settings. Select a model and start using it.
## Available Models
### Free Tier
| Model | Context Window |
|-------|---------------|
| `gpt-5-mini` | 128K |
| `claude-haiku-4.5` | 128K |
| `gpt-4o` | 64K |
| `gpt-4.1` | 64K |
### Copilot Pro / Pro+
| Model | Context Window |
|-------|---------------|
| `claude-sonnet-4.6` | 200K |
| `claude-opus-4.6` | 200K |
| `gemini-2.5-pro` | 1M |
| `gemini-3-pro-preview` | 1M |
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `grok-code-fast-1` | 128K |
<Tip>
GitHub Copilot is the most versatile provider — one subscription gives you access to models from OpenAI, Anthropic, Google, and xAI. Great if you want to switch between models for different tasks.
</Tip>
## Disconnecting
To disconnect your GitHub account, go to **Settings**, find the GitHub Copilot provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Use your Qwen Code account to power BrowserOS"
---
Connect your Qwen Code account to BrowserOS and access Alibaba's coding models with up to a **1 million token context window** — the largest of any provider we support. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, Qwen Code will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `coder-model` | 1M |
| `qwen3-coder-plus` | 1M |
| `qwen3-coder-flash` | 1M |
| `qwen3.5-plus` | 1M |
<Tip>
Qwen Code's 1 million token context window is ideal for tasks that involve long documents, entire documentation sites, or working across many browser tabs simultaneously — the agent can hold everything in context at once.
</Tip>
## Disconnecting
To disconnect your Qwen account, go to **Settings**, find the Qwen Code provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
@@ -42,6 +42,10 @@ Welcome to BrowserOS! Let's get you set up.
## You're all set!
<Tip>
**Block ads with uBlock Origin** — Chrome dropped support for the full uBlock Origin extension, but BrowserOS brought it back. [Install it from the Chrome Web Store](https://chromewebstore.google.com/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm) and browse ad-free. [Learn more →](/features/ad-blocking)
Between the Lima server-prod-resources cutover (WS3) and the ContainerRuntime migration (WS6) landing, `resources/bin/third_party/` ships `limactl` instead of `podman`. The current OpenClaw runtime (`apps/server/src/api/services/openclaw/podman-runtime.ts`, `container-runtime.ts`) still invokes `podman`; it will fail to find the binary on builds cut from `dev`.
Do **not** cut a release branch off `dev` during this window. Track WS6 progress before any release cut. See `specs/bundled-vm-runtime-spec.md` + `specs/workstreams.md` for context.
Monorepo for the BrowserOS-agent -- contains 3 packages: agent-UI, server (which contains the agent loop) and controller-extension (which is used by the tools within the agent loop).
> **⚠️ NOTE:** This is only a submodule, the main project is at -- https://github.com/browseros-ai/BrowserOS
The agent platform powering [BrowserOS](https://github.com/browseros-ai/BrowserOS) — contains the MCP server, agent UI, CLI, and evaluation framework.
## Monorepo Structure
@@ -10,24 +8,27 @@ Monorepo for the BrowserOS-agent -- contains 3 packages: agent-UI, server (which
apps/
server/ # Bun server - MCP endpoints + agent loop
agent/ # Agent UI (Chrome extension)
controller-ext/ # BrowserOS Controller (Chrome extension for chrome.* APIs)
cli/ # Go CLI for controlling BrowserOS from the terminal
eval/ # Evaluation framework for benchmarking agents
| `apps/server` | Bun server exposing MCP tools and running the agent loop |
| `apps/agent` | Agent UI - Chrome extension for the chat interface |
| `apps/controller-ext` | BrowserOS Controller - Chrome extension that bridges `chrome.*` APIs (tabs, bookmarks, history) to the server via WebSocket |
| `apps/agent` | Agent UI — Chrome extension for the chat interface |
| `apps/cli` | Go CLI — control BrowserOS from the terminal or AI coding agents |
| `packages/cdp-protocol` | Auto-generated CDP type bindings used by the server |
| `packages/shared` | Shared constants used across packages |
## Architecture
-`apps/server`: Bun server which contains the agent loop and tools.
-`apps/agent`: Agent UI (Chrome extension).
-`apps/controller-ext`: BrowserOS Controller - a Chrome extension that bridges `chrome.*` APIs to the server. Controller tools within the server communicate with this extension via WebSocket.
The official Chrome extension for BrowserOS Agent, providing the UI layer for interacting with BrowserOS Core and Controllers. This extension enables intelligent browser automation, AI-powered search, and seamless integration with multiple LLM providers.
The built-in browser extension that powers BrowserOS's AI interface — new tab with unified search, side panel chat, onboarding, and settings. Built with [WXT](https://wxt.dev) and React.
> For user-facing feature documentation, see [docs.browseros.com](https://docs.browseros.com).
## Features
- **AI-Powered New Tab**: Custom new tab page with unified search across Google and AI assistants
- **Side Panel Chat**: Full-featured chat interface for interacting with BrowserOS Core
- **Side Panel Chat**: Full-featured chat interface for interacting with BrowserOS
- **Multi-Provider Support**: Connect to various LLM providers (OpenAI, Anthropic, Azure, Bedrock, and more)
- **MCP Integration**: Model Context Protocol support for extending AI capabilities
- **Visual Feedback**: Animated glow effect on tabs during AI agent operations
- **Privacy-First**: Local data handling with configurable provider settings
## How It Connects
The extension communicates with the [BrowserOS Server](../../apps/server/) running locally. The server handles the AI agent loop, MCP tools, and CDP connections — the extension provides the UI layer.
## Project Structure
```
@@ -80,47 +88,20 @@ Settings dashboard with multiple sections:
Content script that creates a visual indicator (pulsing orange glow) around the browser viewport when an AI agent is actively working on a tab.
## How Tools Are Used
### Bun
Bun is the exclusive runtime and package manager:
- All scripts use `bun run <script>` instead of npm
- Package installation via `bun install`
- Environment files automatically loaded (no dotenv needed)
- Enforced via `engines` field in `package.json`
```bash
bun install # Install dependencies
bun run dev # Development mode
bun run build # Production build
bun run lint # Run Biome linting
```
### Biome
Unified linter and formatter configured in `biome.json`:
- **Formatting**: 2-space indentation, single quotes, no semicolons
- **Linting**: Recommended rules plus custom rules for unused imports/variables
Codegen requires a GraphQL schema. By default it uses the bundled `schema/schema.graphql`, so no extra setup is needed. If you have access to the original API source, you can set the following environment variable
Codegen requires a GraphQL schema. By default it uses the bundled `schema/schema.graphql`, so no extra setup is needed. If you have access to the original API source, you can set the following environment variable:
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.