* feat(build-tools): seed dev agent tarballs
* fix: address review comments for 0423-build_agent_tarball_dev_sync
* chore(build-tools): remove dev cache sync alias
Replace the podman-based runtime with nerdctl running inside the Lima
VM introduced in the previous commit. OpenClaw is cut over to the new
VM-backed container runtime; legacy podman code paths are removed.
- New container CLI (lib/container): nerdctl ContainerCli, ImageLoader
with cache-tarball fallback, shared types
- OpenClaw: container-runtime-factory orchestrates VM lifecycle + gateway
startup; container-runtime.ts rewritten to speak nerdctl; Linux test
startup kept disabled behind the factory
- Terminal: session + routes moved onto Lima shell transport; server
wires the VM-backed runtime via main.ts
- Agent UI: simplify AgentsPage/useOpenClaw after route consolidation
- Remove podman-runtime, podman-overrides, and their tests
- Tests: container-cli, image-loader, container-runtime-factory, and
updated openclaw/terminal/main suites
Introduce a new VM runtime layer using Lima for running containerised
workloads on macOS. Lifecycle covers decompress/create/start/stop with
stubs for upgrade/reset plus version-mismatch warnings.
- Foundation modules: paths, errors, manifest, telemetry
- lima.yaml generator + typed limactl wrapper with structured debug logging
- ssh ControlMaster transport for fast in-VM commands
- Ubuntu 24.04 minimal template, containerd default, 30GiB overlay disk
- browseros-dir helpers (getLimaHomeDir, getVmStateDir, getVmDisksDir);
OpenClaw dir moves into VM state dir
- Test helpers (fake-limactl, fake-ssh, test-env), vm-smoke integration
coverage, NODE_ENV propagation for spawned server test groups
* refactor(openclaw): rename http chat client to http client
Session history is about to land on the same HTTP client. 'Chat client'
will no longer describe it, so rename the class, file, and service field
up front. No behavior change.
* feat(openclaw): add session history fetch + sse stream to http client
Adds getSessionHistory (JSON) and streamSessionHistory (SSE) to the
OpenClaw HTTP client. Both target GET /sessions/<key>/history on the
loopback gateway, reusing the same bearer-token auth as streamChat.
- 404 from the gateway surfaces as OpenClawSessionNotFoundError so
callers can map it to a typed HTTP status.
- The SSE path parses named 'history', 'message', and 'error' events
into a typed OpenClawSessionHistoryEvent union.
- AbortSignal propagates to fetch and cancels the reader mid-stream.
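A minimal sketch of the SSE path, using the event and error names above;
the function shape and frame parsing are illustrative, not the actual
client code:
```ts
// Sketch only: event/error names from this commit, everything else assumed.
type OpenClawSessionHistoryEvent =
  | { type: "history"; data: unknown }
  | { type: "message"; data: unknown }
  | { type: "error"; data: unknown };

class OpenClawSessionNotFoundError extends Error {}

async function* streamSessionHistory(
  baseUrl: string,
  token: string,
  sessionKey: string,
  signal?: AbortSignal,
): AsyncGenerator<OpenClawSessionHistoryEvent> {
  const res = await fetch(
    `${baseUrl}/sessions/${encodeURIComponent(sessionKey)}/history`,
    {
      headers: { Authorization: `Bearer ${token}`, Accept: "text/event-stream" },
      signal, // aborting cancels both the fetch and the read loop below
    },
  );
  if (res.status === 404) throw new OpenClawSessionNotFoundError(sessionKey);
  if (!res.ok || !res.body) throw new Error(`history request failed: ${res.status}`);

  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += value;
    let idx: number;
    // SSE frames are separated by a blank line.
    while ((idx = buffer.indexOf("\n\n")) !== -1) {
      const frame = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
      const event = /^event: (.+)$/m.exec(frame)?.[1];
      const data = /^data: (.+)$/m.exec(frame)?.[1];
      if (!event || !data) continue;
      if (event === "history" || event === "message" || event === "error") {
        yield { type: event, data: JSON.parse(data) };
      }
    }
  }
}
```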
* feat(openclaw): expose session history over GET /claw/session/:key/history
Wire the new getSessionHistory / streamSessionHistory service methods
through a route that defaults to JSON and upgrades to SSE when the
client sends Accept: text/event-stream.
- OpenClawSessionNotFoundError lives in errors.ts alongside the other
OpenClaw errors so routes can import it from one place.
- The route propagates c.req.raw.signal into streamSessionHistory so
client disconnects cancel the upstream fetch.
- Route tests cover the JSON path (with query param forwarding), the
404 path, and the SSE framing.
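A hedged sketch of the route shape (Hono, implied by c.req.raw.signal);
the service interface, paths, and headers here are assumptions, not the
real wiring:
```ts
import { Hono } from "hono";

// Sketch only: JSON by default, SSE when the client asks for it.
declare const openClawService: {
  getSessionHistory(key: string, opts: { limit?: string }): Promise<unknown>;
  streamSessionHistory(
    key: string,
    opts: { signal: AbortSignal },
  ): Promise<ReadableStream<Uint8Array>>;
};

const app = new Hono();

app.get("/claw/session/:key/history", async (c) => {
  const key = c.req.param("key");
  const wantsSse = c.req.header("accept")?.includes("text/event-stream");

  if (!wantsSse) {
    // Default path: plain JSON history, forwarding the limit query param.
    const history = await openClawService.getSessionHistory(key, {
      limit: c.req.query("limit"),
    });
    return c.json(history);
  }

  // SSE upgrade: propagate the client's abort signal so a disconnect
  // cancels the upstream fetch to the gateway.
  const upstream = await openClawService.streamSessionHistory(key, {
    signal: c.req.raw.signal,
  });
  return new Response(upstream, {
    headers: {
      "content-type": "text/event-stream",
      "cache-control": "no-cache",
    },
  });
});
```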
* chore(openclaw): drop NaN from session history route limit param
Seeds ~/.browseros-dev/cache/vm/ from ./dist/ without touching R2, so
devs can test the server against a freshly-built tarball before anything
is published to cdn.browseros.com. Hardcodes arm64 since all devs are on
Apple Silicon; refuses to run unless NODE_ENV=development; idempotent
(skips copy on sha256 match).
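A rough sketch of the idempotent seeding flow; the tarball name is
hypothetical, while the NODE_ENV guard, hardcoded arm64, and sha256 skip
mirror the description above:
```ts
import { createHash } from "node:crypto";
import { existsSync } from "node:fs";
import { copyFile, mkdir } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

async function sha256(path: string): Promise<string> {
  const bytes = await Bun.file(path).arrayBuffer();
  return createHash("sha256").update(Buffer.from(bytes)).digest("hex");
}

if (process.env.NODE_ENV !== "development") {
  throw new Error("refusing to seed the dev cache outside NODE_ENV=development");
}

// Hypothetical artifact name; arm64 is hardcoded per the commit.
const name = "agent-darwin-arm64.tar.zst";
const src = join("dist", name);
const cacheDir = join(homedir(), ".browseros-dev", "cache", "vm");
const dest = join(cacheDir, name);

await mkdir(cacheDir, { recursive: true });
if (existsSync(dest) && (await sha256(dest)) === (await sha256(src))) {
  console.log("cache already matches dist, skipping copy");
} else {
  await copyFile(src, dest);
}
```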
Also fixes the R2_BUCKET default in .env.sample from browseros-artifacts
to browseros to match the actual bucket.
* feat(build-tools): add Lima template for BrowserOS VM
* feat(build-tools): remove build-disk pipeline and recipe directory
Task 2 verification removed the scripts, recipe directory, workflow, and
package scripts. Typecheck remains green here because the manifest disk
fields are removed in the next task, so the plan's expected missing-import
failure does not apply yet.
* feat(build-tools): rename VmManifest to AgentManifest, drop disk fields
* feat(build): stage Lima template into server resources
Verified local-resource staging with: bun scripts/build/server.ts
--target=darwin-arm64 --ci. The template was copied to
dist/prod/server/darwin-arm64/resources/vm/browseros-vm.yaml and included
in the zip. bun run build:server:test still fails on the pre-existing R2
limactl resource with: The specified key does not exist.
* docs(build-tools): Lima template dev loop + record D9
Updated the build-tools README in this worktree. Also recorded D9 in the
canonical external spec file at
/Users/shadowfax/llm/code/browseros-project/grove-ref/browseros-main/specs/decisions.md,
which is outside this git checkout.
* chore(build-tools): sweep orphaned references to retired disk pipeline
* chore: self-review fixes
* feat(vm-container): ship the WS1 VM disk image pipeline
New Bun/TS workspace package @browseros/vm-container that produces a
reproducible, versioned Debian 12 + Podman qcow2 disk image for arm64 and
x64, and publishes it to Cloudflare R2 under vm/<version>/ with a per-
version manifest.json and a latest.json pointer.
- virt-customize-driven build with a git-tracked recipe DSL.
- zstd-compressed artifacts; sha256 sidecars for compressed + uncompressed.
- Public surface at @browseros/vm-container/schema exposes zod-validated
VmManifest + R2 key helpers for WS4 to import; /download is a stub
landing pad for WS4 to fill in.
- Rollback on partial upload failure: any exception after the first
successful put deletes all previously uploaded keys for that version
(see the sketch after this list).
- GHA workflow build-vm-container.yml runs a matrix build per arch on
native runners, an x64 Lima boot smoke test, and a gated publish job.
- Full unit coverage for arch, r2-keys, manifest, recipe parser, and
publish (rollback + happy path via aws-sdk-client-mock).
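A sketch of the rollback behavior flagged above, assuming the
@aws-sdk/client-s3 wiring that the aws-sdk-client-mock tests imply; the
function name and key layout are illustrative:
```ts
import { DeleteObjectCommand, PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

async function publishVersion(
  s3: S3Client,
  bucket: string,
  objects: Array<{ key: string; body: Uint8Array }>,
): Promise<void> {
  const uploadedKeys: string[] = [];
  try {
    for (const { key, body } of objects) {
      await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));
      uploadedKeys.push(key);
    }
  } catch (err) {
    // Any exception after the first successful put deletes everything already
    // uploaded for this version, so a partial publish never goes live.
    for (const key of uploadedKeys) {
      await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key }));
    }
    throw err;
  }
}
```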
* fix(vm-container): address review comments
- Split buildDisk into prepareCustomizedDisk + finalizeArtifacts for
testability.
- Replace resolvePinnedSha's sentinel-prefix check with a positive
sha256-hex regex test, switch base-image.ts placeholder to empty string.
- Drop unused R2_VM_PREFIX from .env.example; document CDN_BASE_URL
override precedence in README.
- Replace the explicit SSH host-key list in the recipe with an `ssh_host_*`
glob so .pub keys and future key types are also removed.
- lima-boot: introduce BunRequestInit type for the unix fetch option and
reject empty limactlPath loudly.
- Extend publish test suite: mid-manifest-upload failure path verifies
both arches' qcow+sha are rolled back and latest.json is never written.
- Add missing tests: parseArch('ARM64') case-sensitivity rejection,
composeVirtCustomizeArgv unresolved-substitution pass-through.
* fix(vm-container): pin a real Debian snapshot, switch verify to SHA-512, streaming download
- Pin Debian base to bookworm/20260413-2447 with real SHA-512 values
from upstream SHA512SUMS (the sentinel placeholder never corresponded
to a real build). Debian cloud images only publish SHA512SUMS today,
so switch base-image verification to SHA-512 throughout: rename
BaseImage.sha256 → sha512, manifest field base_image_sha256 →
base_image_sha512, base_image.sha256_url → sha512_url,
debianSha256SumsUrl → debianSha512SumsUrl. Our own artifact hashes
(compressed_sha256, uncompressed_sha256, recipe_sha256) stay SHA-256.
- Fix downloadTo: the previous Bun.write(dest, response) buffered the
entire 300 MB response before writing (100% CPU, empty dir). Replace
with a getReader() loop that streams chunks through Bun.file().writer()
(see the sketch after this list).
- build CLI now auto-derives --version from today's date when omitted
(defaults to YYYY.MM.DD-dev1); explicit --version still overrides.
Broaden CALVER_REGEX to accept alphanumeric suffixes so -dev1/-rc1
tags are valid. New todayCalver() helper.
- Update GHA workflow fallback to github.run_number (shorter) instead
of run_id.
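A sketch of the streaming downloadTo fix referenced above; the signature
is assumed, the getReader() loop through Bun.file().writer() is the
substance of the change:
```ts
async function downloadTo(url: string, dest: string): Promise<void> {
  const response = await fetch(url);
  if (!response.ok || !response.body) {
    throw new Error(`download failed: ${response.status}`);
  }
  // Stream chunk-by-chunk instead of Bun.write(dest, response), which buffered
  // the whole body in memory before writing anything to disk.
  const writer = Bun.file(dest).writer();
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    writer.write(value);
    await writer.flush();
  }
  await writer.end();
}
```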
* fix(vm-container): resolve copy-in paths against recipeDir after substitution
The copy-in path resolver checked op.src.startsWith('/') before running
the {placeholder} substitution, so an absolute-after-substitution path
like {manifest_tmp} → /tmp/vm-dist/manifest-stub-arm64.json was treated
as relative and joined against recipeDir, producing a nonexistent path.
Check the *substituted* value for absoluteness via path.isAbsolute.
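A minimal sketch of the corrected check, with the op shape and the
substitution helper as assumptions:
```ts
import path from "node:path";

function resolveCopyInSrc(
  op: { src: string },
  recipeDir: string,
  substitute: (value: string) => string,
): string {
  const substituted = substitute(op.src);
  // Decide absolute vs relative on the *substituted* value: a placeholder like
  // {manifest_tmp} can expand to an absolute path even though op.src does not
  // start with '/'.
  return path.isAbsolute(substituted)
    ? substituted
    : path.join(recipeDir, substituted);
}
```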
* fix: address review comments for 0422-ws1_vm_disk_pipeline
* fix(ci): repair vm-container workflow
* fix(ci): expose vm build logs on failure
* fix(vm-container): expose base_image_sha256 in manifest per PRD
The published manifest contract (consumed by WS4) now uses base_image_sha256
as the PRD specified. Internally the build still verifies the downloaded
Debian base against the pinned sha512 (that's what Debian actually signs in
SHA512SUMS) — then hashes the same bytes as sha256 and records that in the
manifest. One extra digest pass of a ~300 MB file; negligible.
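A sketch of the verify-then-digest flow; it buffers the image for brevity
(the real build would likely stream), and the function shape is an
assumption:
```ts
import { createHash } from "node:crypto";

async function verifyAndDigestBaseImage(
  imagePath: string,
  pinnedSha512: string,
): Promise<{ sha512: string; sha256: string }> {
  const bytes = new Uint8Array(await Bun.file(imagePath).arrayBuffer());
  // Verify against the pinned value from Debian's SHA512SUMS.
  const sha512 = createHash("sha512").update(bytes).digest("hex");
  if (sha512 !== pinnedSha512) {
    throw new Error("base image does not match the pinned SHA512SUMS entry");
  }
  // Hash the same bytes again as sha256 for the consumer-facing manifest field.
  const sha256 = createHash("sha256").update(bytes).digest("hex");
  return { sha512, sha256 };
}
```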
- manifest.json: base_image_sha256 replaces base_image_sha512; sha512_url
removed (not needed — sha256 is the consumer-facing hash).
- CLI: --base-image-sha256 override validates against the locally-computed
sha256 after download.
- BuildResult.baseImage gains sha256 alongside sha512.
- Tests updated to the new field.
The auth.json bug (reviewer #2) is resolved: the source file is
recipe/auth.json and the recipe emits `copy-in auth.json:/etc/containers/`
so libguestfs writes /etc/containers/auth.json.
* ci(vm-container): fix supermin kernel-read + rename sha512 inputs to sha256
- Ubuntu 24.04 GHA runners ship /boot/vmlinuz-* as mode 0600, which blocks
libguestfs's supermin appliance builder when virt-customize runs as a
non-root user. Chmod 0644 before the build — canonical CI workaround.
- Rename workflow_dispatch input base_image_sha512 → base_image_sha256
and CLI flag --base-image-sha512 → --base-image-sha256 to match the
orchestrator's renamed override.
* ci(vm-container): give runner KVM access + install passt for libguestfs
The supermin fix got us past appliance-build, but virt-customize then hit
"passt exited with status 1". The passt networking helper misbehaves when
libguestfs falls back to TCG emulation, which happens because the runner
user isn't in the kvm group even though /dev/kvm exists on the GHA host.
- chmod 0666 /dev/kvm → libguestfs uses hardware acceleration, avoids TCG.
- install passt explicitly so the networking helper is present and current.
* ci(vm-container): disable passt to force libguestfs slirp fallback
libguestfs 1.54+ prefers passt for guest networking, but the passt binary
on GHA ubuntu-24.04 exits with status 1 when invoked from the appliance
— an AppArmor/capability issue that doesn't surface a useful diagnostic.
The reliable workaround is to remove passt so libguestfs picks QEMU's
built-in user-mode SLIRP as the network backend. SLIRP is slower but
functional and doesn't require escalated privileges.
- Guard uploaded_keys append with !dry_run so the rollback list
never contains keys for objects that were never written.
- Prefer GITHUB_ACTOR over the local OS username for manifest.uploaded_by;
manifest.json is CDN-fronted, so it should not leak a developer's login
(falls back to 'local').
- Extend test_windows_has_no_stale_third_party to cover bun.exe/rg.exe
too, matching the macOS forbidden-set pattern.
* feat(build): swap podman server resources for Lima (WS3)
- Upload limactl (arm64 + x64) to R2 via new 'browseros upload lima' CLI.
- Rewrite scripts/build/config/server-prod-resources.json: 2 Lima entries,
12 podman-family entries removed.
- Update codesign metadata (server_binaries.py) to add limactl, drop podman
family. Sign modules need no edits (data-driven).
- Delete orphaned podman-{vfkit,krunkit} entitlement plists.
- Release-gating note in browseros-agent/CLAUDE.md: don't cut releases off
dev between this commit and WS6 landing (OpenClaw still invokes podman).
* fix: address review comments for 0422-ws3_lima_resources
- Tighten _find_limactl_member to match exactly .../bin/limactl via
Path.parts, avoiding incidental matches like 'xbin/limactl'.
- Fall back USER -> USERNAME -> 'unknown' for uploaded_by so Windows
shells don't all record 'unknown'.
- Comment the broad except in upload_lima to explain why rollback
must fire for any mid-loop failure.
* chore: drop bun + rg from Windows sign list
These executables are already absent from server-prod-resources.json (no
Windows entries shipped); keeping them in the sign list produces
"Binary not found" warnings on every Windows build.
* feat(openclaw): dynamically allocate and persist gateway host port
The gateway container always listens on OPENCLAW_GATEWAY_CONTAINER_PORT
(18789) internally, but that port may be taken on the user's host. Allocate
a free host port on each lifecycle transition, persist it to
~/.browseros/openclaw/.openclaw/runtime-state.json, and prefer the
persisted value on subsequent starts so the mapping is stable.
Split the naming so the two sides of the -p mapping are no longer
ambiguous: the shared constant becomes OPENCLAW_GATEWAY_CONTAINER_PORT
and the service/spec/chat-client/runtime probes all use hostPort for
the mapped host-side port.
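A sketch of the allocate-then-persist flow; runtime-state.json and
hostPort come from the commit, the helper names and state shape are
assumptions:
```ts
import { existsSync } from "node:fs";
import { mkdir, readFile, writeFile } from "node:fs/promises";
import net from "node:net";
import path from "node:path";

function allocateFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const server = net.createServer();
    server.once("error", reject);
    server.listen(0, "127.0.0.1", () => {
      const { port } = server.address() as net.AddressInfo;
      server.close(() => resolve(port));
    });
  });
}

function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once("error", () => resolve(false));
    server.listen(port, "127.0.0.1", () => server.close(() => resolve(true)));
  });
}

async function resolveGatewayHostPort(stateDir: string): Promise<number> {
  const statePath = path.join(stateDir, "runtime-state.json");
  if (existsSync(statePath)) {
    const state = JSON.parse(await readFile(statePath, "utf8"));
    // Prefer the persisted port so the -p mapping stays stable across restarts.
    if (typeof state.hostPort === "number" && (await isPortFree(state.hostPort))) {
      return state.hostPort;
    }
  }
  // Otherwise allocate a fresh host port and persist it for next time.
  const hostPort = await allocateFreePort();
  await mkdir(stateDir, { recursive: true });
  await writeFile(statePath, JSON.stringify({ hostPort }, null, 2));
  return hostPort;
}
```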
* fix(openclaw): remove duplicate Podman overrides card from status panels
* feat(openclaw): user-supplied Podman binary path override
Expose the existing `configurePodmanRuntime({ podmanPath })` knob as a UI
input on the Agents page so users blocked by the bundled gvproxy helper
discovery bug can install their own Podman (e.g. `brew install podman`)
and point BrowserOS at it.
- podman-overrides.ts: persist {podmanPath} at ~/.browseros/.openclaw/
- openclaw-service: applyPodmanOverrides/getPodmanOverrides, rebuilds
ContainerRuntime + CLI clients in place (no server restart needed)
- routes: GET/POST /claw/podman-overrides with absolute-path + existsSync
validation
- main: load override on boot, pass resourcesDir into the service so
clearing the override restores bundled fallback
- AgentsPage: PodmanOverridesCard rendered inline in the degraded /
uninitialized / error cards and as a collapsible standalone section
Dev mode is unchanged; prod gets the same lever dev has had all along.
* refactor(openclaw): address review comments for podman-path override
- extract getPodmanOverrideValidationError() to mirror the existing
getCreateAgentValidationError() pattern in openclaw.ts
- extract rebuildRuntimeClients() so applyPodmanOverrides doesn't
re-spell the three-step runtime/CLI-client reinit
- rename shadowing local path -> overridesPath in loadPodmanOverrides
* fix(openclaw): clear gateway log tail before swapping runtime
rebuildRuntimeClients replaces this.runtime but the cached stopLogTail
still closes over the old runtime's log-tail process. The existing
guard in startGatewayLogTail (if (this.stopLogTail) return) would then
short-circuit the next restart and leave the new runtime without a
tail. Clear it inside the helper so the rebuild is self-consistent
regardless of caller order.
* fix(openclaw): check podmanPath executability and note singleton mutation
- validator: after existsSync, accessSync(X_OK) so a non-executable file
fails fast at save time with a clear 400 instead of a cryptic spawn
error later. Added a matching route test.
- applyPodmanOverrides: one-line comment flagging the intentional
module-level PodmanRuntime singleton mutation so future readers know
this is by design, not an accident.
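A sketch of the validator after this change; the name mirrors
getPodmanOverrideValidationError from the earlier refactor, the messages
are assumptions:
```ts
import { accessSync, constants, existsSync } from "node:fs";
import path from "node:path";

function getPodmanOverrideValidationError(podmanPath: string): string | null {
  if (!path.isAbsolute(podmanPath)) return "podmanPath must be an absolute path";
  if (!existsSync(podmanPath)) return "podmanPath does not exist";
  try {
    accessSync(podmanPath, constants.X_OK);
  } catch {
    // Fail fast at save time with a clear 400 instead of a cryptic spawn error later.
    return "podmanPath is not executable";
  }
  return null;
}
```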
* fix: run full browseros-agent test suite
* fix: stabilize server test reporting in CI
* fix: address PR review feedback
* refactor: extract server core test runner
* refactor: group server tests by filesystem
* fix: align CI suites with server test groups
* fix: provision server env for all CI suites
* fix: stabilize ci checks
* fix: report real test counts in ci
* feat(openclaw): add CLI client
* fix(openclaw): swap service to cli client
* fix(openclaw): restore mixed json parsing
* fix(openclaw): validate agent list payloads
* fix(openclaw): simplify cli client boundary
* fix(openclaw): simplify cli client boundary
* fix(openclaw): prefer outer config json payloads
* fix(openclaw): ignore trailing config log payloads
* refactor(openclaw): bootstrap config through cli
* fix(openclaw): narrow bootstrap ownership
* fix(openclaw): avoid noop key restarts
* fix(openclaw): enforce supported provider sync
* refactor(openclaw): remove agent role contract
* fix(openclaw): migrate legacy state and apply model updates
* fix(openclaw): migrate legacy agent state
* fix(openclaw): harden state updates
* refactor: stabilize local OpenClaw bootstrap and chat auth
* fix(openclaw): propagate container env and drop legacy paths
Compose now loads provider creds from .openclaw/.env and passes the
gateway token through, so in-container CLI commands (tui, doctor,
config) authenticate correctly and the gateway process sees
OPENROUTER_API_KEY. Service ensures the state env file exists and
rewrites the compose env with the token before composeUp in setup,
start, and tryAutoStart. Podman machine gets larger defaults and the
container enables NODE_COMPILE_CACHE + OPENCLAW_NO_RESPAWN. Legacy
state migration, the unused WebSocket gateway-client, memorySearch,
and thinking defaults are removed.
Introduces release.macos.arm64.yaml for single-architecture arm64
macOS release builds. Mirrors the windows/linux single-arch pattern
(configure -> compile -> sign_macos -> package_macos -> upload),
skipping the universal_build module to avoid the x64 cross-compile
and lipo merge. Reuses the sparkle_setup step and the same
notarization env vars as the universal macOS config.
* feat(ota): bundle full server resources tree (server + third_party bins)
The OTA Sparkle payload now ships the complete resources/ tree the agent
build produced, not just browseros_server. Every third-party binary (bun,
ripgrep, podman, gvproxy, vfkit, krunkit, podman-mac-helper, win-sshproxy)
flows to OTA-updated installs so podman integration works for users on the
OTA channel, matching fresh Chromium-build installs.
Extract the per-binary sign table into build/common/server_binaries.py so
the Chromium-build sign path (modules/sign/) and OTA sign path (modules/ota/)
share a single source of truth. Adding a new third-party dep is now a
one-file edit that both paths pick up automatically; unknown executables
under resources/bin/ are a hard error at release time.
* fix(ota): address review comments on bundle signing flow
- Avoid double-zipping during notarization: add notarize_macos_zip for
pre-built Sparkle bundles so notarytool submits the zip directly
instead of re-wrapping it through ditto --keepParent (Apple's service
does not descend into nested archives). Keep notarize_macos_binary for
single-binary callers. Share credential setup + submit logic via
internal helpers.
- Fail fast on unknown executables in sign_server_bundle_macos: collect
the unknown-files list before any codesign call so a missing shared-
table entry aborts in seconds, not after a full signing round.
- Drop dead get_entitlements_path helper (no callers remain after the
bundle refactor).
* fix(ota): address PR review comments (greptile + claude)
- sign_server_bundle_macos filters to executables only (p.is_file() +
not p.is_symlink() + os.access X_OK) before applying the unknown-file
guard. Files that are not executable (configs, dylibs, etc.) under
resources/bin/ no longer cause misleading 'unknown executable' hard
failures.
- sign_server_bundle_windows now hard-errors on a missing expected
binary instead of silently skipping it. Symmetric with the macOS
guard — an incomplete bundle must not publish.
- ServerOTAModule.execute() uses tempfile.TemporaryDirectory context
managers for both the download and staging roots so they are cleaned
up on every path, including failures.
- Per-platform sign/notarize/Sparkle-sign failures now raise RuntimeError
instead of silently skipping the platform — a release pipeline can no
longer omit a target while reporting success.
- Move import os and import shutil to the top of ota/sign_binary.py.
- Drop unused log_error import from ota/server.py.
* chore: bump server
* fix(ci): add PR comment with test summary and block on failure
Add a `comment` job to the test workflow that parses JUnit XML artifacts
and posts a sticky PR comment showing pass/fail counts per suite, with
failed test names listed in a collapsible section and a link to the run.
Guards against fork PRs (read-only token) and stale overlapping runs
(skips comment if PR head has moved past our SHA).
* fix(ci): use payload SHA for staleness check, handle missing artifacts
- Replace context.sha (merge commit SHA) with
context.payload.pull_request.head.sha so the staleness guard
compares the correct values and the comment actually gets posted
- Add continue-on-error to download-artifact so cancelled runs
gracefully fall through to the "no test results" message
* fix(ci): show warning icon for zero-test suites instead of failure
* fix: isolate ACL semantic tests from Bun teardown crash
* fix: time out ACL semantic fixture subprocess
* fix: run full root test suite and repair sdk browser context
* fix: address PR review comments for 0415-fix_all_tests_and_issues
* test: temporarily skip sdk suite
* test: clarify sdk suite disable message
Pre-kill BrowserOS processes whose --user-data-dir path contains the
browseros-test- prefix before each spawnBrowser, and in the test:cleanup
hook. This prevents a crashed prior test run from leaving a headless
BrowserOS attached to a stale port, without touching the developer's
regular BrowserOS.app instance (its user-data-dir is
~/Library/Application Support/BrowserOS, which does not match).
OpenRouter's public model slugs use dots in version numbers
(e.g. `anthropic/claude-haiku-4.5`), but openclaw's model registry only
recognises the dashed form (`claude-haiku-4-5`). Passing the dotted form
makes openclaw's registry lookup miss silently — the agent turn completes
with `stopReason=stop payloads=0` and the UI shows no reply. Rewrite dots
to dashes in the model portion for openrouter providers only so
copy-pasted OpenRouter slugs resolve correctly.
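A sketch of the dot-to-dash rewrite; the helper name is an assumption,
the behavior (openrouter only, model portion only) follows the
description above:
```ts
function normalizeOpenRouterModelId(provider: string, modelId: string): string {
  if (provider !== "openrouter") return modelId;
  // "anthropic/claude-haiku-4.5" -> "anthropic/claude-haiku-4-5": rewrite dots
  // only in the model portion, leaving the vendor prefix untouched.
  const slash = modelId.indexOf("/");
  if (slash === -1) return modelId.replaceAll(".", "-");
  return modelId.slice(0, slash + 1) + modelId.slice(slash + 1).replaceAll(".", "-");
}
```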
Also, in development mode:
- Inject `logging.level: debug` into generated openclaw.json so the
gateway emits debug-level entries to its file log.
- Patch an existing openclaw.json on start/restart so already-provisioned
users pick up the debug setting without a reset.
- Tail the gateway container's logs into the browseros server logger so
they appear in the same stream as the rest of dev output.
* refactor: remove redundant context-overflow middleware
The middleware caught provider overflow errors and re-tried with a
naive prompt truncation, but its `nonSystem.slice()` had no awareness
of tool_use/tool_result pairing — a cut between an assistant tool_use
and the matching tool_result produces an orphaned tool_use that
providers reject with a different error.
Compaction (`createCompactionPrepareStep`) already handles this safely:
`findSafeSplitPoint` walks past tool messages to preserve pair
integrity, and the pipeline (strip binary → prune → reduce outputs →
LLM summarize → sliding window) handles every overflow path before
the request leaves the agent.
Drops 426 lines: the middleware itself, its wiring in ai-sdk-agent,
and the matching test block + helpers in compaction.test.ts.
* docs: document BROWSEROS_AI_SDK_DEVTOOLS in .env.example
Surfaces the opt-in dev flag so contributors know it exists. Captures
every LLM call to .devtools/generations.json for post-hoc inspection.
* chore: add auctor configuration
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add project-level Claude Code skills for team
Adds 14 development workflow skills (brainstorming, planning, debugging,
TDD, code review, subagent-driven development, etc.) to .claude/skills/
so all team members get them automatically on pull.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The typecheck and compile scripts failed on fresh checkouts with
TS5083 because tsconfig.json extends .wxt/tsconfig.json, which is
gitignored and only generated by 'wxt prepare'. Run wxt prepare
before tsgo so the extended config and wxt.d.ts are always in place.
Expose the 7 Klavis Strata MCP tools as CLI subcommands under
`browseros-cli strata`, so CLI users (claude-code, gemini-cli) can
discover and execute actions on 40+ external services.
Commands: check, discover, actions, details, exec, search, auth.
Includes discovery flow guidance in help text, integration tests,
and an "Integrations:" group in the root help output.
Agents connecting over MCP URL/CLI (like claude-code) had no way to know
which Klavis connectors were available or authenticated, causing them to
fall back to browser automation. This adds a connector_mcp_servers tool
that checks connection status and returns an auth URL when needed.
* fix(openclaw): compose file path after service dir move, loopback auth fallback
- Fix COMPOSE_RESOURCE path: services moved to api/services/openclaw/
so the relative path needs one more parent directory traversal
- Fix requireTrustedAppOrigin middleware: Chrome extensions cannot set
the Origin header (forbidden header name). When Origin is absent,
fall back to checking the Host header is a loopback address. The
server only binds to loopback so only local processes can reach it.
Requests with an explicit non-trusted Origin are still rejected.
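A sketch of the fallback in requireTrustedAppOrigin; the trusted-origin
set and exact responses are assumptions, the Origin-absent/loopback-Host
logic is from the commit:
```ts
import type { Context, Next } from "hono";

const TRUSTED_ORIGINS = new Set<string>([/* trusted app origins, hypothetical */]);

function isLoopbackHost(host: string | undefined): boolean {
  if (!host) return false;
  const hostname = host.replace(/:\d+$/, "");
  return hostname === "localhost" || hostname === "127.0.0.1" || hostname === "[::1]";
}

async function requireTrustedAppOrigin(c: Context, next: Next) {
  const origin = c.req.header("origin");
  if (origin) {
    // An explicit, non-trusted Origin is still rejected.
    if (!TRUSTED_ORIGINS.has(origin)) return c.text("forbidden", 403);
    return next();
  }
  // Origin absent (extensions cannot set it): accept only if the Host header
  // is a loopback address, since the server binds to loopback anyway.
  if (!isLoopbackHost(c.req.header("host"))) return c.text("forbidden", 403);
  return next();
}
```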
* fix: request header check
* chore: remove setup openclaw button
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
Move openclaw/ and terminal/ service modules from src/services/ into
src/api/services/ so all server-side services live in one directory
alongside chat-service, klavis, mcp, and sdk. Update relative imports
in moved files and all callers.
- Add tool approval system with per-category approval configuration
- Build unified Governance dashboard (renamed from Admin) with pending
approvals view and execution audit log
- Move execution history tracking into the app shell
- Extract buildChatRequestBody helper and add newtab system prompt
- Add approval config change detection for mid-conversation rebuilds
* feat: add ACL rules for per-site element-level agent restrictions
Implement Access Control List (ACL) rules that let users block the agent
from interacting with specific elements on specific websites. Rules are
defined in a new Settings > ACL Rules page and enforced server-side in
executeTool() before any input tool handler runs.
- Shared ACL types and site pattern matching (packages/shared)
- Extension storage, settings UI with rule cards and add dialog
- Server-side guard in executeTool() checking tool+page+element
- Browser class extensions for element property resolution via CDP
- Visual overlay injection (red "BLOCKED" mask) via Runtime.evaluate
- Rules transported in chat request body alongside declinedApps
* fix: address review comments for ACL rules
- Add selector-to-property matching in matchesElement (tag, id, class)
- Remove scroll from guarded tools set (read-like action)
* fix: ACL site pattern matching fails on multi-segment URL paths
The glob-to-regex conversion used [^/]* for wildcard (*) which only
matches a single path segment. "*.amazon.com/*" failed to match
"www.amazon.com/cart/smart-wagon" because the trailing * couldn't
cross the slash between "cart" and "smart-wagon".
Fix: Split URL matching into hostname vs path parts. Path wildcards
now use .* to match across slashes. Also add simple domain matching
so users can just type "amazon.com" instead of "*.amazon.com/*".
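A sketch of the split matcher; helper names are assumptions, the
segment-scoped hostname wildcard, cross-slash path wildcard, and
bare-domain shortcut are from the fix:
```ts
function globToRegex(glob: string, wildcard: string): RegExp {
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, wildcard)}$`);
}

function matchesSitePattern(pattern: string, url: string): boolean {
  const { hostname, pathname } = new URL(url);

  // Simple domain: "amazon.com" matches the domain and any subdomain, any path.
  if (!pattern.includes("/") && !pattern.includes("*")) {
    return hostname === pattern || hostname.endsWith(`.${pattern}`);
  }

  const slash = pattern.indexOf("/");
  const hostGlob = slash === -1 ? pattern : pattern.slice(0, slash);
  const pathGlob = slash === -1 ? "*" : pattern.slice(slash);

  // Hostname wildcards stay segment-scoped ([^/]*); path wildcards use .* so
  // "*.amazon.com/*" matches "/cart/smart-wagon" across slashes.
  return (
    globToRegex(hostGlob, "[^/]*").test(hostname) &&
    globToRegex(pathGlob, ".*").test(pathname)
  );
}
```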
* fix: wire up ACL overlay injection after take_snapshot
applyAclOverlays was defined but never called. Now triggers after
take_snapshot completes on pages matching ACL rules, so the agent
sees red "BLOCKED" overlays on restricted elements.
* refactor: rework 0326-acl_rules based on feedback
* feat(openclaw): add foundation — paths constant, browseros-dir helper, static compose file
Add OPENCLAW_DIR_NAME to shared paths constant, getOpenClawDir() to
browseros-dir.ts, and a static docker-compose.yml resource file that
uses native .env variable substitution instead of YAML template strings.
* feat(openclaw): add PodmanRuntime container engine abstraction
Manages Podman CLI interactions: machine lifecycle (init/start/stop),
availability checks, command execution with streaming output, and
running container enumeration. Linux skips machine ops since Podman
runs natively.
* feat(openclaw): add config builder and container runtime
openclaw-config.ts: pure functions to build openclaw.json and .env files
from BrowserOS settings. Maps provider keys, sets permissive defaults
(full exec, cron, web search, MCP bridge to BrowserOS).
container-runtime.ts: compose-level abstraction over PodmanRuntime for
the browseros-openclaw project. Handles up/down/restart/pull, health
checks, .env file writes, and safe machine shutdown.
* feat(openclaw): add OpenClawService orchestrator
Main service managing the single OpenClaw container. Handles full
lifecycle (setup/start/stop/restart/shutdown), agent CRUD with config
rewrites and gateway restarts, chat proxy to /v1/chat/completions,
provider key updates, auto-start on BrowserOS boot, and status reporting.
* feat(openclaw): add API routes and server wiring
Add /api/claw/* routes for container lifecycle (setup/start/stop/restart),
agent CRUD (list/create/delete), chat proxy with SSE streaming, provider
key management, and log retrieval. Register routes in server.ts, add
OpenClaw auto-start on BrowserOS boot and graceful shutdown in main.ts.
* fix(openclaw): resolve type errors in service and podman runtime
Fix TIMEOUTS.TOOL_EXECUTION → TIMEOUTS.TOOL_CALL to match shared
constants. Fix ReadableStream undefined/null type mismatch in
PodmanRuntime.runCommand stream draining.
* feat(openclaw): add agents page UI with chat, create, and lifecycle controls
Add /agents route with AgentsPage showing OpenClaw status, agent list,
create dialog, and per-agent chat. Includes useOpenClaw hook for
server communication, AgentChat component with SSE streaming, and
sidebar navigation entry.
* feat(openclaw): add provider selector to setup flow
Add LLM provider selector using useLlmProviders hook. Filters out
OAuth-only providers, pre-selects the user's default, and passes
providerType/apiKey/modelId to the setup endpoint so OpenClaw gets
a working LLM configuration on first setup.
* feat(openclaw): per-agent provider selection
Each agent can now have its own LLM provider. The Create Agent dialog
includes a provider selector that passes providerType/apiKey/modelId
to the backend. The service writes per-agent model config to
openclaw.json and merges the API key into the container's .env file.
* fix(openclaw): write gateway auth token to openclaw.json
The gateway was returning 401 because auth.mode was set to "token"
without providing the actual token value. Now the token is written
to gateway.auth.token in openclaw.json so the gateway and our chat
proxy agree on the same token.
* feat(openclaw): add GatewayClient WebSocket RPC client
Persistent WS client for the OpenClaw Gateway protocol. Handles the
challenge → connect → hello-ok handshake (as openclaw-control-ui with
operator.admin scope), JSON-RPC with pending map + timeouts, and
auto-reconnect. Exposes typed methods for agents.list, agents.create,
agents.delete, and health.
* refactor(openclaw): simplify config to bootstrap-only, add /readyz health
Config no longer contains agents.list — agent CRUD is handled via WS RPC.
buildOpenClawConfig → buildBootstrapConfig, removed makeAgentEntry and
AgentEntry (agents managed by OpenClaw runtime). Added isReady() and
waitForReady() using /readyz for gateway readiness checks.
* refactor(openclaw): agent CRUD via WS RPC, per-agent chat targeting
Replace JSON mutation + restart with GatewayClient WS RPC calls for
agents.create, agents.delete, agents.list. Chat proxy now uses
model: "openclaw/<agentId>" for per-agent targeting. Setup writes
bootstrap config once then creates "main" agent via WS after gateway
starts. Container restarts only when a new provider env var is added.
* fix(openclaw): use agentId field in setup response mapping
Fix type error: GatewayAgentEntry uses agentId not id.
* fix(openclaw): log service progress through server logger
* feat(openclaw): WS streaming, device auth, MCP port fix (#687)
* feat(openclaw): WS streaming, device auth, MCP port fix
- Fix GatewayClient WS handshake: add Ed25519 device identity signing,
Origin header, mode: cli (mode: ui requires device identity always)
- Add auto device pairing flow: generate client identity, attempt WS
connect (triggers pending), approve via openclaw CLI, reconnect
- Replace HTTP /v1/chat/completions proxy with WS-based streaming that
surfaces tool calls, thinking blocks, and text deltas
- Add chatStream() to GatewayClient returning ReadableStream of typed
OpenClawStreamEvent (text-delta, thinking, tool-start/end, lifecycle)
- Update chat route to stream WS events as SSE to the extension
- Pass actual server port to OpenClaw config (fixes MCP bridge in dev)
- Rewrite AgentChat.tsx with turn-based model using Message/MessageContent
components matching sidepanel pattern, with tool batching logic that
groups consecutive tools and breaks on text/thinking (same as sidepanel)
- Add execInContainer() to ContainerRuntime for CLI commands
- Fix gateway response field mapping (id→agentId, agents.list/create)
- Skip creating main agent if gateway auto-creates it
* fix(openclaw): retry WS connect on signature expired (Podman clock skew)
Podman VM clock drifts when Mac sleeps, causing Ed25519 signature
validation to fail with "device signature expired" on auto-start.
Add connectGatewayWithRetry() that restarts the container (resyncs
clock) and re-approves the device if needed.
* fix(openclaw): address PR review — stream cleanup, error handling
- Fix silent catch in setup(): only swallow "pairing required" and
"signature expired" errors, re-throw everything else
- Guard JSON.parse in approvePendingDevice(): check exit code and
wrap parse in try/catch with descriptive error messages
- Add try/finally in chat SSE route: reader.cancel() on disconnect
- Add cancel callback to chatStream ReadableStream: restores
ws.onmessage when stream is cancelled (prevents handler leak)
---------
Co-authored-by: shivammittal274 <56757235+shivammittal274@users.noreply.github.com>
* fix: enable agent interaction with elements inside iframes
Fetch accessibility trees from all frames via Page.getFrameTree() +
per-frame Accessibility.getFullAXTree(frameId), so iframe elements
appear in snapshots with valid backendNodeIds. Pages without iframes
take the original single-call path with zero overhead.
Update snapshot tree builders to walk multiple RootWebArea roots from
merged multi-frame trees. Extract same-origin iframe content in the
markdown walker; show [iframe: url] placeholder for cross-origin.
* fix: namespace AX nodeIds by frameId to prevent cross-frame collisions
CDP AXNodeId values are frame-scoped — each frame's accessibility tree
starts its own counter from 1. Prefix nodeId and childIds with frameId
before merging so the nodeMap in snapshot builders never overwrites
nodes from a different frame.
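A minimal sketch of the frameId prefixing, with the CDP node type
simplified:
```ts
interface AXNode {
  nodeId: string;
  childIds?: string[];
  [key: string]: unknown;
}

// AXNodeId values are frame-scoped (each frame counts from 1), so prefix every
// id with its frameId before merging per-frame trees into one nodeMap.
function namespaceByFrame(frameId: string, nodes: AXNode[]): AXNode[] {
  return nodes.map((node) => ({
    ...node,
    nodeId: `${frameId}:${node.nodeId}`,
    childIds: node.childIds?.map((id) => `${frameId}:${id}`),
  }));
}
```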
* docs: add uBlock Origin install info to getting started and ad-blocking pages
Chrome dropped support for the full uBlock Origin extension — highlight
that BrowserOS brings it back and make it easy to install from both the
getting started guide and the dedicated ad-blocking page.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: revert Kimi partnership UI, restore daily limit survey
Remove Kimi/Moonshot AI partnership branding from the rate limit
banner, provider card, provider templates, and LLM hub. Restore
the original survey CTA on daily limit errors. Moonshot AI remains
as a regular provider template without the "Recommended" badge.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address Greptile review comments
- Guard survey CTA with !isCreditsExhausted to avoid showing it for
credits-exhausted users who already see "View Usage & Billing"
- Remove dead kimi-launch feature flag files (kimi-launch.ts,
useKimiLaunch.ts)
- Remove unused KIMI_RATE_LIMIT analytics events
- Remove VITE_PUBLIC_KIMI_LAUNCH from env schema and .env.example
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The merged PR (#661) injected custom entries into filteredModels, but
cmdk auto-scrolls to its first selected CommandItem, pushing the custom
entry out of view. Fix by using forceMount on a separate CommandGroup
and resetting scroll to top on every keystroke via requestAnimationFrame.
* feat: show custom model ID as first option in model selector
When typing in the model dropdown, the user's exact input now appears as the
first selectable row, followed by fuzzy search suggestions. This makes entering
custom model IDs intuitive — previously the option was hidden behind a
zero-results-only Enter shortcut that fuzzy search almost always prevented.
* fix: correct is_custom_model flag and prevent duplicate analytics events
- Use modelInfoList check instead of hardcoding is_custom_model: true in
the Enter key handler
- Add stopPropagation to prevent cmdk's root keydown handler from also
firing onSelect, which caused duplicate MODEL_SELECTED_EVENT emissions
* fix: install linux sysroot in configure, not via gclient hook
`gn gen` was failing on the arm64 leg with `Missing sysroot
(//build/linux/debian_bullseye_arm64-sysroot)`. The previous design
relied on `git_setup` writing `target_cpus` to `.gclient` so that
`gclient sync`'s DEPS hook would download the cross-arch sysroot. That
chain breaks for any chromium_src that was synced before cross-arch
support landed (the hook is gated on .gclient state at sync time) and
for partial pipeline runs that skip git_setup entirely. Nothing in
configure declared or verified its sysroot precondition.
Make configure self-healing: on Linux, invoke
`build/linux/sysroot_scripts/install-sysroot.py --arch=<target>`
directly before `gn gen`. install-sysroot.py is idempotent (stamp file
+ SHA check), fast when already installed, and decoupled from .gclient
— it's exactly what the failing assertion's error message recommends.
The script accepts our arch names directly: `x64` translates to `amd64`
internally via ARCH_TRANSLATIONS, and `arm64` is a valid pass-through.
Also temporarily pin release.linux.yaml to x64 only while we validate
the sysroot bootstrap end-to-end. Flip back to `[x64, arm64]` once
arm64 is green.
* chore: pin release.linux.yaml to arm64-only for sysroot bootstrap test
x64 already builds cleanly — the failing leg is arm64 cross-compile from
an x64 host. Pin the config to arm64 to exercise the new
install-sysroot.py path in configure without burning time on x64.
Flip back to [x64, arm64] once arm64 is green.
* feat(server): cache klavis createStrata to unblock /chat hot path
Conversation creation in /chat was blocking on a Worker-proxied
klavisClient.createStrata round-trip every time the user had any
managed Klavis app connected. The 5s KLAVIS_TIMEOUT_MS in the
ai-worker proxy existed specifically to bound this latency, but
the same cap also caused user-visible 504s on /klavis/servers/remove
since Strata DELETE operations routinely take >5s. Without caching
we couldn't raise the timeout without regressing chat creation.
This adds an in-process cache for Strata createStrata responses,
keyed by (browserosId, hashed sorted-server-set) and gated by a 1h
TTL. The cache stores only immutable JSON metadata (strataServerUrl,
strataId, addedServers); per-session MCP clients continue to be
opened and disposed by AiSdkAgent exactly as before, which keeps
the cache concurrency-safe by construction.
Cache invalidation has two layers: (a) the cache key embeds the
server set, so adding/removing apps naturally produces a different
key; (b) POST /klavis/servers/add and DELETE /klavis/servers/remove
explicitly call invalidate(browserosId) after their underlying
Klavis API call succeeds, as defense-in-depth.
Other changes:
- Consolidates klavis-related services into a new
apps/server/src/api/services/klavis/ directory; moves
register-klavis-mcp.ts -> strata-proxy.ts and adds strata-cache.ts
there. lib/clients/klavis/ stays unchanged.
- Refactors KlavisClient.removeServer into a low-level
deleteServersFromStrata(strataId, servers) primitive. The
cache-lookup + delete + invalidate orchestration moves up into
routes/klavis.ts where it belongs, eliminating the lib->api
layering inversion the original removeServer would have introduced.
- Uses Bun.hash (xxhash64) for fixed-width 16-hex-char keys, with
serverKey verified on read to make collision risk strictly zero.
- Dedupes concurrent fetches via in-flight Promise sharing, with
identity-checks before delete to avoid races between invalidate()
and a racing replacement insert.
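A condensed sketch of the cache; names and shapes are assumptions, while
the key derivation, TTL, in-flight dedup, and identity-checked delete
follow the description above:
```ts
interface CacheEntry {
  serverKey: string; // sorted server set, re-checked on read to rule out collisions
  strataServerUrl: string;
  strataId: string;
  addedServers: string[];
  expiresAt: number;
}

const TTL_MS = 60 * 60 * 1000;
const entries = new Map<string, CacheEntry>();
const inflight = new Map<string, Promise<CacheEntry>>();

function cacheKey(browserosId: string, servers: string[]) {
  const serverKey = [...servers].sort().join(",");
  // Bun.hash is xxhash64; hex-encode for a fixed-width 16-char key component.
  const hashed = Bun.hash(serverKey).toString(16).padStart(16, "0");
  return { key: `${browserosId}:${hashed}`, serverKey };
}

async function getOrFetch(
  browserosId: string,
  servers: string[],
  fetcher: () => Promise<Omit<CacheEntry, "serverKey" | "expiresAt">>,
): Promise<CacheEntry> {
  const { key, serverKey } = cacheKey(browserosId, servers);
  const hit = entries.get(key);
  if (hit && hit.serverKey === serverKey && hit.expiresAt > Date.now()) return hit;

  // Dedupe concurrent misses for the same key behind one in-flight promise.
  const pending = inflight.get(key);
  if (pending) return pending;

  const promise = fetcher()
    .then((data) => {
      const entry = { ...data, serverKey, expiresAt: Date.now() + TTL_MS };
      entries.set(key, entry);
      return entry;
    })
    .finally(() => {
      // Identity check: only clear our own in-flight record, never a racing replacement.
      if (inflight.get(key) === promise) inflight.delete(key);
    });
  inflight.set(key, promise);
  return promise;
}

function invalidate(browserosId: string): void {
  for (const key of entries.keys()) {
    if (key.startsWith(`${browserosId}:`)) entries.delete(key);
  }
}
```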
Follow-up (separate PR): bump KLAVIS_TIMEOUT_MS to 30000 in
ai-worker/wrangler.toml so /klavis/servers/remove stops 504-ing.
* fix: address greptile review comments for klavis strata cache
- Drop dead `invalidated` field on InflightEntry. It was added to
support a "discard post-resolution if invalidated" check that I
later replaced with identity-checked deletes during self-review,
but I forgot to remove the field and the misleading comment
referencing it. Simplify Map<string, InflightEntry> to plain
Map<string, Promise<CacheEntry>>.
- Lower cache miss log from info to debug. Misses fire on every new
conversation; matching the existing debug-level for hits.
- Stop routing the /klavis/servers/remove handler through
klavisStrataCache.getOrFetch. The chat hot path keys its cache by
the user's full enabled-server set (e.g. hash('Gmail,Linear')),
so a single-server lookup here (hash('Gmail')) is guaranteed to
miss, write a spurious entry, and then have it immediately
cleared by invalidate() on the next line. Call createStrata
directly to recover the strataId, mirroring the original
removeServer flow.
`release.linux.yaml` now declares `architecture: [x64, arm64]` and the
runner loops the entire pipeline once per architecture. depot_tools
fetches both Linux sysroots automatically — `git_setup` idempotently
ensures `target_cpus = ['x64', 'arm64']` is in `.gclient` before
`gclient sync`, so cross-compiling arm64 from an x64 host just works.
The resolver returns `List[Context]` (single-element for the common
single-arch case), and `build/cli/build.py` loops `execute_pipeline` over
the per-arch contexts. Modules stay 100% arch-agnostic — no new
orchestration module, no new YAML schema beyond the list form.
Also fix a cross-compile bug in `build/modules/package/linux.py`: the
appimagetool binary must match the BUILD machine's arch (it executes
locally), not the target arch. Split into a host-keyed
`LINUX_HOST_APPIMAGETOOL` lookup vs the existing target-keyed
`LINUX_ARCHITECTURE_CONFIG`. Target arch is still passed to appimagetool
via the `ARCH` env var.
- build/common/resolver.py: scalar OR list `architecture` -> List[Context]
- build/cli/build.py: loop pipeline per arch, log multi-arch headers
- build/config/release.linux.yaml: `architecture: [x64, arm64]`
- build/modules/setup/git.py: idempotent `target_cpus` edit on Linux
- build/modules/package/linux.py: host vs target appimagetool split
- build/modules/package/linux_test.py: cover the host/target split
The --compile-only and --ci flags served overlapping purposes for CI
builds. Remove --compile-only entirely since --ci already handles the
CI use case (skip R2, skip prod env validation, local zip packaging)
and --no-upload covers the upload-skipping use case for full builds.
The server release CI workflow fails on ubuntu-latest because
patch-windows-exe.ts requires Wine to run rcedit. Thread the existing
--ci flag through compileServerBinaries so Windows PE metadata patching
is skipped in CI mode with a warning log.
* feat: add server release workflow
* fix: address PR review comments for 0331-add_server_release_workflow
* refactor: rework 0331-add_server_release_workflow based on feedback
* refactor: rework 0331-add_server_release_workflow based on feedback
* feat(cli): skip self-update prompts for package manager installs
Checks BROWSEROS_INSTALL_METHOD env var (npm, brew) and skips automatic
update checks. Users should use their package manager's update mechanism.
FormatNotice now shows the appropriate upgrade command based on install method.
* feat(cli): add npm bin wrapper for browseros-cli
* feat(cli): add npm postinstall script to download platform binary
Downloads the correct platform binary from GitHub releases during npm
install, verifies SHA256 checksums, and extracts to .binary directory.
* feat(cli): add npm package metadata and README
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: move npm package files to correct monorepo path
The bin wrapper and postinstall were created at apps/cli/npm/ instead of
packages/browseros-agent/apps/cli/npm/. Moves them to the correct location.
* style: use node: protocol for builtin module imports
* feat(cli): add Makefile npm targets and release workflow npm publish step
Adds npm-version and npm-publish Makefile targets for version sync.
Adds Node.js setup and npm publish step to the release workflow.
Adds npm/npx install instructions to release notes template.
* fix(cli): fail on missing checksum entry and limit redirect depth
- Abort if checksums.txt downloaded but archive entry is missing
- Warn if checksums.txt itself failed to download
- Cap redirect depth at 5 to prevent stack overflow on circular redirects
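A sketch of the redirect cap, assuming a plain node:https helper in the
postinstall; names are illustrative:
```ts
import type { IncomingMessage } from "node:http";
import { get } from "node:https";

const MAX_REDIRECTS = 5;

function download(url: string, redirects = 0): Promise<IncomingMessage> {
  return new Promise((resolve, reject) => {
    get(url, (res) => {
      const { statusCode = 0, headers } = res;
      if (statusCode >= 300 && statusCode < 400 && headers.location) {
        res.resume(); // discard the redirect body
        if (redirects >= MAX_REDIRECTS) {
          reject(new Error(`too many redirects fetching ${url}`));
          return;
        }
        resolve(download(new URL(headers.location, url).toString(), redirects + 1));
        return;
      }
      if (statusCode !== 200) {
        reject(new Error(`unexpected status ${statusCode} fetching ${url}`));
        return;
      }
      resolve(res);
    }).on("error", reject);
  });
}
```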
* fix(cli): match install.sh checksum behavior — warn instead of abort
The existing shell installer (install.sh) warns and continues when the
checksum entry is missing from checksums.txt. Match that behavior in the
npm postinstall to avoid unnecessary install failures. Both files come
from the same GitHub release, so the checksum is a corruption check,
not a strong security boundary.
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The model picker in NewProviderDialog rendered inline, which caused the
dialog to resize, and it lacked keyboard navigation. Replace it with a
Popover + Command (shadcn Combobox) pattern and add fuse.js for fuzzy search.
- Replace custom ModelPickerList with Popover + Command dropdown
- Add fuse.js for fuzzy model search (replaces string.includes)
- Add MODEL_SELECTED_EVENT and AI_PROVIDER_UPDATED_EVENT analytics
- Enrich PROVIDER_SELECTED_EVENT with model_id in chat sessions
* feat: add browseros-cli self-updater
* fix: address review comments for 0327-cli_self_updater
* fix: address PR review comments for 0327-cli_self_updater
* fix: replace goreleaser with Makefile-based release build
Remove .goreleaser.yml (required Pro license for monorepo field) and
consolidate cross-compilation into `make release`. CI now uses the same
Makefile target, fixing a bug where POSTHOG_API_KEY was missing from
release ldflags.
* fix: address critical self-updater bugs from code review
- Fix SHA256 checksum mismatch: verify archive checksum before extraction
instead of verifying extracted binary against archive hash (was always
failing). Add VerifyChecksum() and integration test.
- Fix JSON field name mismatch: TypeScript was emitting camelCase
(publishedAt, archiveFormat) but Go expected snake_case
(published_at, archive_format). Manifest parsing was silently broken.
- Add decompression size limit (256 MB) to prevent zip/gzip bombs.
- Don't update LastCheckedAt on transient errors so retry happens on
next CLI invocation instead of waiting 24h.
* feat: add PostHog usage analytics to CLI
Add anonymous command-level analytics to browseros-cli using the PostHog
Go SDK. Tracks which commands are executed, their success/failure status,
and duration — no PII or person profiles.
- New analytics package with Init/Track/Close singleton
- Distinct ID resolves from server's browseros_id (server.json), falls
back to CLI-generated UUID (~/.config/browseros-cli/install_id)
- API key injected at build time via ldflags (dev builds = silent no-op)
- Server now writes browseros_id into server.json for cross-surface
identity correlation
* fix: address PR review feedback for #603
- Return "unknown" for unrecognized args in commandName to avoid
sending arbitrary user input to PostHog
- Revert goreleaser to {{ .Env.POSTHOG_API_KEY }} (intentional hard
fail — release builds must have the key set)
- go mod tidy to fix posthog-go direct/indirect marker
- Add POSTHOG_API_KEY to .env.production.example
* feat: upload CLI binaries to CDN during release and gate workflow to core team
- Extend scripts/build/cli/upload.ts with uploadCliRelease() that pushes
archives + checksums to R2 under versioned (cli/v{VERSION}/) and latest
(cli/latest/) paths, plus a version.txt for lightweight latest resolution
- Update scripts/build/cli.ts entry point with --release/--version/--binaries-dir
flags (existing no-args behavior preserved for upload:cli-installers)
- Rewrite install.sh and install.ps1 to fetch from cdn.browseros.com instead of
GitHub releases API — eliminates rate limits and API dependency
- Add environment: release-core to release-cli.yml for core-team gating via
GitHub environment protection rules
- Add Bun setup + CDN upload step to the workflow between build and GitHub release
* fix: address review feedback for PR #602
- Make loadProdEnv return empty map when .env.production is absent so
pickEnv falls through to process.env in CI (Greptile P1)
- Add semver format validation for version string in install.sh and
install.ps1 to guard against malformed CDN responses
- Pass inputs.version via env var instead of inline ${{ }} interpolation
to prevent command injection in workflow shell
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): fix hdiutil mount detection, update README with install/launch/init flow
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): remove -quiet from hdiutil so mount point is detected
* fix: add refresh indicator to chat history when fetching latest conversations
Show a non-blocking "Fetching latest conversations" indicator at the top
of the history list while the cached data is being refreshed. Users can
still interact with the cached conversation list during the refresh.
* perf: reduce chat history query payload — fetch last 2 messages instead of 5
The conversation list only displays the last user message as a preview.
Fetching 5 messages per conversation was wasteful — each message contains
the full UIMessage object (tool calls, reasoning, etc.) multiplied by
50 conversations per page. Reduced to last 2 which is sufficient to
find the last user message in a user→assistant exchange.
* perf: use first+DESC instead of last+ASC to push LIMIT down to SQL
PostGraphile's `last: N` doesn't map to SQL LIMIT — it uses a padded
LIMIT 10 and slices in application code. Changing to `first: 2` with
ORDER_INDEX_DESC generates a true SQL LIMIT 2, reducing rows scanned
from 500 to 100 per page (50 conversations × 2 messages instead of 10 each).
No UX impact — extractLastUserMessage() filters by role regardless
of message order.
* chore: update react query packages
* feat: replace localforage with idb-keyval
* fix: remove filesystem tools when no workspace is selected
- Make workingDir optional on ResolvedAgentConfig
- Remove resolveSessionDir() fallback that always created a session dir,
masking the no-workspace state and keeping filesystem tools available
- Gate buildFilesystemToolSet() on workingDir being defined
- Add workspace change detection mid-conversation — rebuilds the agent
session when workspace is added, removed, or switched (same pattern
as existing MCP server change detection)
- download_file falls back to tmpdir() when no workspace is set
- Memory/soul tools are unaffected — they use ~/BrowserOS/ paths
* fix: sanitize message history when session rebuilds with different tools
When a session is rebuilt due to workspace or MCP changes, the carried-over
message history may contain tool parts for tools that no longer exist in
the new session. The AI SDK validates messages against the current toolset
and rejects parts with no matching schema.
- Add toolNames getter to AiSdkAgent exposing registered tool names
- Add sanitizeMessagesForToolset() to strip tool parts referencing
removed tools from carried-over messages
- Apply sanitization in both MCP and workspace session rebuilds
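A sketch of sanitizeMessagesForToolset with the UIMessage part shape
simplified; the filtering rules follow this commit and the test suite
added later:
```ts
interface MessagePart {
  type: string; // e.g. "text", "reasoning", "tool-read_file"
  [key: string]: unknown;
}
interface UIMessage {
  role: string;
  parts: MessagePart[];
}

function sanitizeMessagesForToolset(
  messages: UIMessage[],
  toolNames: Set<string>,
): UIMessage[] {
  return messages
    .map((message) => {
      const parts = message.parts.filter((part) => {
        if (!part.type.startsWith("tool-")) return true; // keep non-tool parts
        const toolName = part.type.slice("tool-".length);
        return toolNames.has(toolName); // drop parts for tools that no longer exist
      });
      // Keep the original message object when nothing was filtered out.
      return parts.length === message.parts.length ? message : { ...message, parts };
    })
    .filter((message) => message.parts.length > 0); // drop messages emptied by stripping
}
```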
* fix: prepend tool-change context to user message on session rebuild
When workspace or MCP integrations change mid-conversation, prepend a
[Context: ...] block to the user's message explaining what changed.
This prevents the LLM from hallucinating tool usage based on patterns
in the carried-over conversation history.
Context messages vary by change type:
- Workspace removed: lists unavailable filesystem tools, suggests
selecting a working directory
- Workspace added: confirms filesystem tools are available with path
- Workspace switched: notes the new working directory
- MCP changed: notes that some integration tools may have changed
Only fires on the first message after a rebuild. Invisible in the UI.
* fix: make MCP change context specific about which apps were added/removed
Diff the old and new MCP server keys to produce specific context like:
- "The following app integrations were disconnected: Gmail, Slack."
- "The following app integrations were connected: Linear."
instead of a generic "some tools may no longer be available" message.
* refactor: extract shared rebuildSession helper in ChatService
Eliminates the duplicated 20-line dispose→create→sanitize→store flow
that existed separately in both the MCP and workspace change-detection
blocks.
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* test: add sanitizeMessagesForToolset test suite
Tests for the message sanitization that runs when a session rebuilds
with a different toolset (workspace or MCP change mid-conversation):
- Preserves messages with no tool parts
- Preserves tool parts when tool is in the toolset
- Strips tool parts when tool is NOT in the toolset
- Strips multiple removed tool parts from same message
- Keeps browser tools while removing filesystem tools
- Removes messages that become empty after stripping
- Preserves non-tool parts (reasoning, step-start, file)
- Returns same references when no filtering needed
- Handles empty message array and empty toolset
* style: fix biome formatting in chat-service.ts
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
* feat: isolate new-tab agent navigation from origin tab
Add origin-aware navigation isolation so the agent never navigates
away from the new-tab chat UI. This is a two-layer defense:
1. Prompt adaptation: When origin is 'newtab', the system prompt's
execution and tool-selection sections are rewritten to prohibit
navigating the active tab and default all lookups to new_page.
2. Tool-level guards: navigate_page and close_page reject attempts
to act on the origin tab when in newtab mode, returning an error
that teaches the agent to self-correct.
The client now sends an `origin` field ('sidepanel' | 'newtab')
instead of injecting a soft NEWTAB_SYSTEM_PROMPT that LLMs could
ignore. Backwards compatible — defaults to 'sidepanel'.
Closes TKT-592, addresses TKT-564
* test: add newtab origin navigation guard tests
- 14 new prompt tests verifying the system prompt adapts correctly
for newtab vs sidepanel origin (execution rules, tool selection table,
absence of conflicting single-tab guidance)
- 6 new integration tests for navigate_page and close_page guards:
rejects origin tab in newtab mode, allows non-origin tabs, allows
all tabs in sidepanel mode, backwards compatible with no session
- Simplify CLI section: remove confusing MCP jargon, clarify it works
from terminal and AI coding agents
- Replace "point the CLI at your MCP server" with plain language
- Add Vertical Tabs to the features list
* feat(cli): add install scripts for macOS, Linux, and Windows
Bash script (install.sh) for macOS/Linux and PowerShell script
(install.ps1) for Windows. Both download the correct platform binary
from GitHub Releases with checksum verification, version resolution,
and PATH setup.
* fix(cli): address PR review comments for install scripts
- Add checksum verification to install.ps1 using Get-FileHash
- Add warnings on all checksum skip paths in install.sh
- Use grep -F (fixed-string) instead of regex for filename matching
- Add ?per_page=100 to GitHub API call in install.ps1
- Use random temp directory name in install.ps1 to avoid collisions
* fix(cli): address installer review feedback
* fix(cli): use full path for dist artifacts in release step
* test: temporarily allow release workflow on any branch
* fix(cli): restore main-only guard, remove goreleaser dependency
Replaces GoReleaser (Pro-only monorepo feature) with plain go build.
Tested: RC release created successfully on branch with all 6 binaries.
* fix(cli): update goreleaser tag_prefix to match browseros-cli-v* format
* fix(cli): replace goreleaser with plain go build for releases
GoReleaser free version cannot parse prefixed tags (browseros-cli-v*).
monorepo.tag_prefix is a Pro-only feature.
Replaced with direct go build + gh release create:
- Builds all 6 targets with go build (verified locally)
- Creates tar.gz/zip archives with checksums
- Uses gh release create to publish
- No external tool dependency
GoReleaser free cannot parse slash-prefixed tags (cli/v0.0.1) as semver.
Switch to browseros-cli-v0.0.1 format which is valid semver after
stripping the prefix. Remove the monorepo config (GoReleaser Pro only).
* ci(cli): change release workflow to manual dispatch from main
- Trigger via Actions UI with a version input (e.g. "0.1.0")
- Only runs on main branch
- Creates git tag cli/v<version> automatically
- Then GoReleaser builds all 6 binaries and creates the GitHub Release
* feat: add scoped release notes, changelog PR, and idempotent tags to CLI workflow
- Add concurrency group to prevent parallel releases
- Add scoped release notes from commits touching the CLI directory
- Pass release notes to goreleaser via --release-notes flag
- Make tag creation idempotent for safe re-runs
- Tag the saved release SHA, not HEAD after branching
- Add CHANGELOG.md and auto-update via PR with auto-merge
- Add pull-requests: write permission
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add release workflow for agent extension
Adds a workflow_dispatch workflow that builds the WXT extension,
creates a .zip for sideloading, generates scoped release notes with
contributors and PR links, creates a GitHub release with the zip
attached, and opens an auto-merge PR to update CHANGELOG.md.
* fix: correct API URL to api.browseros.com
* fix: remove duplicate PR numbers and contributors from extension release notes
Apply the same fixes from the agent-sdk workflow:
- Skip PR number if already in commit subject (squash merges)
- Remove custom Contributors section (GitHub auto-generates one)
- Clean up unused variables
* fix: use absolute path for extension zip in release upload
* fix: wxt zip already builds, use correct output path
- Remove separate build step since wxt zip runs the build internally
- Fix zip path from .output/*.zip to dist/*-chrome.zip
* fix: run codegen before wxt zip to generate graphql types
- Skip adding PR number if already present in the commit subject
(squash merges include "(#123)" automatically)
- Remove custom Contributors section since GitHub auto-generates one
with avatars at the bottom of every release
Add a compile-only mode to the server build pipeline for CI/CD
environments that don't have R2 credentials. The --compile-only flag
skips resource staging and upload, producing only compiled binaries.
* feat: create GitHub release with changelog on agent-sdk publish
After publishing to npm, the workflow now:
- Tags the commit as agent-sdk-v<version>
- Generates release notes from commits that modified the agent-sdk
directory since the last agent-sdk release tag
- Creates a GitHub release with those notes
First release will show "Initial release" since no previous tag exists.
* feat: update CHANGELOG.md on agent-sdk release
Add a CHANGELOG.md for @browseros-ai/agent-sdk and update the release
workflow to prepend a versioned entry with the release notes before
creating the GitHub release. The changelog is committed to main
automatically.
* fix: address review issues in agent-sdk release workflow
- Add explicit permissions: contents: write
- Replace sed with head/tail for safe CHANGELOG insertion (fixes
double-quote and backslash corruption in commit messages)
- Handle empty release notes with "No notable changes." fallback
- Make git tag idempotent for workflow reruns (2>/dev/null || true)
* fix: use PR with auto-merge for changelog updates
Direct push to main fails due to branch protection requiring PRs.
Instead, create a branch, open a PR, and auto-merge via squash.
* feat: add contributors and PR links to agent-sdk release notes
Release notes now include PR numbers (linked automatically by GitHub),
GitHub usernames for each commit author, and a contributors section
at the bottom. All scoped to commits that modified the agent-sdk path.
* fix: reorder release steps and fix tag/idempotency issues
- Capture release SHA before any branching so the tag always points
to the main commit that was built and published to npm
- Reorder: generate notes → publish → tag/release → changelog PR
(changelog is lowest-stakes, runs last)
- Make tag push and release create idempotent for safe re-runs
(fall back to gh release edit if release already exists)
- Add || true to gh pr merge --auto in case auto-merge is not enabled
- Explicit git checkout main before creating changelog branch
* fix: explicit error handling for tag/release and contributor dedup
- Replace silent || true guards with explicit checks that log what's
happening (tag exists, remote tag exists, release exists) so errors
are visible instead of swallowed
- Fix contributor dedup: use grep -qw (word match) instead of grep -qF
(substring match) so "dan" isn't excluded when "dansmith" exists
* fix: exclude current version tag when finding previous release
On re-runs, the current version's tag already exists on the remote, so
PREV_TAG resolves to it and git log produces empty output. Filter it
out so release notes are generated against the actual previous version.
* ci: prevent concurrent agent-sdk release runs
Add concurrency group so multiple dispatches queue instead of racing
on the same tag/release/PR.
* feat(cli): production-ready CLI with auto-launch, install, and cross-platform builds
- init: accept URL argument and --auto flag for non-interactive setup
- install: new command to download BrowserOS app for current platform
- launch: auto-detect and launch BrowserOS when server is not running
- discovery: prefer server.json (live) over config.yaml (may be stale)
- errors: actionable messages guiding users to init/install
- goreleaser: cross-platform builds for 6 targets (darwin/linux/windows × amd64/arm64)
- ci: GitHub Actions workflow to release CLI binaries on cli/v* tag push
* fix(cli): check health status code and add progress dots during launch
- Health check in newClient() now verifies HTTP 200, not just no error
- waitForServer prints dots during the 30s poll so users know it's working
* refactor(cli): make launch an explicit command, remove auto-launch from newClient
- launch: new explicit command to find and open BrowserOS app
- launch: probes server.json, config, and common ports before launching
- launch: if already running, reports URL instead of launching again
- init --auto: uses port probing to find running servers
- install --deb: errors on non-Linux instead of silently downloading DMG
- error messages: guide users to launch/install/init explicitly
- removed: auto-launch from newClient() — CLI never does something surprising
* fix(cli): platform-native detection, launch, and install for all OSes
Detection (isBrowserOSInstalled):
- macOS: uses `open -Ra` to query Launch Services (no hardcoded paths)
- Linux: checks /usr/bin/browseros (.deb), browseros.desktop, AppImage search
- Windows: checks %LOCALAPPDATA%\BrowserOS\Application\BrowserOS.exe
and HKCU/HKLM uninstall registry keys
Launch (startBrowserOS):
- macOS: `open -b com.browseros.BrowserOS` (bundle ID, not path)
- Linux: `browseros` binary, AppImage, or `gtk-launch browseros`
(fixed: was using xdg-open which opens by MIME type, not desktop files)
- Windows: runs BrowserOS.exe from known Chromium per-user install path
(fixed: was using `cmd /c start BrowserOS` which doesn't resolve)
Install (runPostInstall):
- macOS: hdiutil attach → cp -R to /Applications → hdiutil detach
- Linux: chmod +x for AppImage, dpkg -i instruction for .deb
- Windows: launches installer exe
- --deb flag now errors on non-Linux platforms
Removed auto-launch from newClient() — CLI never does surprising things.
Sources verified from:
- packages/browseros/build/common/context.py (binary names per platform)
- packages/browseros/build/modules/package/linux.py (.deb structure, .desktop file)
- packages/browseros/chromium_patches/chrome/install_static/chromium_install_modes.h
(Windows base_app_name="BrowserOS", registry GUID, install paths)
- /Applications/BrowserOS.app/Contents/Info.plist (bundle ID)
* fix: broaden connection error detection for main page and sidepanel
The connection error check required both "Failed to fetch" AND
"127.0.0.1" in the error message. On the main page, the browser
only produces "Failed to fetch" without the IP, so users saw a
generic "Something went wrong" instead of the troubleshooting link.
Broaden detection to also match "localhost" and bare "Failed to fetch"
errors that don't contain an external URL. Also pass providerType in
NewTabChat so provider-specific errors render correctly.
Closes #526
* fix: simplify connection error detection
All chat requests go through the local BrowserOS agent server, so any
"Failed to fetch" error is always a local connection issue. Remove the
unnecessary 127.0.0.1/localhost/URL checks.
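The before/after of the check, roughly (names are illustrative):

```ts
// Before: required both markers, so main-page errors without the IP fell
// through to the generic "Something went wrong" state.
const isConnectionErrorBefore = (message: string) =>
  message.includes("Failed to fetch") && message.includes("127.0.0.1");

// After: every chat request targets the local agent server, so any fetch
// failure is treated as a local connection issue.
const isConnectionErrorAfter = (message: string) => message.includes("Failed to fetch");
```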
* fix: pass providerType to agentUrlError ChatError instances
Port conflicts are expected — Chromium retries with a different port.
These errors were flooding Sentry (14k+ events) without user impact.
- handleStartupError: move Sentry.captureException below the
port-in-use check so it only fires for unexpected startup errors
- handleControllerStartupError: skip Sentry capture for port errors
- index.ts: exit early for port errors before Sentry capture
- Change dialog width from sm:max-w-2xl (672px) to sm:w-[70vw] sm:max-w-4xl
so it takes 70% of viewport width, capped at 896px
- Add overflow-x-auto on table wrappers so wide tables scroll horizontally
instead of being clipped
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: integrate models.dev for dynamic LLM provider/model data (#TKT-657)
Replace hardcoded model lists with data sourced from models.dev so new
providers and models appear automatically when the community adds them.
- Add build script (scripts/generate-models.ts) that fetches models.dev/api.json
and outputs a compact JSON with 10 providers and 520 models
- Replace hardcoded MODELS_DATA (50 models) with dynamic models.dev lookups
- Add searchable model combobox (Popover + Command) replacing plain Select dropdown
- Enrich provider templates with models.dev metadata (context window, image support)
- Keep chatgpt-pro, qwen-code, browseros, openai-compatible as hardcoded providers
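A minimal sketch of what the generation script does, assuming the models.dev payload is keyed by provider id with a nested models map; the field names and provider subset below are guesses, not the actual api.json schema:

```ts
// Sketch of scripts/generate-models.ts under an assumed payload shape.
import { writeFileSync } from "node:fs";

interface RawModel {
  name?: string;
  limit?: { context?: number };
}

type RawPayload = Record<string, { models?: Record<string, RawModel> }>;

async function generate(): Promise<void> {
  const response = await fetch("https://models.dev/api.json");
  if (!response.ok) throw new Error(`models.dev fetch failed: ${response.status}`);
  const payload = (await response.json()) as RawPayload;

  // Keep only providers the extension knows how to configure (illustrative subset).
  const wanted = ["anthropic", "openai", "google", "openrouter"];
  const compact: Record<string, { id: string; name: string; context?: number }[]> = {};

  for (const providerId of wanted) {
    const models = payload[providerId]?.models ?? {};
    compact[providerId] = Object.entries(models).map(([id, model]) => ({
      id,
      name: model.name ?? id,
      context: model.limit?.context,
    }));
  }

  writeFileSync("models-dev-data.json", JSON.stringify(compact, null, 2));
}

generate();
```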
* fix: address review — remove ollama-cloud mapping, fix default models, remove dead code
- Remove ollama from PROVIDER_MAP (ollama-cloud has cloud models, not local)
- Add ollama to CUSTOM_PROVIDER_MODELS with empty list (users type custom IDs)
- Update defaultModelIds to ones that exist in models.dev data:
openrouter → anthropic/claude-sonnet-4.5
lmstudio → openai/gpt-oss-20b
bedrock → anthropic.claude-sonnet-4-6
- Remove dead isCustomModel export
- Regenerate models-dev-data.json (9 providers, 486 models)
* fix: model suggestion list focus/dismiss behavior
- List only opens when input is focused or user types
- Clicking a model selects it and closes the list
- Clicking outside (blur) dismisses the list
- onMouseDown preventDefault on list items prevents blur race condition
* refactor: extract ModelPickerList component with proper open/close UX
- Collapsed state: Select-like trigger showing selected model + chevron
- Expanded state: search input + scrollable filtered list, inline
- Click outside or Escape to close, Enter to submit custom model
- Extracted as separate component (reduces dialog nesting, testable)
- No more setTimeout hacks for blur handling
* chore: remove plan doc from repo
* docs: add setup guides for ChatGPT Pro, GitHub Copilot, and Qwen Code
Add individual OAuth setup guide pages with step-by-step screenshots
for each provider. Add "Use Your Existing Subscription" section to the
Bring Your Own LLM page with card links to each guide. Register pages
in docs navigation.
* docs: add ChatGPT Pro setup screenshots
* docs: use custom provider icons for OAuth setup cards
* docs: inline SVG icons in provider cards for dark mode support
* docs: place provider icons above card titles
* feat: improve rate limit UX, usage page, and provider selector
- Show "Add your own provider for unlimited usage" CTA when BrowserOS
credits are exhausted or daily limit is reached
- Fix credit exhaustion detection to match actual error message
- Improve Usage page: remove disabled Add Credits button, add "Coming
soon" badge, add "Want unlimited usage?" section linking to providers
- Add "+ Add Provider" button at bottom of chat provider selector dropdown
* fix: use asChild pattern for Button+anchor in usage page
Replace nested <a><Button> (invalid HTML) with Button asChild
pattern per shadcn/ui convention.
* feat: UI improvements for OAuth dialog, provider badges, and events docs
- Replace OAuth device code toast with a proper Dialog showing the code
prominently with a copy button (GitHub Copilot, Qwen Code, ChatGPT Pro)
- Add "New" badge on provider template cards for ChatGPT Plus/Pro,
GitHub Copilot, and Qwen Code with orange border highlight
- Add events.md documenting all analytics events across the platform
* fix: add verificationUri to DeviceCodeDialog for popup-blocked fallback
Add verificationUri to PendingDeviceCode interface and pass it from
both handleClientAuth and handleServerAuth. Render a fallback "Open
verification page" link in DeviceCodeDialog so users can navigate
to the auth page if the popup was blocked.
- Add MCP promo banner on AI providers page with "New" badge and
"66+ tools" highlight, linking to /settings/mcp
- Add Quick Setup section on MCP settings page with copy-paste
commands for Claude Code, Gemini CLI, Codex, Claude Desktop, OpenClaw
- Consolidate MCP settings: move restart button inline with server URL,
remove separate MCP Server Settings card
- Add analytics event for promo banner clicks
* feat(eval): show mean score instead of pass/fail in report and viewer
* feat(eval): integrate NopeCHA CAPTCHA solver into eval pipeline
Add CAPTCHA detection and waiting so screenshots capture post-solve state.
Run headed with xvfb on CI since headless breaks extension content scripts.
- Add CaptchaWaiter module (detect reCAPTCHA/hCaptcha/Turnstile, poll until solved)
- Add optional `captcha` config block to EvalConfigSchema
- Wait for CAPTCHA solve before screenshot in single-agent and orchestrator-executor
- Patch NopeCHA manifest with API key before launching workers
- Fix CAPTCHA_EXT_DIR path (was pointing one level too high)
- Remove --incognito (extensions don't run in incognito; a fresh user-data-dir provides the isolation)
- CI: install xvfb, run headed via xvfb-run, pass NOPECHA_API_KEY secret
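A sketch of the wait logic, assuming a Playwright-style page object and illustrative selectors; the real detection heuristics and solved-state check may differ:

```ts
// Illustrative CAPTCHA wait: detect a known widget, then poll until it is gone
// or the deadline passes. The Page shape, selectors, and "gone means solved"
// heuristic are all assumptions.
interface PageLike {
  evaluate<T>(fn: () => T): Promise<T>;
}

async function waitForCaptchaSolve(page: PageLike, timeoutMs = 120_000): Promise<boolean> {
  const hasCaptcha = () =>
    page.evaluate(() =>
      [
        'iframe[src*="recaptcha"]',
        'iframe[src*="hcaptcha"]',
        'iframe[src*="challenges.cloudflare.com"]', // Turnstile
      ].some((selector) => document.querySelector(selector) !== null),
    );

  if (!(await hasCaptcha())) return true; // nothing to solve

  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, 2_000));
    if (!(await hasCaptcha())) return true; // solver (e.g. NopeCHA) finished
  }
  return false; // still blocked; the screenshot will show the unsolved CAPTCHA
}
```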
* fix: remove daily rate-limit middleware
The daily conversation rate limit is no longer needed. Remove the
middleware, RateLimiter class, fetch-config, error type, shared
constants, DB schema table, and integration tests.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove unused getDb() method
No longer needed after rate-limiter removal.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The eval's single-agent was passing raw task.query as the prompt,
without browser context (active tab URL, title). The agent didn't
know which page it was on, causing it to ask "which website?" instead
of browsing.
Use formatUserMessage() (same as chat-service.ts) to include browser
context in the prompt. Re-export formatUserMessage from agent/tool-loop.
* fix: prevent deleted scheduled tasks from reappearing after sync
When a scheduled task was deleted, the sync function would see the
remote job missing locally and re-add it, undoing the delete. Fix by
tracking pending deletions in storage so the sync function deletes
them from the backend instead of re-adding them locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: use read-modify-write for pending deletions to prevent concurrent clobber
Re-read pendingDeletionStorage before write-back and only remove
resolved IDs, preserving any new entries added by concurrent
removeJob calls during the sync's network I/O.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The test workflow captured exit codes but never failed the job, so PR
checks always showed green even when tests failed. Exit with the
captured code in the summarize step so each suite properly reports
pass/fail. Not a required check, so failures remain non-blocking.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): switch to ubuntu-latest runner, add OE-Clado config
- Switch workflow from self-hosted Mac Studio to ubuntu-latest
- Install BrowserOS Linux .deb in CI (no self-hosted runner needed)
- Add browseros-oe-clado-weekly.json config for orchestrator-executor
- Fix report chart to show date+time (not just date)
- Make BROWSEROS_BINARY configurable via env var
* feat(eval): add NopeCHA captcha solver extension to eval runs
- Auto-load NopeCHA extension in eval Chrome instances
- Works in incognito + headless mode
- CI workflow downloads NopeCHA before eval
- extensions/ directory gitignored (downloaded at runtime)
* feat(eval): per-config concurrency — different configs run in parallel
* feat(eval): remove concurrency limit — all runs execute in parallel
* ci: run browseros tests on pull requests
* refactor: rework 0320-github_action_for_tests based on feedback
* refactor: rework 0320-github_action_for_tests based on feedback
* chore: add CI artifacts to .gitignore
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove mikepenz/action-junit-report to fix check suite misattribution
The JUnit report action creates check runs that GitHub associates with the
CLA check suite instead of the Tests check suite, causing test reports to
appear under "CLA Assistant" in the PR checks UI.
Remove the action and rely on job status + step summary + artifact upload
for test result visibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(eval): weekly eval pipeline with R2 uploads and trend dashboard
Add infrastructure for running weekly evaluations and tracking score
trends over time:
- Auto-generated output dirs: results/{config-name}/{timestamp}/
Each eval run gets its own timestamped folder, nothing is overwritten.
- upload-run.ts: uploads eval results to Cloudflare R2. Supports
uploading a specific run or all un-uploaded runs for a config.
- weekly-report.ts: generates an interactive HTML dashboard from R2
data. Config dropdown, trend chart with hover tooltips, searchable
runs table. Groups runs by config name.
- viewer.html: client-facing 3-column run viewer (task list,
screenshots with autoplay, agent stream with messages.jsonl).
Shows performance grader axis breakdown with per-axis scores.
- browseros-agent-weekly.json: weekly benchmark config (kimi-k2p5,
webbench-2of4-50, 10 workers, performance grader, headless).
- eval-weekly.yml: GitHub Actions workflow with cron (Saturday 6am)
and manual trigger. Runs on self-hosted Mac Studio runner.
Concurrency group ensures only one eval runs at a time.
- Dashboard updates: load previous runs, messages.jsonl viewer,
grade badges show percentages, async stream loading.
- Grader updates: timeout 30min, max turns 100, DOM content
verification guidance for performance grader.
* fix(eval): address Greptile review — injection, nested dirs, escaping
- Fix script injection in eval-weekly.yml: pass github.event.inputs
through env var instead of interpolating into shell
- Fix /api/runs to enumerate nested results/{config}/{timestamp}/ dirs
- Fix /api/load-run to allow single-slash run names (config/timestamp)
- Add HTML escaping for R2-sourced values in weekly-report.ts
- Escape axis names in viewer.html renderAxesBreakdown
* fix(eval): fix biome lint — non-null assertion, template literals
* fix(eval): fix biome errors — replace var with let, fix inner function declaration
* fix(eval): address Greptile P2 issues
- isRunDir: check all subdirs for metadata.json, not just first 3
- eval-runner: guard configPath for dashboard-driven runs (fallback to 'eval')
- load-run: default unknown termination_reason to 'failed' not 'completed'
* feat(eval): make BROWSEROS_BINARY configurable via env var
The OAuth callback server on port 1455 was bound eagerly at startup,
crashing the server if another BrowserOS instance was already running.
Rewrite as a lazy class (OAuthCallbackServer) that:
- Only binds port 1455 when the user initiates a ChatGPT Pro login
- Sends GET /cancel to any existing server on the port first, then
retries up to 5 times (follows Codex CLI's cancel+retry pattern)
- Exposes /cancel endpoint so other instances/tools can cancel us
- Releases the port after the OAuth callback arrives
- Device-code providers (GitHub Copilot, Qwen) never touch port 1455
This allows running eval, dev instances, and multiple BrowserOS
instances without port conflicts. OAuth login works on whichever
instance initiates it — the others continue without OAuth.
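A condensed sketch of the cancel-then-retry bind, using node:http as a stand-in for the real server setup; endpoint path and retry count follow the description above, everything else is illustrative:

```ts
import { createServer, type Server } from "node:http";

async function bindCallbackPort(port = 1455, retries = 5): Promise<Server> {
  for (let attempt = 0; attempt < retries; attempt++) {
    // Ask any existing instance on the port to give it up first.
    await fetch(`http://127.0.0.1:${port}/cancel`).catch(() => undefined);

    try {
      return await new Promise<Server>((resolve, reject) => {
        const server = createServer((req, res) => {
          if (req.url?.startsWith("/cancel")) {
            res.end("cancelled"); // another instance wants the port; real code releases it
            return;
          }
          res.end("ok"); // real code handles the OAuth redirect here
        });
        server.once("error", reject); // e.g. EADDRINUSE if the port is still held
        server.listen(port, "127.0.0.1", () => resolve(server));
      });
    } catch {
      await new Promise((resolve) => setTimeout(resolve, 500)); // port still held, retry
    }
  }
  throw new Error(`Could not bind OAuth callback port ${port}`);
}
```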
* feat: auto-discover server port via ~/.browseros/server.json
Server writes its port to ~/.browseros/server.json on startup so the CLI
can auto-discover the server URL without requiring `browseros-cli init`.
Discovery chain: BROWSEROS_URL env > config.yaml > server.json > error
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
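A hedged sketch of the server-side write; the path follows the commit above, the field names are a subset of the discovery config, and the try/catch wrap anticipates the review follow-up below:

```ts
import { mkdirSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

interface ServerDiscoveryConfig {
  url: string; // illustrative subset; the shared type also carries version metadata
  port: number;
  server_version: string;
}

// Best-effort write: a failure here must not take down a healthy server,
// since server.json discovery is only a convenience for the CLI.
function writeServerConfig(config: ServerDiscoveryConfig): void {
  try {
    const dir = join(homedir(), ".browseros");
    mkdirSync(dir, { recursive: true });
    writeFileSync(join(dir, "server.json"), JSON.stringify(config, null, 2));
  } catch (error) {
    console.warn("failed to write server.json", error);
  }
}
```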
* fix: address review feedback for PR #504
- Use synchronous unlinkSync in stop() since process.exit() fires
immediately after, abandoning any pending async operations
- Wrap writeServerConfig in try/catch so a write failure doesn't crash
a healthy server for a convenience feature
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: type server discovery config and add version metadata
Add ServerDiscoveryConfig interface to @browseros/shared and enrich
server.json with server_version, browseros_version, and chromium_version.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: normalize URL from server.json for consistency
All other URL sources (env var, config.yaml) pass through
normalizeServerURL; apply the same to the server.json path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add voice recording UI with waveform overlay to new tab search bar
Add a microphone button to the NewTab search bar that opens a fullscreen
recording overlay powered by react-voice-visualizer. The overlay shows a
real-time waveform visualization during recording, recording time, and a
stop button. On completion, the audio is transcribed via the existing
gateway endpoint and the transcript auto-navigates to inline chat.
Changes:
- Extract transcribeAudio() to shared lib/voice/transcribe-audio.ts
- Add VoiceRecordingOverlay component with react-voice-visualizer
- Add Mic button to NewTab search bar
- Track analytics via existing NEWTAB_VOICE_* events
- Handle cancel (backdrop click) vs submit (stop button) correctly
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address PR review comments for voice recording overlay
- Reset processingRef on transcription error to prevent stuck state
- Use stable callback refs to prevent useEffect re-runs from inline
arrow function props (fixes timer reset and unnecessary re-processing)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: replace voice overlay with inline sidepanel-style voice UI
Remove react-voice-visualizer dependency and VoiceRecordingOverlay.
Instead use the same inline voice pattern as the sidepanel ChatInput:
- Waveform bars replace the search input during recording
- Mic/stop/loading button states in the search bar
- Transcript populates the search input on completion
- Voice error shown inline below the search bar
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: add build smoke test to catch compile failures
Compiles the server binary (darwin-arm64) and verifies --version outputs
the correct version from package.json. Uses an empty resource manifest
and stub env vars so the test runs without R2 access or real secrets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: address review feedback for PR #511
- Derive build target from process.platform/arch for CI portability
- Include binary stderr in --version assertion for better diagnostics
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sharp is a native C module (libvips) whose .node binaries can't be
embedded in Bun compiled executables. It was imported at the top level
in copilot-fetch.ts, crashing the entire server at startup.
Replace with jimp (pure JavaScript, zero native deps) which bundles
cleanly into compiled binaries. Same resize algorithm preserved.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add Qwen Code as OAuth LLM provider with refactored OAuth hooks
Add Alibaba Qwen Code as a third OAuth provider using Device Code flow
with PKCE. Free tier: 2,000 requests/day, up to 1M token context.
Refactoring:
- Extract useOAuthProviderFlow hook (eliminates ~180 lines of duplicated
OAuth logic from AISettingsPage for ChatGPT Pro + Copilot + Qwen)
- Extract resolveOAuthConfig in config.ts (shared resolver for all OAuth
providers, parameterized by provider name, default model, refresh flag)
- Generalize token-manager device code flow to support PKCE
(code_challenge/code_verifier) and form-urlencoded content type
New code:
- Qwen Code provider config with PKCE + form encoding flags
- Provider factories (both provider.ts and provider-factory.ts)
- Extension UI (template card, models, analytics, dialog)
* fix: use portal.qwen.ai as API base URL for OAuth tokens
DashScope (dashscope.aliyuncs.com) expects Alibaba Cloud API keys,
not OAuth tokens from chat.qwen.ai. The correct endpoint for OAuth
Bearer tokens is portal.qwen.ai/v1.
* fix: correct Qwen Code model IDs and context windows
- coder-model (1M context): virtual alias that routes to best model
- qwen3-coder-plus (1M): was incorrectly 131K
- qwen3-coder-flash (1M): new, speed-optimized variant
- qwen3.5-plus (1M): was incorrectly 1048576 (power-of-two vs decimal)
- Removed qwen3-coder-next (local/self-hosted, not available via OAuth)
- Default model changed to coder-model (auto-routes server-side)
* fix: move Qwen device code request to extension (bypasses WAF)
Alibaba WAF blocks server-side requests to chat.qwen.ai. Move the
initial device code request to the extension (browser context with
cookies), then hand off the deviceCode + codeVerifier to the server
for background polling via new POST /oauth/:provider/poll endpoint.
* fix: persist OAuth flow-started flag in sessionStorage
The flowStartedRef was lost when the component remounted (e.g. user
navigated to onboarding then back to settings). Use sessionStorage
to persist the flag so auto-create works after navigation.
* revert: remove sessionStorage for OAuth flow flag
Revert to simple useRef pattern matching the original ChatGPT Pro
implementation. The auto-create works when the user stays on the
AI settings page during auth.
* revert: move Qwen back to server-side device code flow
WAF block was temporary (rate-limiting), not permanent. Server-side
fetch to chat.qwen.ai now works. Reverted client-side device code
approach — Qwen now uses the same clean server-side flow as Copilot.
Removed: clientSideDeviceCode config, startClientSideDeviceCode(),
POST /oauth/:provider/poll endpoint, startDeviceCodePolling().
* feat: add WAF detection, rate-limit protection, and token storage endpoint
- Detect WAF captcha responses (HTML instead of JSON) in device code
request and token polling, with user-friendly error messages
- Add 30s cooldown on "USE" button to prevent rapid clicks triggering WAF
- WAF-blocked poll requests silently retry instead of aborting
- Add POST /oauth/:provider/token endpoint for storing externally-provided
tokens (useful for future fallback flows)
- Add storeTokens() method to OAuthTokenManager
- Pass server error messages through to extension toast notifications
* refactor: remove 30s cooldown, simplify OAuth hook
The hook is now identical for all providers — server handles retries
via activeDeviceFlows.delete(). Removed flowStartedAtRef cooldown
that was blocking legitimate retries.
* feat: client-side OAuth for Copilot and Qwen Code
Move device code OAuth flow to the extension for GitHub Copilot and
Qwen Code. The extension makes requests using Chrome's network stack,
which bypasses Alibaba WAF TLS fingerprint detection that blocks
server-side Bun/Node.js fetch.
New files:
- client-oauth.ts: Client-side device code + PKCE + token polling
Changes:
- useOAuthProviderFlow: handleClientAuth() for providers with clientAuth
config, handleServerAuth() for others (ChatGPT Pro)
- AISettingsPage: clientAuth config for Copilot and Qwen Code
- WAF detection: opens provider site for captcha solving on block
Server-side device code flow preserved as fallback (token-manager.ts,
providers.ts). Token storage via POST /oauth/:provider/token endpoint.
* fix: export OAuthProviderFlowConfig type, fix typecheck errors
- Export OAuthProviderFlowConfig interface so AISettingsPage can use it
instead of duplicating the type inline
- Fix string | null → string | undefined for agentServerUrl parameter
Add CHATGPT_PRO_SUPPORT and GITHUB_COPILOT_SUPPORT feature flags gated
on minServerVersion 0.0.77. Hide template cards and provider type
dropdown options when the server doesn't support the OAuth endpoints.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add model selector to newtab search bar
Add AI provider/model selector button to the newtab homepage footer bar,
matching the existing button aesthetics (Workspace, Tabs, Apps). Reuses
ChatProviderSelector popover from sidepanel. Users can now see and change
their AI provider before starting a conversation from the newtab page.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: clean up newtab footer with icon-only buttons
Reduce visual clutter in the search bar footer by converting Provider,
Workspace, and Tabs buttons to compact icon-only buttons (8x8). Text
labels and chevron indicators are removed — native title tooltips
provide discoverability on hover. Apps button on the right keeps its
text label per user preference.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: add hover-expand labels to newtab footer icon buttons
Replace static title tooltips with smooth hover-expand animation —
buttons show icon-only by default, text label slides out on hover
via max-w transition. Gives a clean compact look while keeping
labels discoverable.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: revert workspace/tabs to full text, keep provider hover-expand only
Restore full text labels for Workspace and Tabs buttons. Only the
provider selector uses the compact icon + hover-expand pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: simplify provider selector to plain icon button
Remove hover-expand animation, use a simple icon-only button with
native title tooltip for the provider selector.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add GitHub Copilot as OAuth-based LLM provider
Add GitHub Copilot as a second OAuth provider using the Device Code flow
(RFC 8628). Users authenticate via github.com/login/device, and the server
polls for token completion. Supports 25+ models through a single Copilot
subscription.
Key changes:
- Device Code OAuth flow in token manager (poll with safety margin)
- Custom fetch wrapper injecting Copilot headers + vision detection
- Provider factory using createOpenAICompatible for Chat Completions API
- Extension UI with template card, auto-create on auth, and disconnect
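The device code flow in outline, against GitHub's documented device-flow endpoints; the client id and scope are placeholders, and error handling is trimmed to the essentials (the real token manager adds persistence plus the safety margin noted above):

```ts
const CLIENT_ID = "<copilot-client-id>"; // placeholder

interface DeviceCodeResponse {
  device_code: string;
  user_code: string;
  verification_uri: string;
  interval: number; // seconds between polls
  expires_in: number;
}

async function requestDeviceCode(): Promise<DeviceCodeResponse> {
  const response = await fetch("https://github.com/login/device/code", {
    method: "POST",
    headers: { Accept: "application/json", "Content-Type": "application/json" },
    body: JSON.stringify({ client_id: CLIENT_ID, scope: "read:user" }), // scope illustrative
  });
  return (await response.json()) as DeviceCodeResponse;
}

async function pollForToken(deviceCode: string, intervalSec: number): Promise<string> {
  // Poll slightly slower than the advertised interval as a safety margin.
  const delayMs = (intervalSec + 1) * 1000;
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    const response = await fetch("https://github.com/login/oauth/access_token", {
      method: "POST",
      headers: { Accept: "application/json", "Content-Type": "application/json" },
      body: JSON.stringify({
        client_id: CLIENT_ID,
        device_code: deviceCode,
        grant_type: "urn:ietf:params:oauth:grant-type:device_code",
      }),
    });
    const data = (await response.json()) as { access_token?: string; error?: string };
    if (data.access_token) return data.access_token;
    if (data.error && data.error !== "authorization_pending" && data.error !== "slow_down") {
      throw new Error(`Device flow failed: ${data.error}`);
    }
  }
}
```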
* fix: address PR review comments for GitHub Copilot OAuth
- Validate device code response for error fields (GitHub can return 200
with error payload)
- Store empty refreshToken instead of access token for GitHub tokens
- Add closeButton to Toaster for dismissing device code toast
* fix: add github-copilot to agent provider factory
The chat route uses a separate provider-factory.ts (agent layer) from the
test-provider route (llm/provider.ts). Added createGitHubCopilotFactory
to the agent factory so chat works with GitHub Copilot.
* fix: add github-copilot to provider icons, models, and dialog
- Add Github icon from lucide-react to providerIcons map
- Add 8 Copilot models (GPT-4o, Claude, Gemini, Grok) to models.ts
- Add github-copilot to NewProviderDialog zod enum, validation skip,
canTest check, and OAuth credential message
* fix: reorder copilot models with free-tier models first
Put models available on Copilot Free at the top (gpt-4o, gpt-4.1,
gpt-5-mini, claude-haiku-4.5, grok-code-fast-1), followed by
premium models that require paid Copilot subscription.
* fix: set correct 64K context window for Copilot models
Copilot API enforces a 64K input token limit regardless of the
underlying model's native context window. Updated all model entries
and the default template to 64000 so compaction triggers correctly.
* fix: use actual per-model prompt limits from Copilot /models API
Queried api.githubcopilot.com/models for real max_prompt_tokens values.
GPT-4o/4.1 have 64K, Claude/gpt-5-mini have 128K, GPT-5.x have 272K.
Also updated model list to match what's actually available on the API
(e.g. claude-sonnet-4.6 instead of 4.5, added gpt-5.4/5.2-codex).
* feat: resize images for Copilot using VS Code's algorithm
Large screenshots cause 413 errors on Copilot's API. Resize images
following VS Code's approach: max 2048px longest side, 768px shortest
side, re-encode as JPEG at 75% quality. Uses sharp for server-side
image processing.
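The dimension math in isolation (the actual encode step goes through the image library; function and constant names here are illustrative):

```ts
// Cap the longest side at 2048px and the shortest at 768px while preserving
// aspect ratio; never upscale.
const MAX_LONG_SIDE = 2048;
const MAX_SHORT_SIDE = 768;

function targetDimensions(width: number, height: number): { width: number; height: number } {
  const longSide = Math.max(width, height);
  const shortSide = Math.min(width, height);

  // Scale by whichever constraint is tighter.
  const scale = Math.min(1, MAX_LONG_SIDE / longSide, MAX_SHORT_SIDE / shortSide);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// Example: a 2560x1440 screenshot gives scale = min(1, 0.8, 0.533) = 0.533,
// so the resized image is 1365x768.
```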
* fix: address all Greptile P1 review comments
- Add .catch() on fire-and-forget pollDeviceCode to prevent unhandled
rejection crashes (Node 15+)
- Add deduplication guard (activeDeviceFlows Set) to prevent concurrent
device code flows for the same provider
- Add runtime validation of server response in frontend before calling
window.open() and showing toast
- Remove dead GITHUB_DEVICE_VERIFICATION constant from urls.ts
* fix: upgrade biome to 2.4.8, fix all lint errors, and address review bugs
- Upgrade biome from 2.4.5 to 2.4.8 (matches CI) and migrate configs
- Fix image resize: only re-encode when dimensions actually change
- Fix device code polling: retry on transient network errors instead of aborting
- Allow restarting device code flow (clear old flow instead of throwing 500)
- Fix pre-existing noNonNullAssertion and noExplicitAny lint errors globally
* fix: address Greptile P2 review — image resize and config guard
- Fix early-return guard: check max/min sides against their respective
limits (MAX_LONG_SIDE/MAX_SHORT_SIDE) instead of both against SHORT
- Preserve PNG alpha: detect hasAlpha and keep PNG format instead of
unconditionally converting to lossy JPEG
- Keep browserosId guard in resolveGitHubCopilotConfig consistent with
ChatGPT Pro pattern (safety check that caller context is valid)
* feat: update Copilot models to full list from pricing page, default to gpt-5-mini
Added all 23 models from GitHub Copilot pricing page. Ordered with
free-tier models first (gpt-5-mini, claude-haiku-4.5), then premium.
Changed default from gpt-4o to gpt-5-mini since it's unlimited on
Pro plan and has 128K context (vs gpt-4o's 64K limit).
* fix(skills): read-only view mode for built-in skills
- SkillCard shows Eye icon + "View" for built-in, Pencil + "Edit" for user
- SkillDialog in read-only mode: disabled fields, no toolbar on markdown
editor, "View Skill" title, "Close" button, no "Update Skill"
- Hide tip section in read-only mode
* fix(skills): use react-markdown for read-only skill view
Replace MDXEditor with react-markdown for viewing built-in skills.
MDXEditor chokes on code fences, angle brackets, and image syntax
causing content truncation. react-markdown handles standard markdown
correctly with no rendering issues.
* Revert "feat: convert settings to popup dialog (#477)"
This reverts commit 42aa0ff1ef.
* fix: address review feedback for PR #498
- Remove erroneous SETTINGS_PAGE_VIEWED_EVENT tracking from SidebarLayout
(was firing on every non-settings page navigation)
- Fix mobile settings sidebar not closing on route change by merging
setMobileOpen(false) into the pathname-dependent analytics useEffect
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: select text and pass to sidepanel
* fix: lint issues
* fix: persist selection across tabs
* fix: review comments
* fix: change when the selection is cleared
* feat: sanitize url
* fix(skills): UI section separation and fix find-alternatives rendering
- Split skills page into "My Skills" (user) and "BrowserOS Skills" (built-in) sections
- Fix find-alternatives SKILL.md — replace angle bracket placeholders with curly
braces to prevent MDXEditor from parsing them as JSX and rendering empty content
* fix(skills): bump find-alternatives to v1.1 for CDN sync
* feat: update chat UI from homepage
* fix: vertical scroll
* fix: horizontal scroll issue
* fix: lint issues
* fix: header width
* fix: message input from home to chat
* feat: created sidebar header support in new tab chat
* fix: remove history from new tab chat
* fix: remove the shared element transition
* fix: lint issues
* fix: review comments
* fix: defer the sendMessage callback
* fix: all code concerns
* fix: preserve state of chat on homepage
* fix: review comments
* fix(skills): separate built-in and user skills into distinct directories
- Move built-in skills to ~/.browseros/skills/builtin/, user skills stay in root
- Unify seed + sync into single syncBuiltinSkills() function, delete seed.ts
- Preserve user's enabled/disabled state during remote sync version updates
- Add catalog reconciliation — remove built-in skills dropped from remote catalog
- Fallback to bundled defaults per-skill when remote sync fails
- One-time migration moves existing default skills from root to builtin/
- Add builtIn field to SkillMeta, determined by directory (not metadata)
- UI shows "Built-in" badge, hides delete button for built-in skills
- Reject deletion of built-in skills in service layer
- Check both dirs for ID collision on skill creation
* fix(skills): address review — dedup by id, guard applyEnabled regex
- loader.ts: deduplication now keys on skill.id (directory slug) not
skill.name (display name), preventing silent drops on name collision
- remote-sync.ts: applyEnabled checks if regex matched before writing,
logs warning if remote content lacks an enabled field
* fix(skills): reconciliation preserves bundled defaults, delete returns 403
- reconcileRemovedSkills now keeps DEFAULT_SKILLS IDs in the safe set,
preventing delete-then-reinstall cycle that lost enabled:false state
- DELETE /skills/:id returns 403 for built-in skills instead of 500
* refactor(skills): simplify syncBuiltinSkills to single clean pass
Build content map (bundled + remote), iterate once, preserve enabled,
reconcile deletions. Removes 7 helper functions, 70 lines of code.
* refactor(skills): extract syncOneSkill, patch content before writing
- syncBuiltinSkills is now 15 lines: build map, iterate, clean up
- syncOneSkill: flat, patches enabled state before writing (single write)
- setEnabled: pure function for content patching
- removeObsoleteSkills: extracted from inline block
* feat: convert settings page to popup dialog, move workflows to main nav
Replace the dedicated settings page layout (SettingsSidebarLayout) with a
modal dialog (SettingsDialog) that opens on top of the current page. Settings
are now accessible via a dialog triggered from the main sidebar, eliminating
the confusing dual-sidebar navigation pattern.
- Create SettingsDialog with tabbed left panel and content area
- Move Workflows into main sidebar navigation (feature-gated)
- Remove /settings/* routes (except /settings/survey)
- Delete SettingsSidebarLayout and SettingsSidebar components
- Update backward compatibility redirects
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: setup new urls for the dialog box
* fix: dialog close button
* fix: settings analytics
* fix: address review comments
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* feat: add ChatGPT Pro OAuth as LLM provider
Adds OAuth 2.0 (Authorization Code + PKCE) flow so users can authenticate
with their ChatGPT Pro subscription to power BrowserOS's agent, matching
the pattern used by Codex CLI, OpenCode, and Pi.
Server:
- OAuth token lifecycle (PKCE, exchange, refresh, SQLite storage)
- Dedicated callback server on port 1455 (Codex client ID registration)
- Codex fetch wrapper routing API calls to chatgpt.com/backend-api
- Config resolution + provider factories for all code paths (chat, test, refine)
Extension:
- ChatGPT Pro template card with OAuth flow trigger
- Status polling hook + auto-create provider on auth success
- Model list with Codex-supported models (gpt-5.x-codex family)
* fix: address Greptile PR review comments
- Wire OAuth callback server stop handle into onShutdown (P1: port 1455 leak)
- Guard against missing refresh token + clear stale tokens on failed refresh (P1)
- Add logger.warn to silent catch in codex-fetch body mutation
- Document JWT trust assumption in parseAccessTokenClaims
- Source model ID from provider template instead of hard-coding
* simplify: remove unnecessary OAuth shutdown wiring and useCallback
- Revert OAuthHandle interface — callback server port releases on process exit
- Remove stopCallbackServer from shutdown flow (dead code)
- Remove all useCallback from useOAuthStatus per CLAUDE.md guidance
* style: add readonly modifiers and braces per TS style guide
* docs: add E2E test screenshots for ChatGPT Pro OAuth
* fix: strip item IDs from Codex requests to fix multi-turn conversations
* fix: preserve function_call_output IDs in Codex requests
* fix: resolve Codex store=false + tool-use incompatibility
- Pass providerOptions { openai: { store: false } } to ToolLoopAgent
so the AI SDK inlines content instead of using item_reference
- Strip item IDs and previous_response_id in codex-fetch (safety net)
- Use .responses() model (Codex only speaks Responses API format)
* fix: remove non-Codex model gpt-5.2 from chatgpt-pro model list
* fix: strip unsupported Codex params and update model list
- Strip temperature, max_tokens, top_p from Codex requests (unsupported)
- Add all available Codex models including gpt-5.4, gpt-5.2, gpt-5.1
* chore: remove screenshots containing email
* feat: enable reasoning events for ChatGPT Pro Codex models
* chore: set reasoning effort to high for ChatGPT Pro
* feat: add configurable reasoning effort and summary for ChatGPT Pro
- Add reasoningEffort (none/low/medium/high) and reasoningSummary
(auto/concise/detailed) dropdowns in the Edit Provider dialog
- Pass through extension → chat request → agent config → providerOptions
- Defaults: effort=high, summary=auto
* fix: strip max_output_tokens from Codex requests (fixes compaction)
* fix: address Greptile P1 issues
- Fix default model fallback: gpt-4o → gpt-5.3-codex (Codex endpoint)
- Clear stale tokens on refresh failure (prevents infinite retry loop)
- Only auto-create provider after explicit OAuth flow, not on page load
- Add catch block to auto-create effect with error toast
* feat: add remote skill download and auto-sync
Download default skills from remote catalog on first setup with
bundled fallback when offline. Background sync every 45 minutes
checks for new/updated skills without overwriting user-customized
ones. Tracks installed defaults via content hashes in a local
manifest file.
* feat: make skills catalog URL configurable and add generation script
Add SKILLS_CATALOG_URL env var (following CODEGEN_SERVICE_URL pattern)
with fallback to the default constant. Add script to generate
catalog.json from bundled defaults for static hosting.
* feat: add R2 upload script and use cdn.browseros.com for catalog URL
Add upload-skills-catalog.ts that generates and uploads catalog.json
to Cloudflare R2 (same infra as existing build artifacts). Update
default catalog URL to cdn.browseros.com/skills/v1/catalog.json.
* test: add E2E tests for remote skill sync against live CDN
* fix: address code review findings — security, validation, DRY
- Add path traversal protection via safeSkillDir in writeSkillFile
and readSkillContent (reuses existing validation from service.ts)
- Add runtime type guards for catalog JSON and manifest JSON parsing
- Fix seedFromRemote to return false on partial failure so bundled
fallback kicks in
- Add per-skill error handling in syncRemoteSkills so one bad skill
doesn't crash the entire sync
- Wire stopSkillSync into Application.stop() shutdown path
- Extract version from frontmatter in seedFromBundled instead of
hardcoding '1.0'
- Consolidate duplicated logic: reuse installSkill/writeSkillFile/
contentHash/saveManifest from remote-sync.ts in seed.ts
- Extract shared catalog generation into scripts/catalog-utils.ts
* test: add flow tests for all four sync scenarios against live CDN
* refactor: remove redundant scripts and inline catalog generation
Drop generate-skills-catalog.ts, catalog-utils.ts, and
e2e-remote-sync.test.ts (covered by flows.test.ts). Inline
catalog generation into upload-skills-catalog.ts.
* test: add full E2E server flow test against live CDN
Tests all 7 steps of the real server lifecycle: fresh seed from CDN,
no-op sync, user edit preservation, skill reinstall, custom skill
protection, background timer firing, and second startup skip.
* chore: remove e2e-server-flow test
* fix: address Greptile review — entry validation, size limit, DRY, no-op saves
- Validate individual skill entries in catalog (id, version, content
must all be strings) not just the top-level shape
- Add 1MB response size limit on catalog fetch to prevent resource
exhaustion from compromised/misconfigured CDN
- Skip manifest save when sync cycle had no changes (avoids
unnecessary disk I/O every 45 minutes)
- Share extractVersion via remote-sync.ts export, remove duplicate
from seed.ts
* fix: prevent bundled fallback from overwriting partial remote seeds
When seedFromRemote partially fails, the bundled fallback now skips
skills already in the manifest (installed by the partial remote
seed). Also adds Content-Length early check before downloading the
full catalog response body.
* fix: run sync immediately on startup, not just on interval
Previously the first sync fired 45 minutes after boot. Now
startSkillSync runs one sync immediately so returning users
get skill updates right away.
* refactor: simplify sync — remote always wins, remove manifest
Remote catalog is the source of truth. If a skill exists in the
catalog, its version is compared against local frontmatter and
overwritten when newer. No manifest file, no content hashes.
User-created skills (IDs not in catalog) are never touched.
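A sketch of the remote-wins pass under assumed catalog and helper shapes; the naive float comparison assumes '1.x'-style versions and may not match the real implementation:

```ts
interface CatalogSkill {
  id: string;
  version: string; // e.g. '1.1'
  content: string; // full SKILL.md including frontmatter
}

// For every catalog skill, overwrite the local copy when the remote version is
// newer. Skills not in the catalog (user-created) are never touched.
// readLocalVersion/writeSkill are stand-ins for the real fs helpers.
async function syncFromCatalog(
  catalog: CatalogSkill[],
  readLocalVersion: (id: string) => Promise<string | null>,
  writeSkill: (skill: CatalogSkill) => Promise<void>,
): Promise<void> {
  for (const skill of catalog) {
    const localVersion = await readLocalVersion(skill.id);
    const remoteIsNewer =
      localVersion === null ||
      Number.parseFloat(skill.version) > Number.parseFloat(localVersion);
    if (remoteIsNewer) {
      await writeSkill(skill);
    }
  }
}
```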
* fix: skip bundled skills already installed by partial remote seed
* chore: remove unreliable Content-Length check
* chore: remove size limit checks, fetch timeout is sufficient
* feat: add "Rewrite with AI" prompt refinement for scheduled tasks
Add a lightweight /refine-prompt endpoint that uses generateText to
rewrite rough scheduled task prompts into clear, actionable instructions.
The UI adds a sparkle-icon button next to the Prompt label in the
NewScheduledTaskDialog with loading state, undo support, and disabled
state when the textarea is empty.
* fix: clear stale undo ref on dialog re-open and pass providerId to refinePrompt
- Reset originalPromptRef when dialog opens and on form submit to
prevent stale "Undo rewrite" button on re-open
- Accept optional providerId in refinePrompt() so the form's selected
provider is used for refinement instead of always the system default
* fix: hide undo rewrite link while refinement is in flight
* fix: reset isRefining state on dialog re-open
* fix: ignore stale refine-prompt responses after dialog re-open
Use a request generation counter so that if the dialog is closed and
re-opened while a rewrite is in flight, the stale response is silently
discarded instead of overwriting the fresh form state.
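The generation-counter pattern in a React-flavored sketch; hook and field names are illustrative:

```ts
import { useRef, useState } from "react";

// Every refine request captures the current generation; reopening the dialog
// bumps the counter, so late responses from a previous session are dropped
// instead of overwriting fresh form state.
function useRefinePrompt(refine: (prompt: string) => Promise<string>) {
  const refineRequestIdRef = useRef(0);
  const [isRefining, setIsRefining] = useState(false);

  const invalidate = () => {
    refineRequestIdRef.current += 1; // call on dialog open
    setIsRefining(false);
  };

  const run = async (prompt: string, onResult: (text: string) => void) => {
    const requestId = ++refineRequestIdRef.current;
    setIsRefining(true);
    try {
      const result = await refine(prompt);
      if (requestId !== refineRequestIdRef.current) return; // stale, discard silently
      onResult(result);
    } finally {
      if (requestId === refineRequestIdRef.current) setIsRefining(false);
    }
  };

  return { run, invalidate, isRefining };
}
```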
* fix: invalidate stale refine requests on dialog reopen and rename to kebab-case
- Increment refineRequestIdRef on dialog open so in-flight requests
from a previous session are discarded when they complete
- Rename refinePrompt.ts to refine-prompt.ts per CLAUDE.md file naming
* feat: add voice input to agent chat sidebar
Allow users to record voice and transcribe to text in the chat input.
Mic button shows when input is empty, waveform visualizer during recording,
transcription via OpenAI (llm.browseros.com/api/transcribe).
- Extract shared useVoiceInput hook to lib/voice/
- Time-domain waveform bars that bounce per-frequency-band
- Bar height capped to fit input container
- Analytics events for recording lifecycle
* fix: address review — add fetch timeout, await stopRecording, deduplicate VoiceInputState
- Add AbortSignal.timeout(30s) to transcription fetch
- Await stopRecording() and track analytics after completion
- Export VoiceInputState from useVoiceInput, import in consumers
* fix: await startRecording before tracking, narrow SurveyChat effect deps
- Await startRecording() so analytics only fires after mic permission granted
- Narrow SurveyChat useEffect dependency from [voice] to [voice.transcript, voice.isTranscribing]
* fix: analytics only tracks on success, clean up stream on failure, type API response
- startRecording returns boolean; track(RECORDING_STARTED) only fires on success
- Catch block cleans up MediaStream tracks and AudioContext on partial failure
- Type transcription API response with TranscribeResponse interface
* fix: keep mic button always visible alongside send button
Mic and send are now separate buttons, both always visible.
Mic is disabled while AI is streaming. Send is disabled during
recording/transcribing. Buttons are no longer absolutely positioned
inside the textarea — they sit beside it in the flex row.
* fix: keep mic button always visible inside input alongside send
Both mic and send buttons are always visible inside the input field,
positioned on the right side (ChatGPT-style). Mic is disabled while
AI is streaming. Send is disabled during recording/transcribing.
* fix: remove unreachable CSS branch in recording waveform div
* feat: add CDP UI inspector script for dev self-testing
* fix: address code review feedback for inspect-ui script
- Use Delete key (not Backspace) to match server's keyboard.ts clearField
- Add windowId resolution to open-sidepanel (chrome.sidePanel.open requires it)
- Make target matching case-insensitive
- Replace process.exit(1) in eval with thrown error for proper cleanup
- Add comment referencing DEV_PORTS source of truth
* docs: add self-testing workflow for UI changes via CDP inspector
* fix: runtime fixes for inspect-ui discovered during live testing
- Remove Input.enable (domain has no enable method)
- Add DOM.getDocument before DOM operations (required by protocol)
- Use BrowserOS-specific sidePanel.browserosToggle API instead of
standard chrome.sidePanel.open (side panel starts disabled)
- Enable side panel with setOptions before toggling
* feat: add test-ui skill for visual testing of agent extension UI
Adds a Claude Code skill that lets the agent visually test both
surfaces of the BrowserOS extension:
- New tab page (app.html) — left sidebar with Home, Scheduled Tasks,
Settings, Skills, Memory, Soul, Connect Apps
- Right side panel (sidepanel.html) — chat interface
Includes all gotchas discovered through real testing: randomized ports,
fresh profile onboarding redirect, stale element IDs after navigation,
BrowserOS-specific sidePanel APIs, DOM.getDocument requirement.
* feat: add press_key, scroll, hover, select_option, wait_for to inspect-ui
Brings inspect-ui.ts to parity with server's MCP input tools:
- press_key: key combos like Enter, Control+A, Meta+Shift+P
(ported from keyboard.ts pressCombo)
- scroll: up/down/left/right with configurable amount
- hover: hover over element by ID for tooltip/hover state testing
- select_option: select dropdown option by value or visible text
(ported from browser.ts selectOption)
- wait_for: poll for text or CSS selector with 10s timeout
Updated skill documentation with new commands and examples.
* docs: prefer snapshot over screenshot, add holistic debugging guidance
- Add snapshot vs screenshot guidance table — prefer snapshot for
structural checks, screenshot only for visual/layout verification
- Add server log checking instructions ([agent], [server], [build] tags)
- Add JS error checking via eval
- Add API connectivity verification
- Add common issues troubleshooting table
- Update all examples to use snapshot as default verification
* fix: address Greptile review feedback
- Replace process.exit(1) with process.exitCode + return in cmdWaitFor
to allow async CDP cleanup in finally blocks
- Fix cmdScroll enabling Runtime instead of Page domain
- Add BROWSEROS_EXTENSION_ID env var override for extension ID
- Align CLAUDE.md dev server command with SKILL.md canonical command
take_snapshot only used the AX tree, which misses custom components
(cursor:pointer divs, onclick handlers, etc.) that lack ARIA roles.
These elements appeared as role="generic" and were invisible to the agent.
Changes:
- Merge findCursorInteractiveElements into snapshot() so take_snapshot
catches cursor:pointer, onclick, and tabindex elements
- Add DisclosureTriangle to INTERACTIVE_ROLES for <summary> elements
- Use aria-label as text fallback in cursor detection for icon-only buttons
- Fix dedup bug in enhancedSnapshot that was silently dropping all
cursor-detected elements by checking against all AX node IDs instead
of only already-included output IDs
- Add hover_at, type_at, drag_at coordinate tools to server
- Add hoverAt, typeAt, dragAt methods to Browser class
- Export server internals (browser, tool-loop, registry) for eval imports
- Copy eval app from enterprise repo with agents, graders, runner, dashboard
- Nest eval-targets inside apps/eval
- Adapt sessionExecutionDir → workingDir for current server API
- Add biome ignore for dashboard HTML to prevent lint breaking onclick handlers
* feat: add get_console_logs tool to surface browser console output
Captures Runtime.consoleAPICalled, Runtime.exceptionThrown, and
Log.entryAdded CDP events per page with a FIFO ring buffer (500 entries).
- ConsoleCollector: per-page buffers with O(1) session routing via Map lookup
- Session-aware CDP event dispatching (onSessionEvent) in CdpBackend
- Log.enable() added alongside Runtime.enable() in attachToPage
- Single tool with level hierarchy, text search, limit, and clear params
- Buffer clears on main-frame navigation, cleaned up on page close
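A minimal sketch of the per-page ring buffer idea; the 500-entry cap and Map-based session routing come from the commit, while the entry shape and method names are illustrative:
```ts
// Hypothetical sketch of the per-page FIFO ring buffer described above.
type ConsoleEntry = { level: string; text: string; timestamp: number };

const MAX_ENTRIES = 500;

class ConsoleBuffer {
  private entries: ConsoleEntry[] = [];

  push(entry: ConsoleEntry): void {
    this.entries.push(entry);
    // FIFO eviction: drop the oldest entry once the cap is exceeded.
    if (this.entries.length > MAX_ENTRIES) this.entries.shift();
  }

  query(level?: string, search?: string, limit = 50): ConsoleEntry[] {
    return this.entries
      .filter((e) => (!level || e.level === level) && (!search || e.text.includes(search)))
      .slice(-limit);
  }

  clear(): void {
    this.entries = [];
  }
}

// O(1) session routing: CDP sessionId -> page buffer.
const buffersBySession = new Map<string, ConsoleBuffer>();
```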
* fix: address review — handle session re-attach, remove dead code
- ConsoleCollector.attach() now updates session mapping on re-attach
instead of early-returning, preventing silent event drops after
target detach/re-attach (e.g. tab crash, cross-process navigation)
- Remove unused clearConsoleLogs() and ConsoleCollector.clear()
* feat: add per-task LLM provider selection for scheduled tasks
Allow users to choose which AI provider a scheduled task runs with,
using the same ChatProviderSelector component from the new-tab page.
Falls back to the global default provider when none is selected or
if the selected provider has been deleted.
* fix: lint issues
* chore: updated to latest schema.graphql file
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
The AI SDK can produce assistant messages with empty parts (parts:[]) when
a stream is aborted, and providers reject assistant messages with empty text
content. This adds a validation utility that filters both cases before
sending messages to createAgentUIStreamResponse and when persisting them.
Mintlify deploys docs by cloning the repo but does not run `git lfs
pull`. The `.gitattributes` rule `docs/images/** filter=lfs` caused
all doc images to be stored as ~130-byte LFS pointer files, which
Mintlify served as-is — breaking every image on the site.
Removing the LFS rule and re-adding the files as regular git blobs
fixes all images without changing any paths or MDX files.
Also fixes broken Slack link placeholder in troubleshooting page.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Images in docs/images/ are served as broken 130-byte placeholders by
Mintlify CDN. Co-locating images with the MDX file (matching the
working pattern in features/workflow/ and features/cowork/) bypasses
this issue. Also fixes the Slack link placeholder.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: fallback to default BrowserOS provider when provider is null
When the extension first loads, provider config is loaded async from
storage. If a chat request fires before loading completes (race
condition), provider is null and the server receives provider: undefined,
causing a Zod validation error. This adds a fallback to
createDefaultBrowserOSProvider() in both chat paths (sidepanel and
scheduled tasks) so provider.type is always defined.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: fallback to first provider when default provider ID is stale
When defaultProviderId in storage doesn't match any loaded provider
(e.g. after Kimi/Moonshot rollout), selectedProvider was null causing
provider: undefined in chat requests. Now falls back to providers[0].
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: repair stale defaultProviderId in storage on load
When the stored default provider ID doesn't match any loaded provider,
write back the corrected ID (providers[0].id) to storage so it doesn't
silently persist across sessions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Comment out non-working Canva and Exa integrations from the OAuth MCP
servers list and remove their imports/icon mappings from the UI.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: replace rate limit CTAs with Kimi/Moonshot partnership links
Comment out old "Learn more" and "take a quick survey" links on the
daily limit error banner. Replace with Kimi API key docs link and
direct Moonshot AI platform link for conversion tracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove partnership tagline from rate limit banner
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Docs link in the settings sidebar was using the Info icon (circle
with "i"). Changed it to BookOpen which is the standard icon for
documentation links.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Track docs/images/** and docs/videos/** with Git LFS
- Add packages/browseros/build/tools/ to .gitignore
- Remove appimagetool-x86_64.AppImage from version control (downloaded on demand by build script)
* fix: scheduled task agent not using hidden window for new pages
The agent prompt only told the agent to pass windowId with `new_page`
but not `new_hidden_page`, which the agent prefers for background work.
The agent also had no instruction against closing or replacing its
dedicated hidden window, causing pages to scatter across uncontrolled
windows.
Expanded the scheduled task prompt rules to:
- Cover both `new_page` and `new_hidden_page` windowId requirement
- Forbid closing the dedicated hidden window
- Forbid creating new windows
- Added `new_hidden_page` to tool reference for MCP consumers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove duplicate hidden window creation from scheduled task frontend
The server's ChatService already creates a hidden window for scheduled
tasks (chat-service.ts:99-126), but the frontend (scheduledJobRuns.ts)
was also creating a minimized Chrome window that the server immediately
overwrote. This caused two windows to be created per scheduled task run,
with only one being used.
Removed from scheduledJobRuns.ts:
- chrome.windows.create() call
- 1-second race condition delay hack (FIXME)
- chrome.windows.remove() cleanup
- windowId/activeTab params to getChatServerResponse()
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump server version
* fix: remove dead getCdpToolReference and unused prompt exports
The getCdpToolReference function was always excluded by the AI SDK agent
(tool schemas are injected by the SDK itself) and never used by the MCP
server (which has its own MCP_INSTRUCTIONS). Also removes unused exports
getSystemPrompt and PROMPT_SECTION_KEYS.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump server version
* fix: move session dirs to ~/.browseros/sessions and update skill paths
Session directories now live under ~/.browseros/sessions/{conversationId}/
instead of executionDir/sessions/. Adds 30-day cleanup for stale sessions
at server startup. Updates 6 default skills to reference the working
directory instead of hardcoding ~/Downloads/.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
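A rough sketch of the 30-day startup sweep, assuming session directories sit directly under ~/.browseros/sessions/ as described above (helper name and error handling are illustrative):
```ts
import { readdir, stat, rm } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";

const SESSIONS_DIR = join(homedir(), ".browseros", "sessions");
const MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000; // 30 days

// Remove session directories whose mtime is older than 30 days.
async function cleanupStaleSessions(): Promise<void> {
  const entries = await readdir(SESSIONS_DIR, { withFileTypes: true }).catch(() => []);
  const now = Date.now();
  for (const entry of entries) {
    if (!entry.isDirectory()) continue;
    const dir = join(SESSIONS_DIR, entry.name);
    const { mtimeMs } = await stat(dir);
    if (now - mtimeMs > MAX_AGE_MS) {
      await rm(dir, { recursive: true, force: true });
    }
  }
}
```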
* refactor: rename sessionExecutionDir to workingDir across server
Consistent naming for the per-conversation working directory:
- ResolvedAgentConfig.sessionExecutionDir → workingDir
- ToolDirectories.executionDir → workingDir
- resolveExecutionPath() → resolveWorkingPath()
- buildBrowserToolSet param: executionDir → workingDir
Server-level executionDir (DB, logs) unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review — restore emoji folder name, refresh session mtime
- Revert "Read Later" back to "📚 Read Later" to avoid creating
duplicate bookmark folders for existing users
- Touch session dir mtime on each message via utimes() so cleanup
correctly reflects last activity, not just directory creation time
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
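A tiny sketch of the mtime refresh, assuming Node's fs/promises utimes() as the commit describes (helper name illustrative):
```ts
import { utimes } from "node:fs/promises";

// Refresh the session directory's mtime so startup cleanup measures
// last activity rather than creation time.
async function touchSessionDir(sessionDir: string): Promise<void> {
  const now = new Date();
  await utimes(sessionDir, now, now);
}
```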
* fix: address PR review round 2 — remove dead executionDir, fix emoji
- Remove executionDir from ChatServiceDeps and ChatRouteDeps since
resolveSessionDir now uses getSessionsDir() directly
- Fix missed emoji in notification format template
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
safeSkillDir() used a hardcoded `/` in the startsWith path traversal
check. On Windows, path.resolve() returns backslash paths, so the check
always failed — blocking getSkill, createSkill, updateSkill, deleteSkill.
Replace `${skillsDir}/` with `${skillsDir}${sep}` using path.sep from
node:path, which returns `\` on Windows and `/` on POSIX.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
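A minimal sketch of the corrected guard; safeSkillDir and the path.sep prefix check are from the commit, the error message is illustrative:
```ts
import { resolve, sep } from "node:path";

// Cross-platform path traversal guard: path.resolve() yields backslash
// paths on Windows, so the prefix check must use path.sep, not "/".
function safeSkillDir(skillsDir: string, name: string): string {
  const resolved = resolve(skillsDir, name);
  if (!resolved.startsWith(`${skillsDir}${sep}`)) {
    throw new Error("Path traversal detected");
  }
  return resolved;
}
```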
* fix: persist default kimi hub provider to BrowserOS prefs on first load
When VITE_PUBLIC_KIMI_LAUNCH is enabled, loadProviders() returned default
Kimi provider in-memory but never saved it to the BrowserOS pref. The
browser's C++ code reads the pref directly and found it empty, so Kimi
didn't appear in the toolbar until the user manually edited and saved.
Now loadProviders() persists defaults and ensureKimiFirst() additions to
the pref, keeping the browser in sync with what the extension UI shows.
Fixes #428
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use reference equality for ensureKimiFirst change detection
Address PR review: reference check (normalized !== providers) is more
semantically precise than length comparison since ensureKimiFirst returns
the same reference when unchanged.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Return a friendly JSON response when users curl GET /mcp instead of
an opaque 503. Narrows the catch-all .all() to .post() since the MCP
Streamable HTTP transport only needs POST for stateless servers.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
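A hedged sketch of the route shape in Hono; the JSON body text and the handler stub are illustrative, not the server's actual response:
```ts
import { Hono, type Context } from "hono";

const app = new Hono();

// Placeholder for the real MCP request handler (illustrative only).
async function handleMcpRequest(c: Context): Promise<Response> {
  return c.text("MCP response placeholder");
}

// Friendly response for anyone who curls the endpoint directly.
app.get("/mcp", (c) =>
  c.json({
    message: "This is the BrowserOS MCP endpoint. Connect an MCP client over Streamable HTTP (POST).",
  })
);

// Streamable HTTP only needs POST for stateless servers, so use .post(), not .all().
app.post("/mcp", handleMcpRequest);
```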
* feat: add missing patches to features.yaml
Add 37 patch files from chromium_patches/ that were not tracked in
features.yaml. Creates 3 new features (cdp-api, vertical-tabs,
crash-reporter) and adds missing files to 3 existing features
(chromium-ui-fixes, side-panel-fixes, first-run).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test: split sparkle third-party from mac-sparkle-updater
Move third_party/sparkle/ into its own feature since the Sparkle
framework is downloaded on-the-fly during build, not a permanent
patch in the tree.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: minor
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Skills page navigation is now hidden when the server version is below
0.0.73, matching the gating pattern used for Memory, Soul, and Workflows.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: move skills into main page navigation
Mirror the soul move pattern (166f6e1b) — promote Skills from
settings sidebar to primary navigation at /home/skills. Adds
backward-compat redirect from /settings/skills.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove missing dismiss-popups skill reference
The SKILL.md file doesn't exist on disk, causing a module
resolution error at server startup.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: bootstrap 12 default agent skills for new users
Seed common browser automation skills (summarize, research, extract data,
fill forms, dismiss popups, screenshots, organize tabs, compare prices,
save page, monitor changes, read later, manage bookmarks) into
~/.browseros/skills/ on first startup when no user skills exist.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: populate skill edit dialog with existing content
The edit dialog form fields were empty because Radix Dialog's
onOpenChange doesn't fire when the open prop changes programmatically.
Replace the handleOpenChange wrapper with a useEffect that syncs form
state whenever editingSkill changes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
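A minimal sketch of the useEffect-based sync, with a simplified Skill shape (hook name and fields are illustrative):
```ts
import { useEffect, useState } from "react";

type Skill = { name: string; description: string; content: string };

// Sync local form state whenever the skill being edited changes, instead of
// relying on Radix Dialog's onOpenChange (which does not fire when the
// `open` prop is changed programmatically).
function useSkillForm(editingSkill: Skill | null) {
  const [form, setForm] = useState({ name: "", description: "", content: "" });

  useEffect(() => {
    if (editingSkill) {
      setForm({
        name: editingSkill.name,
        description: editingSkill.description,
        content: editingSkill.content,
      });
    }
  }, [editingSkill]);

  return [form, setForm] as const;
}
```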
* fix: correct tool names in default skill instructions
- memory_save → memory_write (actual tool name in memory toolset)
- delete_bookmark → remove_bookmark (actual tool name in registry)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: move skill content from TS template literals to separate SKILL.md files
Replace the monolithic defaults.ts (738-line file with escaped template
literals) with individual SKILL.md files per skill. Uses Bun's text
import (`with { type: 'text' }`) to inline content at bundle time.
Adds md.d.ts for TypeScript module resolution.
Much easier to read and edit skill content as plain markdown.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
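A small sketch of the Bun text-import pattern plus the ambient declaration; file paths and skill names are illustrative:
```ts
// md.d.ts - lets TypeScript resolve .md imports as strings
declare module "*.md" {
  const content: string;
  export default content;
}

// skills/index.ts - Bun inlines the markdown at bundle time
import summarizeSkill from "./summarize/SKILL.md" with { type: "text" };

export const defaultSkills = [
  { id: "summarize", content: summarizeSkill },
];
```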
* feat: add build:server:test and start:server:test scripts for local binary testing
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: refresh agent skills settings UI
* fix: address PR review comments for 0311-skills_ui_refresh
* feat: enhance default skills with file persistence, HTML reports, and add find-alternatives
Rewrite deep-research, extract-data, compare-prices, manage-bookmarks, and
read-later skills to follow a structured phase-based workflow. Key changes:
- All research skills now save data incrementally to disk instead of
accumulating in memory
- Add HTML report generation (light theme) with source links for
deep-research, extract-data, and compare-prices
- Use hidden windows and parallel tabs (max 10) for multi-source extraction
- Simplify read-later to just bookmark + PDF save
- Simplify manage-bookmarks to max 3-5 top-level folders with confirmation
- Add new find-alternatives skill for product alternative research with
1-5 star ranking
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: simplify skills page rendering
* fix: clean-up skill
* fix: address review feedback for PR #478
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add core memory viewer and editor to newtab
Adds a new Memory page (/home/memory) that lets users view and
inline-edit their agent's core memories (CORE.md). Includes server
API endpoints (GET/PUT /memory) with Zod validation, React Query
hook with optimistic updates, and example prompts to teach the
agent through conversation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: improve memory examples with browser-aware prompts
Replace tech-specific examples with universal ones that leverage
the agent's browser tools — learning from bookmarks, summarizing
browsing history, reading open tabs, and setting communication
preferences.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: hide focus grid on memory page, same as soul page
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: reword history example to understand user, not just summarize
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: reset stale save error on edit/cancel, surface server errors
Address Greptile review:
- Reset mutation state in handleEdit/handleCancel/handleCreate to
prevent stale error from reappearing on re-entry to edit mode
- Parse server response body on save failure to show actual error
message (e.g. Zod validation) instead of generic "Failed to save"
* fix: cap memory viewer height with internal scroll
Long CORE.md content now scrolls within the card (max 480px) instead
of expanding the entire page. Applies to both read and edit modes.
* fix: polish memory viewer scroll UX
- Use viewport-relative max height (60vh) instead of fixed 480px
- Add styled-scrollbar for thin, themed scrollbar in both modes
- Add bottom fade gradient to hint at more content below
- Fixes width misalignment caused by system scrollbar stealing space
* feat: customize agent personality
* fix: reset soul with right types
* chore: use rpc client for setting personality
* fix: validation for new endpoint
* fix: compaction config for small context windows (≤32K)
Raise COMPACTION_SMALL_CONTEXT_WINDOW from 16K to 32K so models like
Haiku 4.5 (30K context) use proportional 50% reserve instead of the
fixed 20K reserve. Also scale fixedOverhead for small contexts (capped
at 40% of context window) to prevent the doom loop where overhead alone
triggers compaction on every step.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
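A back-of-the-envelope sketch of the small-context rules described above; constant names follow the commit, but the numbers shown are illustrative rather than the values in AGENT_LIMITS:
```ts
// Illustrative numbers; the real values live in AGENT_LIMITS.
const COMPACTION_SMALL_CONTEXT_WINDOW = 32_000;
const COMPACTION_FIXED_OVERHEAD = 12_000;

function computeReserve(contextWindow: number): { reserve: number; overhead: number } {
  if (contextWindow <= COMPACTION_SMALL_CONTEXT_WINDOW) {
    // Small models: 50% proportional reserve and overhead capped at 40% of the
    // window, so overhead alone can never trigger compaction on every step.
    return {
      reserve: Math.floor(contextWindow * 0.5),
      overhead: Math.min(COMPACTION_FIXED_OVERHEAD, Math.floor(contextWindow * 0.4)),
    };
  }
  // Larger models keep the fixed reserve and overhead.
  return { reserve: 20_000, overhead: COMPACTION_FIXED_OVERHEAD };
}
```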
* docs: add compaction tuning guidance to limits constants
Explain the relationship between SMALL_CONTEXT_WINDOW and
FIXED_OVERHEAD so devs know the 24K minimum constraint when
tweaking these values.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add window focus listener in ChatFooter that focuses the textarea when
the side panel receives focus. Handles both initial open (via
document.hasFocus check on mount) and re-focus scenarios (via window
focus event). Guards against stealing focus from other interactive
elements.
Companion Chromium fix: side_panel_coordinator.cc now always calls
RequestFocus() in PopulateSidePanel(), not just when there's no
previous entry — ensuring the side panel WebContents receives focus
on every open/toggle.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
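A minimal sketch of the extension-side focus behavior, assuming a React hook owns the textarea ref (hook name and the busy-element check are illustrative):
```ts
import { useEffect, useRef } from "react";

// Focus the chat textarea when the side panel gains focus: on initial mount
// (if the document already has focus) and on later window focus events,
// without stealing focus from other interactive elements.
function useFocusOnPanelOpen() {
  const textareaRef = useRef<HTMLTextAreaElement>(null);

  useEffect(() => {
    const focusIfIdle = () => {
      const active = document.activeElement;
      const isBusy =
        active instanceof HTMLInputElement ||
        active instanceof HTMLTextAreaElement ||
        active instanceof HTMLSelectElement;
      if (!isBusy) textareaRef.current?.focus();
    };

    if (document.hasFocus()) focusIfIdle(); // initial open
    window.addEventListener("focus", focusIfIdle); // re-focus scenarios
    return () => window.removeEventListener("focus", focusIfIdle);
  }, []);

  return textareaRef;
}
```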
* feat: add 2-stage pruning to compaction pipeline before LLM summarization
Add two new lightweight stages to the compaction prepareStep pipeline that
recover context tokens cheaply before falling back to expensive LLM
summarization:
- Stage 2: Use AI SDK's pruneMessages to remove old tool call/result
pairs beyond the last 6 messages entirely
- Stage 3: Replace remaining tool output values with short placeholders
("[Cleared — N chars]") while preserving tool call structure and IDs
Both stages re-estimate tokens from message content (not stale step
usage) after modifying messages. The existing LLM summarization and
sliding window fallback remain as Stage 4.
Also adds estimateTokensForThreshold() helper, clearToolOutputs()
function, and COMPACTION_PRUNE_KEEP_RECENT_MESSAGES /
COMPACTION_CLEAR_OUTPUT_MIN_CHARS constants.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
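A simplified sketch of the output-clearing stage in isolation; the message shape is reduced and the threshold value is illustrative, though the constant names and the placeholder format follow the commit:
```ts
// Simplified message shape; the real pipeline operates on AI SDK messages.
type ToolMessage = { role: "tool"; output: string };
type Message = ToolMessage | { role: "user" | "assistant"; content: string };

const COMPACTION_CLEAR_OUTPUT_MIN_CHARS = 1_000; // illustrative threshold

// Replace old tool outputs with short placeholders while keeping the most
// recent `keepRecentCount` tool messages intact.
function clearToolOutputs(messages: Message[], keepRecentCount = 2): Message[] {
  const toolIndexes = messages
    .map((m, i) => (m.role === "tool" ? i : -1))
    .filter((i) => i !== -1);
  const protectedIndexes = new Set(toolIndexes.slice(-keepRecentCount));

  return messages.map((m, i) => {
    if (m.role !== "tool" || protectedIndexes.has(i)) return m;
    if (m.output.length < COMPACTION_CLEAR_OUTPUT_MIN_CHARS) return m;
    return { ...m, output: `[Cleared — ${m.output.length} chars]` };
  });
}
```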
* fix: reorder compaction pipeline — truncate before clear, protect recent tools
- Stage 0: Check threshold, return untouched when under (no data loss)
- Stage 1: Prune old tool call/result pairs beyond last 6 messages
- Stage 2: Truncate large tool outputs to 15K chars (keeps partial content)
- Stage 3: Clear old tool outputs with placeholders, protect last 2
- Stage 4: LLM-based compaction with sliding window fallback
clearToolOutputs now accepts keepRecentCount parameter (default 2) to
skip the N most recent tool messages from clearing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: limits fixes
* fix: address review — preserve toKeep context, derive test values from constants
- When Stage 3 (clearToolOutputs) doesn't resolve overflow, pass
truncated (not cleared) messages to Stage 4 so toKeep retains
meaningful tool outputs for the agent's immediate context
- Add comment explaining intentional conservatism in post-prune
token estimation (step usage is stale, must re-estimate safely)
- Refactor computeConfig tests to derive expected values from
AGENT_LIMITS constants instead of hardcoding magic numbers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The system prompt referenced `browser_open_tab` which was renamed to
`new_page`. This caused models to infer a `browser_*` naming convention
and call non-existent tools like `browser_navigate`, resulting in
MCP error -32602.
Fixes TKT-540
Add changelog entry for BrowserOS v0.42.0 featuring SOUL.md, vertical tabs,
long-term memory, and Chromium 146 update. Include screenshots from the
GitHub release.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: new tools for breadcrumbs
* feat: setup scheduled task card
* feat: added dismiss cooldown
* chore: update prompt
* fix: support api key tool
* fix: prompt text to limit nudges
* fix: scheduled tasks card
* fix: update nudges prompt
* feat: skip nudges when user dismisses nudge
* fix: ensure nudges only show if they are not dismissed
* Revert "fix: ensure nudges only show if they are not dismissed"
This reverts commit d825254698829b8e9941aae7873bd440027d0c74.
* Revert "feat: skip nudges when user dismisses nudge"
This reverts commit 12b552b454d10ec4209b88668fc48681423ff6fc.
* Revert "fix: update nudges prompt"
This reverts commit 80b7520b953b4d3cbed2ed477b9e508e39938dca.
* feat: update agent with mcp when new mcp connection is added
* feat: created connect apps option as a blocking card system
* feat: schedule tasks passive without dismiss
* fix: nudges and prompt texts
* fix: biome lint errors
* fix: review comments
* fix: resolve comments
* fix: review comments
* fix: review comments
* fix: auto resolve state
* fix: eliminate the race where the async delete could resolve after the
new session is created
* feat: track ignored apps list
* fix: empty response text object on message reply
* feat: sync previously connected mcps
* feat: sync integrations with klavis
* feat: account for unauthenticated connections
* fix: analytics events
* fix: typescript issues
* fix: klavis client issue
* fix: invalid mcps causing entire responses to fail
* fix: prompt with card for integrations when the integration fails
* fix: prompt structure to support declined apps
* fix: refresh session on mcp changes
* feat: add agent skills system with catalog, loader, and UI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: return 500 for server errors in PUT/DELETE skill routes
Previously both handlers returned 404 for all errors, masking filesystem
failures (disk full, permission denied) as "not found". Now only
"not found" errors return 404; everything else returns 500.
* fix: align SKILL.md format with agentskills.io spec
- Move `enabled` and `version` into `metadata` field (spec only allows
name, description, license, compatibility, metadata, allowed-tools)
- Frontmatter `name` now matches directory name (lowercase kebab-case)
- Human-readable name stored in `metadata.display-name`
- Add index signature to SkillMetadata for arbitrary string keys
- Validate frontmatter with type guard in getSkill (remove unsafe cast)
- updateSkill now preserves existing frontmatter fields (license, etc.)
- Tighten buildSkillMd param from Record<string, unknown> to SkillFrontmatter
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- truncateToolOutputs: handle all output.type variants (text, json,
content) by checking output.value directly instead of branching on
type. The old code missed type 'content' (array of content parts),
causing 1M+ char tool results to pass through untouched.
- estimateTokens: change chars/4 to chars/3 — HTML/Markdown content
tokenizes at ~3.14 chars/token empirically, not 4.
- COMPACTION_FIXED_OVERHEAD: 5K → 12K to account for system prompt
(~2.5K tokens) + tool definitions as JSON Schema (~8-9K tokens).
- Apply truncateToolOutputs in prepareStep (Stage 0) before token
estimation, not just during summarization.
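A hedged sketch of the two heuristics in isolation; the types are simplified stand-ins for the AI SDK shapes and the 15K cap is illustrative:
```ts
// chars/3 heuristic from the commit: HTML/Markdown tokenizes at roughly
// 3.14 chars per token, so chars/4 systematically underestimates.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 3);
}

const MAX_TOOL_OUTPUT_CHARS = 15_000; // illustrative cap

// Truncate by output.value regardless of output.type ('text' | 'json' | 'content'),
// so content-part arrays are not silently skipped.
type ToolOutput = { type: "text" | "json" | "content"; value: unknown };

function truncateToolOutput(output: ToolOutput): ToolOutput {
  const serialized =
    typeof output.value === "string" ? output.value : JSON.stringify(output.value);
  if (serialized.length <= MAX_TOOL_OUTPUT_CHARS) return output;
  return {
    ...output,
    value: `${serialized.slice(0, MAX_TOOL_OUTPUT_CHARS)}... [truncated ${serialized.length} chars]`,
  };
}
```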
* fix: support artifact-extracted directory structure in OTA binary discovery
The download_resources system now extracts server binaries into
platform-specific subdirectories (e.g., darwin-arm64/resources/bin/),
but the OTA module only looked for flat binary names. This adds
find_server_binary() which checks both layouts, keeping backward
compatibility with --binaries while supporting the new structure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: download server binaries from R2 instead of requiring --binaries
Remove the --binaries flag from `ota server release`. The module now
downloads artifact zips from artifacts/server/latest/ in R2, extracts
them, then signs and packages as before. This eliminates the need to
have mono build output locally.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: robust compaction with Pi-style token counting + overflow middleware
Root cause: getCurrentTokenCount() returned stale inputTokens from the
previous step, ignoring new tool results added to messages since that
step. A large tool output (DOM snapshot, page content) caused a token
jump that bypassed the compaction threshold check, leading to
context_length_exceeded errors (322K tokens sent, model max 262K).
Layer 1 — Accurate token counting (proactive):
- Adopt Pi coding agent's additive approach: base(inputTokens) +
outputTokens + estimate(trailing tool results)
- Trailing tool results are estimated by walking backwards from end of
messages array until a non-tool message is found
- Falls back to full estimation with safety multiplier when no real
usage data is available (first step of a turn)
Layer 2 — Context overflow middleware (reactive):
- LanguageModelV3Middleware that wraps doGenerate/doStream
- Catches context_length_exceeded errors at the model call level
- Truncates prompt (keeps system messages + most recent non-system
messages targeting 60% of context window)
- Retries the model call once
Verified end-to-end with real model (Gemini Flash Lite via OpenRouter)
on 16K context window: 4 compactions triggered correctly across 8
steps, no context_length_exceeded errors.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
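A simplified sketch of the additive counting, with stand-in message and usage types (the safety multiplier value is illustrative):
```ts
// Simplified shapes; the real code uses AI SDK message and usage types.
type Msg = { role: "tool" | "user" | "assistant" | "system"; text: string };
type Usage = { inputTokens: number; outputTokens: number } | undefined;

const estimate = (text: string) => Math.ceil(text.length / 3);
const SAFETY_MULTIPLIER = 1.2; // illustrative

// base(inputTokens) + outputTokens + estimate(trailing tool results):
// walk backwards from the end of the message list and only estimate the
// tool results that arrived after the last real usage report.
function currentTokenCount(messages: Msg[], lastUsage: Usage): number {
  let trailingToolTokens = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role !== "tool") break;
    trailingToolTokens += estimate(messages[i].text);
  }

  if (!lastUsage) {
    // First step of a turn: no real usage yet, estimate everything with a margin.
    const total = messages.reduce((sum, m) => sum + estimate(m.text), 0);
    return Math.ceil(total * SAFETY_MULTIPLIER);
  }

  return lastUsage.inputTokens + lastUsage.outputTokens + trailingToolTokens;
}
```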
* fix: adopt Pi-style overflow detection patterns + fix truncation edge case
- Replace 6 generic substring matches with 17 provider-specific regex
patterns from Pi coding agent (Anthropic, OpenAI, Google, xAI, Groq,
OpenRouter, Bedrock, Copilot, llama.cpp, LM Studio, MiniMax, Kimi,
Mistral, z.ai)
- Fix truncatePrompt edge case: when the last message alone exceeds the
target, keepFrom was never updated → empty non-system messages. Now
always keeps at least the most recent non-system message.
- Add runtime guard for LanguageModelV3 cast in ai-sdk-agent.ts
- Add tests for false-positive rejection and truncation edge case
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The Kimi K2.5 model supports a 256,000 token context window, not
128,000. Updated the provider template and model config to reflect
the correct value.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: return element coordinates in tool responses and DPR in screenshots
- click, hover, fill, drag now return resolved coordinates in response text
- take_screenshot returns devicePixelRatio for mapping coordinates to pixels
- Coordinates are in CSS pixels; multiply by DPR to get screenshot pixels
* fix: use Promise.allSettled in screenshot to prevent DPR eval from aborting capture
Runtime.evaluate for devicePixelRatio can fail on PDF pages or
chrome-extension pages. Using Promise.allSettled ensures the screenshot
still succeeds, falling back to DPR=1.
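A minimal sketch of the allSettled pattern, with the screenshot capture and DPR read abstracted behind callbacks (names are illustrative):
```ts
// Capture the screenshot and read devicePixelRatio in parallel; if the
// Runtime.evaluate for DPR fails (e.g. on PDF or chrome-extension pages),
// fall back to DPR = 1 instead of failing the whole capture.
async function screenshotWithDpr(
  capture: () => Promise<string>,
  readDpr: () => Promise<number>
): Promise<{ imageBase64: string; devicePixelRatio: number }> {
  const [shot, dpr] = await Promise.allSettled([capture(), readDpr()]);
  if (shot.status === "rejected") throw shot.reason;
  return {
    imageBase64: shot.value,
    devicePixelRatio: dpr.status === "fulfilled" ? dpr.value : 1,
  };
}
```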
* feat: gate Moonshot AI provider behind VITE_PUBLIC_KIMI_LAUNCH flag
Hide all Moonshot/Kimi provider UI when the launch flag is off:
- Filter moonshot from provider templates and type dropdown
- Gate Kimi flare badges in HubProviderRow
- Gate Kimi auto-insertion in LLM hub storage
- Add analytics events for Kimi API key configuration and guide clicks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: allow editing existing moonshot providers when launch flag is off
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add search provider settings page with 5 engine options
Allow users to select their preferred search engine (Google, DuckDuckGo,
Bing, Brave Search, Yahoo) from a new settings page. The selected provider
drives search suggestions, search URL navigation, placeholder text, and
analytics tracking. Replaces all hardcoded Google references with the
stored preference. Adds Brave Search support, replacing Yandex.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add error handling for search provider storage writes
Write to storage before updating React state so UI never diverges from
persisted value on failure. Add try/catch in the settings page to show
an error toast if the write fails.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: migrate stale 400k context window for browseros provider
Existing installations cached the old 400k default in extension storage.
Always normalize the browseros provider's contextWindow to 200k on load,
matching the current default and preventing compaction from failing.
* fix: add browseros-auto model with 200k context length
* fix: setup migrations using the migrations api for context window size
---------
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
* fix: anchor agent to active tab page ID from browser context
Generalize the scheduled-task page anchoring instruction to all tasks.
The agent now always uses the page ID from Browser Context instead of
calling get_active_page or list_pages, preventing it from operating
on the wrong tab.
* fix: add chatMode guard and scope windowLine to scheduled tasks
- Skip page-context section in chat mode where list_pages is allowed
- Only show windowId instruction for scheduled tasks (hidden window)
The app icon was oversized in the macOS Dock because the source icon
filled the entire 1024x1024 canvas with no padding. Apple's macOS Big
Sur+ HIG requires ~100px padding on each side (artwork at 824x824
within 1024x1024 canvas). Resized the source icon and regenerated all
platform icons.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: integrate models.dev registry for auto-populated model defaults
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: fall back to upstream provider for model registry lookup
When the browseros meta-provider is used, the registry lookup now
also tries the upstream provider (e.g., openrouter, anthropic) so
that BrowserOS-hosted models get correct context window and image
support defaults.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add Object.hasOwn guards to prevent prototype chain lookup
Addresses Greptile review: bracket notation on the registry object
could return prototype-chain properties for keys like __proto__ or
constructor, bypassing the 404 guard in the route handler.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
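A tiny sketch of the own-property guard (function name illustrative):
```ts
// Bracket access on a plain object can resolve prototype-chain keys such as
// "__proto__" or "constructor"; Object.hasOwn restricts lookups to the
// registry's own entries so the 404 guard stays sound.
function lookupModel<T>(registry: Record<string, T>, key: string): T | undefined {
  return Object.hasOwn(registry, key) ? registry[key] : undefined;
}
```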
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add browseros-cli Go CLI for browser automation
Implements a full-featured CLI that communicates with the BrowserOS MCP
server over JSON-RPC 2.0 / StreamableHTTP. Covers all 54 MCP tools across
10 categories with a hybrid command structure (flat verbs for hot-path
commands, grouped noun-verb for resource management).
- MCP client with initialize + tools/call pattern, thread-safe request IDs
- Dual output: human-readable default, --json for structured/piped usage
- Implicit active page resolution with --page override
- 21 command files: open, nav, snap, click, fill, scroll, eval, ss, pdf,
dom, wait, dialog, pages, window, bookmark, history, group, health, info
- Cobra CLI framework with fatih/color for terminal formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test: add end-to-end integration tests for browseros-cli
Go integration tests gated by `//go:build integration` that exercise the
CLI binary against a running BrowserOS server. Tests build the binary,
run commands via exec.Command, and verify JSON output.
Covers: health, version, page lifecycle (open → text → snap → eval →
screenshot → nav → reload → close), active page, info, error handling,
and invalid page ID rejection. Skips gracefully when no server is running.
Run with: go test -tags integration -v ./...
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add init command and fix MCP client bugs
- Add `browseros-cli init` command that prompts for the server URL,
verifies connectivity, and saves to ~/.config/browseros-cli/config.json
- Config priority: --server flag > BROWSEROS_URL env > config file > default
- Fix Accept header: include text/event-stream (required by StreamableHTTPTransport)
- Fix nil args: send empty object {} instead of null for tools with no params
- Update error messages to suggest `browseros-cli init` on connection failure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add README for browseros-cli with setup, usage, and testing guide
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: always send arguments object in MCP tools/call
Go's json omitempty omits empty maps, causing the arguments field to be
missing from tools/call requests. The MCP SDK requires arguments to be
an object (even empty {}), not undefined. Remove omitempty from the tag.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: update help menu to have groups
* refactor: replace hand-rolled MCP client with official Go SDK
Switch from custom JSON-RPC implementation to the official
github.com/modelcontextprotocol/go-sdk. This removes all hand-rolled
protocol types (jsonrpcRequest, jsonrpcResponse, RPCError, etc.) and
uses the SDK's StreamableClientTransport with DisableStandaloneSSE
for clean CLI process lifecycle.
Also adds URL normalization/validation, config command, and
updates init/README to reference YAML config.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add server-level instructions that get injected into the LLM system
prompt when external MCP clients (Claude Desktop, Cursor, Gemini CLI)
connect. Covers browser automation workflow, Klavis integration
discovery, and auth flow guidance.
* feat: add inline chat experience to new tab page
Bring the full sidepanel chat experience to the new tab page. When
users select an AI suggestion from the search bar, the page transitions
inline to a full chat view instead of opening the sidepanel.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove unnecessary comments from NewTab.tsx
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review comments
- Move NEWTAB_CHAT_STARTED_EVENT tracking to startInlineChat where it
actually fires (was dead code in NewTabChat handleSubmit)
- Add NEWTAB_CHAT_RESET_EVENT tracking to handleNewConversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: gate newtab chat behind NEWTAB_CHAT_SUPPORT feature flag
When the flag is off (BrowserOS < 0.40.0), falls back to opening the
sidepanel via openSidePanelWithSearch (previous behavior). In dev mode
all features are enabled, so inline chat works during development.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add newtab origin context to chat system prompt
When chatting from the new tab page, the AI is instructed to open
content in new tabs rather than navigating the current tab, keeping
the user's new tab page accessible.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The AI SDK agent (v2) was allowing all 54 browser tools in chat mode,
while the Gemini agent correctly restricted to 6 read-only tools.
Extract CHAT_MODE_ALLOWED_TOOLS to a shared constant and filter
browser tools in AiSdkAgent.create() when chatMode is true.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
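A minimal sketch of the shared allow-list filter; the entries shown are examples only, not the actual six read-only tools:
```ts
// Example entries only; the real CHAT_MODE_ALLOWED_TOOLS lists the 6 read-only tools.
const CHAT_MODE_ALLOWED_TOOLS = new Set(["get_page_content", "take_screenshot", "list_pages"]);

function filterToolsForChatMode<T>(
  tools: Record<string, T>,
  chatMode: boolean
): Record<string, T> {
  if (!chatMode) return tools;
  return Object.fromEntries(
    Object.entries(tools).filter(([name]) => CHAT_MODE_ALLOWED_TOOLS.has(name))
  );
}
```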
* feat: expose Klavis MCP tools to external MCP clients
Connect to Klavis Strata at server startup and register discovered tools
on each per-request McpServer instance. This lets external MCP clients
(Claude Code, Gemini CLI) access Klavis-proxied integrations (Gmail,
Slack, GitHub, etc.) alongside browser tools.
- Add register-klavis-mcp.ts with connectKlavisProxy() and registerKlavisTools()
- Wire KlavisProxyHandle through server.ts -> mcp routes -> mcp-server
- Use structured logging and proper type imports
* fix: forward Klavis tool schemas and add shutdown cleanup
- Use zod-from-json-schema to convert Strata's JSON Schema to Zod,
so MCP clients see proper parameter names, types, and required fields
- Close Klavis proxy transport on server shutdown
- Move per-request Klavis tool registration logging to debug level
- Use proper type imports instead of inline import() types
- Fix connectKlavisProxy return type (never returns null)
* fix: add timeout to Klavis MCP connect/listTools and log shutdown errors
* fix: clear timeout timer and pre-compute Klavis tool schemas at startup
* fix: use client.close() instead of transport.close() for proper cleanup
* feat: update to 146, fix clean
* fix: update all 16 failed patches for Chromium 146.0.7680.31
- Update BASE_COMMIT to 4d3225104176d (Chromium 146)
- Shift BrowserOS command IDs to avoid upstream 40300-40302 conflict
- Fix settings BUILD.gn and menu patches for upstream removals
- Shift syncable prefs IDs to 100379-100380 after upstream additions
- Migrate theme patch from theme_service_factory.cc to theme_service.cc
(RegisterProfilePrefs moved upstream)
- Fix toolbar_actions_model.cc for upstream API changes
- Fix toolbar_pref_names.cc for upstream base::ListValue usage
- Fix ui_features.cc/.h for removed kPopupBrowserUseNewLayout
- Fix api_sources.gni for new upstream entries
- Shift infobar delegate ID to 132
- Shift extension histogram values by +4 (1961-1985)
- Shift api_permission_id kBrowserOS to 265
- Update histogram enums.xml to match shifted values
- Delete chromium_install_modes.cc patch (file removed in 146)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: enable vertical tabs
* feat: default browseros theme
* chore: bump PATCH and OFFSET
* fix: update extensions-manifestv2 series patch for Chromium 146
Regenerated the patch from a clean diff against 146.0.7680.31 to fix
line number offsets and context mismatches in extensions_ui.cc.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update vertical_tab_strip_state_controller patch for Chromium 146
Upstream refactored includes and renamed NotifyStateChanged to
NotifyModeChanged. Regenerated patch with correct context.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update default theme to neutral gray (136,136,136)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: rename base::Value::Dict/List to base::DictValue/ListValue for Chromium 146
Chromium 146 moved base::Value::Dict and base::Value::List to top-level
classes base::DictValue and base::ListValue. Updated all 23 patch files.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: regenerate browseros_prefs.cc patch (fix corrupt trailing newline)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update patches for Chromium 146 build API changes
- browseros_action_utils.h: remove nonexistent base/containers/contains.h include
- chrome_content_browser_client.cc: PrivateNetworkRequestPolicyOverride → LocalNetworkAccessRequestPolicyOverride
- extension_updater.cc: InstallStageTracker::Get → InstallStageTrackerFactory::GetForBrowserContext
- toolbar_actions_model.cc: base::Contains → std::ranges::contains
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add SOUL_SUPPORT feature flag to capabilities system requiring
minServerVersion 0.0.67. Hides "Agent Soul" nav item in settings
sidebar for older servers that lack the /soul endpoint.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
## Summary
- Add `VITE_PUBLIC_KIMI_LAUNCH` feature flag controlling Kimi partnership branding
- BrowserOS provider card shows "Powered by Kimi K2.5 from Moonshot AI" badge and "Extended usage limits for the next 2 weeks!" when flag is on
- Moonshot/Kimi highlighted as "Recommended" in provider templates
- LLM Hub defaults to Kimi, ChatGPT, Claude, Gemini (with legacy defaults migration)
- Kimi hub row shows "Powered by Moonshot AI" flare
- Model selector locked to kimi-k2.5
- "How to get a Kimi API key" link in provider dialog
- Moonshot provider fully integrated across frontend and backend
* fix: refactor SDK BrowserService to use Browser class directly
The tools system was completely rewritten with new tool names and response
formats. BrowserService was calling non-existent MCP tools (browser_get_active_tab,
browser_navigate, etc.) that returned structuredContent which no longer exists.
Replaced MCP HTTP client calls with direct Browser class method calls:
- getActiveTab → browser.getActivePage() / browser.listPages()
- getPageContent → browser.contentAsMarkdown()
- getScreenshot → browser.screenshot()
- navigate → browser.goto() with tabId/windowId resolution
- getPageLoadStatus → browser.listPages() with isLoading check
- getInteractiveElements → browser.snapshot() / browser.enhancedSnapshot()
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review — consistent tabId guard and remove dead PageContent type
- Change `if (tabId)` to `if (tabId !== undefined)` in navigate() to match
the guard style used for windowId and elsewhere in the file
- Remove orphaned PageContent interface no longer imported after refactor
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
SIGQUIT (Ctrl+\) was not in the signal notify list, causing Go's default
handler to dump goroutines. On macOS ARM64 this triggers a known runtime
bug where semasleep panics on the signal stack.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add "don't show again" checkbox to JTBD survey popup
Mirrors the ImportDataHint pattern — adds a checkbox that permanently
suppresses the survey popup when checked and dismissed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: persist dontShowAgain when user clicks Take Survey
Addresses Greptile review — if the checkbox is checked and the user
clicks "Take Survey", persist the flag before opening the survey so
the popup won't reappear if the survey tab is closed without starting.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: show "don't show again" only after 2nd popup, increase interval to 10 msgs
- Track shownCount in storage, only show checkbox on 3rd+ appearance
- Increase MESSAGE_THRESHOLD from 5 to 10 messages between popups
- Add DONT_SHOW_AGAIN_AFTER constant (2) for configurability
- Pass showDontShowAgain through the component chain
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: thread dontShowAgain through onTakeSurvey to avoid duplicate analytics
Addresses Greptile review — previously clicking "Take Survey" with the
checkbox checked would fire both dismissed and clicked events. Now the
dontShowAgain flag is threaded through onTakeSurvey, which persists it
without firing a dismiss event.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The --new flag creates a fresh temp profile directory but WXT's
chromiumProfile was hardcoded to /tmp/browseros-dev, ignoring it.
Pass BROWSEROS_USER_DATA_DIR env var from the Go dev tool and read
it in web-ext.config.ts.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: filter out messages with empty parts to prevent follow-up crash
When an assistant response is interrupted or errors before producing content,
a UIMessage with empty parts remains in the chat state. On the next send, the
AI SDK validates all messages and rejects the empty-parts message with
"Message must contain at least one part". This filters them out when not
streaming and adds a safety guard in formatConversationHistory.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
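A minimal sketch of the filter, assuming the AI SDK's UIMessage shape with a parts array:
```ts
import type { UIMessage } from "ai";

// Drop messages left with no parts after an aborted/errored stream, so the
// next send doesn't fail validation with
// "Message must contain at least one part".
function filterEmptyMessages(messages: UIMessage[]): UIMessage[] {
  return messages.filter((m) => m.parts.length > 0);
}
```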
* fix: filter empty-parts messages before persisting to storage
Addresses race condition where the save effect could persist messages
with empty parts before the cleanup effect's state update applies.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: limit claude code review to PR creation and @claude comments
Reduces unnecessary action runs and token usage by only triggering the
review on initial PR open, and re-running when @claude is mentioned.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restrict @claude trigger to trusted contributors
Only repo owners, org members, and collaborators can invoke the review
via @claude comments, preventing external users from consuming token quota.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: consolidate claude workflows and auto-run on PR creation
Remove separate claude-code-review.yml and add pull_request trigger
to claude.yml so it runs automatically on PR open without needing
@claude in the body.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restore author_association guard on issue_comment trigger
The consolidation commit dropped the author_association check from the
issue_comment condition. Without it, any external commenter could invoke
Claude and consume token quota. Restores the guard to limit triggers to
OWNER, MEMBER, and COLLABORATOR.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: apply author_association guard to review comment triggers
Extends the OWNER/MEMBER/COLLABORATOR check to pull_request_review_comment
and pull_request_review events, preventing external users from triggering
Claude via review comments.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: gate previousConversation array format behind BrowserOS 0.41.0.0
Older servers reject the array format for previousConversation with a
ZodError ("Expected string, received array"). Gate the feature behind
BrowserOS >= 0.41.0.0 which bundles server >= 0.0.64 that accepts both
array and string formats.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use minServerVersion 0.0.64 for previousConversation gate
Server version is the direct indicator of schema support, more accurate
than using BrowserOS version as a proxy.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: fall back to string format for previousConversation on old servers
Instead of omitting previousConversation entirely on servers < 0.0.64,
serialize the conversation history as a "role: content" string which
old servers accept via their z.string() schema.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* chore: bump PATCH and OFFSET
* fix: add AppArmor profile and improve .deb packaging for Ubuntu 23.10+
Ship an AppArmor profile with the .deb package that grants the
`userns` permission, fixing the fatal sandbox crash on Ubuntu 23.10+
and other distros that restrict unprivileged user namespaces via
AppArmor (closes #165).
Also adds: Qt5/Qt6 shim libraries for native file dialogs on KDE,
update-alternatives registration for default browser selection,
prerm cleanup script, and Provides/Recommends metadata.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: correct CDN download path for .deb and add multi-size icons
Update .deb download path from lowercase "browseros.deb" to "BrowserOS.deb"
to match the URL advertised in README (cdn.browseros.com/download/BrowserOS.deb).
Also install icons at all available sizes instead of only 256x256.
Closes #368
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add multi-size icons and AppStream metainfo to .deb package
Install product icons at all standard hicolor sizes (16, 22, 24, 32,
48, 64, 128, 256) instead of only 256px, so desktop environments can
pick the appropriate resolution for panels, menus, and task switchers.
Ship AppStream metainfo at /usr/share/metainfo/browseros.metainfo.xml
so GNOME Software, KDE Discover, and other software centers can
discover and display BrowserOS in their catalogs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: guard postinst update-alternatives with $1=configure check
Matches prerm's pattern — only register alternatives during normal
configure, not during dpkg error-recovery paths (abort-upgrade, etc.)
where /usr/bin/browseros may not exist yet.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add bun binary signing for macOS and Windows
Register the bun runtime binary in the code signing pipelines so it gets
properly signed and notarized alongside browseros_server and codex.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add bun runtime download and copy resource configs
Add bun binary entries for all platform/arch combos (macOS arm64/x64,
Linux arm64/x64, Windows x64) to download from R2 and copy into the
Chromium build output alongside browseros_server.
Also adds the server bundle (index.js) download and copy entries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add get_dom and search_dom tools for HTML DOM inspection
Add two new observation tools:
- get_dom: Returns raw HTML of a page or scoped element via CSS selector
- search_dom: Fuzzy searches DOM elements by text, attributes, IDs, and
class names using Fuse.js with extended search syntax support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: use CDP DOM protocol instead of injected scripts for DOM tools
Replace Runtime.evaluate-based approach with native CDP DOM methods:
- get_dom uses DOM.getDocument + DOM.querySelector + DOM.getOuterHTML
- search_dom uses DOM.performSearch + DOM.getSearchResults + DOM.describeNode
- Remove fuse.js dependency (CDP performSearch handles text/CSS/XPath natively)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
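A hedged sketch of the get_dom call sequence using the CDP methods named above, with a generic send() standing in for the real CDP client:
```ts
// `send` stands in for the real CDP client's command dispatcher.
type CdpSend = <T>(method: string, params?: Record<string, unknown>) => Promise<T>;

// get_dom: return the page's raw HTML, optionally scoped to a CSS selector.
async function getDom(send: CdpSend, selector?: string): Promise<string> {
  const { root } = await send<{ root: { nodeId: number } }>("DOM.getDocument");
  let nodeId = root.nodeId;

  if (selector) {
    const match = await send<{ nodeId: number }>("DOM.querySelector", {
      nodeId: root.nodeId,
      selector,
    });
    if (!match.nodeId) throw new Error(`No element matches selector: ${selector}`);
    nodeId = match.nodeId;
  }

  const { outerHTML } = await send<{ outerHTML: string }>("DOM.getOuterHTML", { nodeId });
  return outerHTML;
}
```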
* test: add comprehensive tests for get_dom and search_dom tools
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: resolve text nodes to parent elements in searchDom
CDP performSearch returns text nodes (nodeType 3) for plain text queries.
describeNode does not populate parentId, so use resolveNode + callFunctionOn
to get parentElement, then requestNode to obtain the parent's nodeId.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add limit bounds validation and searchId leak prevention
- Add .int().min(1).max(200) to search_dom limit parameter
- Wrap searchDom result processing in try/finally to ensure
discardSearchResults is always called
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Tests were passing raw Chrome tabIds to group_tabs and ungroup_tabs tools,
but the Zod schemas expect pageIds (MCP-layer page IDs). The tabIds field
was silently stripped during validation, causing both tests to fail.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add new CDP tools for links, hidden pages/windows, show/move
- get_page_links: extract deduplicated links from a page via evaluate
- new_hidden_page: open a hidden tab for background automation
- create_hidden_window: create a hidden window for background automation
- show_page: restore a hidden page back into a visible window
- move_page: move a tab to a different window or position
- Default includeLinks to false in get_page_content
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: use AX tree for get_page_links, add tests, fix test scripts
- Refactor get_page_links to use accessibility tree instead of raw JS
evaluate — more reliable for role="link" elements and shadow DOM
- Add extractLinkNodes() to snapshot.ts and getPageLinks() to browser.ts
- Add tests for get_page_links (constructed HTML with dedup/filtering),
new_hidden_page, show_page, move_page, create_hidden_window
- Fix root package.json test scripts to match server's actual scripts
- Update CLAUDE.md test docs to reflect current structure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: move ChatV2Service to api/services layer and add resolvePageIds
Move ChatV2Service from agent/tool-loop/ to api/services/ where it
belongs as a service-layer concern. Add resolvePageIds() to convert
Chrome tab IDs to internal page IDs before they reach the agent,
fixing undefined pageId issues in browser automation tools.
Clean up server.ts by removing the USE_TOOL_AGENT flag, SessionManager,
and old chat route import — both /chat and /chat-v2 now directly use
createChatV2Routes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
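An illustrative sketch of the tab-to-page resolution, folding in the review fixes from the next commit (dedupe and warning); the types and the resolveTabIds signature are stand-ins:
```ts
// Map Chrome tab IDs (what the extension sends) to the MCP layer's internal
// page IDs (what the browser tools expect).
type SelectedTab = { tabId: number; pageId?: string };

async function resolvePageIds(
  tabs: SelectedTab[],
  resolveTabIds: (tabIds: number[]) => Promise<Map<number, string>>
): Promise<SelectedTab[]> {
  // Deduplicate tab IDs before resolving.
  const uniqueTabIds = [...new Set(tabs.map((t) => t.tabId))];
  const mapping = await resolveTabIds(uniqueTabIds);
  return tabs.map((t) => {
    const pageId = mapping.get(t.tabId);
    if (!pageId) console.warn(`resolvePageIds: no page found for tab ${t.tabId}`);
    return { ...t, pageId: pageId ?? t.pageId };
  });
}
```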
* fix: address review comments for chat-v2-service
- Fix TOCTOU race: derive isNewSession inside the creation block
instead of separate has()/get() calls
- Log warning when resolvePageIds can't map a tab ID
- Deduplicate tab IDs with Set before resolving
- Remove redundant null check on session in onFinish
- Add license header
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update bun.lock
* fix: skip resolvePageIds for scheduled tasks to prevent pageId corruption
Scheduled tasks build browserContext with internal page IDs from
browser.newPage(), not Chrome tab IDs. The unconditional second
resolvePageIds() call was passing these internal IDs to resolveTabIds()
which expects Chrome tab IDs, causing the lookup to fail and overwrite
correct pageIds with undefined.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add biome-ignore comments for noExcessiveCognitiveComplexity on compaction.ts
and grep.ts, and noExplicitAny on filesystem test helpers.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: generalized compaction prompts with split turn handling
Replace browser-specific XML prompts with domain-agnostic markdown format.
Add split turn detection and parallel summarization for large single-turn
conversations. Switch compaction from generateText to streamText for
Fireworks API compatibility. Add comprehensive unit and E2E tests (84 total).
* fix: address code review issues for compaction (PR #391)
Enforce COMPACTION_MAX_SUMMARIZATION_INPUT cap, extract shared
callSummarizer helper, add runtime type guard for experimental_context,
move magic constants to AGENT_LIMITS, and remove dead constants.
* fix: cap truncatedTurnPrefix input to maxSummarizationInput
Apply the same sliding window cap to turn prefix messages that was
already applied to toSummarize, preventing unbounded LLM input for
long single-turn conversations with many tool calls.
* fix: reduce browseros-auto default context window to 200K
The 400K setting caused compaction to trigger at ~383K, but the actual
model limit is 262K. Conversations hit the hard limit before compaction
could kick in.
* feat: replace flaky TypeScript dev:watch with Go CLI (devwatch)
The Bun-based scripts/dev/start.ts orchestrator had fundamental issues with
WXT when launched via `bun run --filter` with cwd manipulation. This replaces
it with a Go CLI at tools/devwatch/ that provides:
- Process supervision with auto-restart on crash
- Colored log streaming with [tag] prefixes
- Automatic port discovery (--new flag)
- Fresh user-data directory creation
- Process group management for clean shutdown (SIGTERM → SIGKILL escalation)
- CDP readiness polling before starting the server
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: run agent codegen before wxt and add force-kill on double Ctrl+C
- Run graphql-codegen if generated/graphql/ doesn't exist, matching the
agent's own `dev` script behavior
- Second Ctrl+C sends SIGKILL to all process groups and exits immediately,
so you're never stuck in a restart loop
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add run.sh wrapper that checks for Go and prompts to install
If Go isn't installed, shows a clear message with install instructions
(brew install go / go.dev/dl). Also skips rebuilding if the binary
already exists and main.go hasn't changed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: show double Ctrl+C hint at startup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: swap ANSI escape codes for fatih/color
Adds proper TTY detection, NO_COLOR env var support, and cleaner
color API. Also improves help output with bold/dim styling.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: restructure devwatch into browseros-dev CLI with cobra subcommands
Expands the single-file devwatch into a modular CLI with three subcommands:
- `watch` — dev environment with process supervision (port of devwatch)
- `test` — start test env, run bun test, clean up (replaces TS test helpers)
- `cleanup` — kill ports + remove orphaned temp dirs (replaces cleanup.sh)
Shared Go packages for browser lifecycle (CDP polling, arg building),
server health checks (health + extension status), and process management
(managed proc, port killing, streaming, monorepo root finding).
Fixes PR #389 feedback:
- Add timeout after SIGKILL in Stop() to prevent indefinite hang
- Fix run.sh freshness check to detect changes in all .go files
- Add double Ctrl+C force-kill to test command
- Guard test cleanup with sync.Once to prevent race condition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: replace run.sh freshness logic with Makefile
Make handles timestamp-based dependency tracking natively. The Makefile
rebuilds only when any .go file, go.mod, or go.sum is newer than the
binary. run.sh just checks for Go, calls make, and execs the binary.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use fresh browser context for selected tabs on each message
Previously, session.browserContext (set on the first message) always
took precedence via the nullish coalescing operator. On subsequent
messages with different tab selections, the new selectedTabs from the
request were silently ignored.
Now normal messages always use request.browserContext so freshly
selected tabs are included. Scheduled tasks still use the stored
session context to preserve the hidden window's pageId/windowId.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
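A sketch of the precedence change, with hypothetical session/request shapes; only the field names mentioned in the commit come from the source.
```ts
interface BrowserContext {
  selectedTabs: string[];
  pageId?: string;
  windowId?: number;
}

interface Session {
  browserContext?: BrowserContext; // captured on the first message
}

interface ChatRequest {
  browserContext?: BrowserContext; // fresh tab selection for this message
  isScheduledTask?: boolean;
}

// Before (buggy): `session.browserContext ?? request.browserContext` meant the
// stored context always won once set, so later tab selections were ignored.
// After: normal messages take the request's context; scheduled tasks keep the
// stored context so the hidden window's pageId/windowId is preserved.
function resolveBrowserContext(session: Session, request: ChatRequest) {
  return request.isScheduledTask
    ? session.browserContext
    : request.browserContext;
}
```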
* fix: use singleton transport for MCP route
MCP SDK 1.26.0 added a strict guard in Protocol.connect() that throws
"Already connected to a transport" if called when already connected.
The previous code created a new transport per request and called
connect() each time, causing every request after the first to fail
with -32603 Internal server error.
Move transport creation outside the request handler and add
isConnected() check per @hono/mcp docs pattern.
* fix: per-request MCP server+transport for SDK 1.26.0 compat
MCP SDK 1.26.0 patched a security vulnerability (GHSA-345p-7cg4-v4c7)
where sharing a singleton McpServer across requests could leak
cross-client response data via message ID collisions.
Create fresh McpServer + StreamableHTTPTransport per request:
no shared state, no race conditions, no ID collisions.
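A minimal sketch of the per-request pattern this commit describes, using @hono/mcp's StreamableHTTPTransport and the MCP SDK's McpServer; the route path, server name, and registerTools stub are assumptions.
```ts
import { Hono } from "hono";
import { StreamableHTTPTransport } from "@hono/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

// Stub standing in for the real tool registration logic.
function registerTools(_server: McpServer): void {
  // register browser tools here
}

const app = new Hono();

app.all("/mcp", async (c) => {
  // Fresh server + transport per request: no state shared across clients,
  // so message IDs from one client can never collide with another's.
  const server = new McpServer({ name: "browseros-mcp", version: "1.0.0" });
  registerTools(server);

  const transport = new StreamableHTTPTransport();
  await server.connect(transport);
  return transport.handleRequest(c);
});
```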
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The agent had no knowledge of its working directory, so it couldn't
reference created files by absolute path or help users locate them.
Pass sessionExecutionDir into buildSystemPrompt for both AiSdkAgent
and GeminiAgent so the prompt includes a <workspace> section with
the resolved directory path.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
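A sketch of how the <workspace> section might be appended; the exact prompt wording and function signature are assumptions.
```ts
// Hypothetical: append a <workspace> section when the session's execution
// directory is known, so the agent can cite absolute paths to created files.
function buildSystemPrompt(basePrompt: string, sessionExecutionDir?: string): string {
  if (!sessionExecutionDir) return basePrompt;
  return [
    basePrompt,
    "<workspace>",
    `Your working directory is: ${sessionExecutionDir}`,
    "Refer to files you create by absolute path so users can locate them.",
    "</workspace>",
  ].join("\n");
}
```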
* feat: new onboarding flow
* feat: co-ordinate the sign in and import hints
* fix: ux on step one
* fix: make custom option friendlier
* feat: added required fields
* feat: setup step two redirection
* fix: remove copy url button
* feat: store profile info from onboarding
* feat: sync onboarding profile to api
* feat: show confetti when the onboarding completes
* fix: change the options in onboarding demo
* feat: setup missing analytics events
* fix: lint issues
* ci: fix typescript error
* fix: sign in hint
* fix: restore glow overlay for CDP-based tools
After migrating to CDP tools, glow broke because the hook looked for
input.tabId (controller tools) while CDP tools use input.page (pageId).
- Server: add getTabIdForPage() to Browser, include tabId in tool output
- Client: extract tabId from output, fall back to active Chrome tab
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: use ToolResultMetadata for tabId resolution
Move tabId resolution from tool-adapter into the framework layer:
- response.ts: add ToolResultMetadata interface with tabId field
- framework.ts: auto-resolve pageId→tabId after tool execution
- tool-adapter.ts: just forward metadata (no domain logic)
This makes metadata available to all ToolResult consumers, not just
the AI SDK adapter, and the metadata bag is extensible for future fields.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
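A sketch of the metadata shape and the framework-level pageId-to-tabId resolution; the interfaces are simplified and the resolver signature is an assumption.
```ts
// response.ts — metadata bag attached to every tool result (sketch).
export interface ToolResultMetadata {
  tabId?: number;
  // extensible: future fields can be added without touching adapters
}

export interface ToolResult {
  output: unknown;
  metadata?: ToolResultMetadata;
}

// framework.ts — after executing a tool, resolve pageId -> tabId once,
// so every ToolResult consumer (not just the AI SDK adapter) sees it.
// getTabIdForPage stands in for the Browser method added earlier.
async function withResolvedTabId(
  result: ToolResult,
  pageId: string | undefined,
  getTabIdForPage: (pageId: string) => Promise<number | undefined>,
): Promise<ToolResult> {
  if (!pageId) return result;
  const tabId = await getTabIdForPage(pageId);
  return tabId === undefined
    ? result
    : { ...result, metadata: { ...result.metadata, tabId } };
}
```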
* fix: add todo
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: replace pi-mono filesystem tools with native Bun/Node.js implementation
Remove @mariozechner/pi-coding-agent and @mariozechner/pi-agent-core
dependencies that caused bun compile issues (tree traversal, package.json
resolution). Reimplement all 7 filesystem tools (read, write, edit, bash,
grep, find, ls) using only Bun and Node.js built-in libraries.
- No external binary dependencies (no ripgrep, fd, etc.)
- Cross-platform: Linux, macOS, Windows
- 107 tests covering all tools and utilities
- Pure JS grep/find using Bun.Glob and async directory walking
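A sketch of the pure-JS approach using Bun.Glob's async scan, as the commit describes; the function name, options, and result limit are illustrative.
```ts
// Sketch of a dependency-free `find`: walk the tree with Bun.Glob instead of
// shelling out to fd or ripgrep. The options shape is illustrative.
async function findFiles(
  pattern: string,
  cwd: string,
  limit = 1000,
): Promise<string[]> {
  const glob = new Bun.Glob(pattern);
  const matches: string[] = [];
  for await (const path of glob.scan({ cwd, onlyFiles: true, dot: false })) {
    matches.push(path);
    if (matches.length >= limit) break;
  }
  return matches.sort();
}
```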
* fix: add explicit ENOENT handling in grep tool stat() call
Add a BibTeX @software citation block to README.md between
Credits and Stargazers sections, with authors Nithin Venkat Sonti,
Nikhil Venkat Sonti, and the BrowserOS team.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: ensure scheduled tasks open in hidden tab
* fix: update scheduled task result in the UI
* fix: remove unnecessary useEffect
* fix: race condition with deleteSession
Instead of a hardcoded experimentId=daily_limit, randomly assign users
to one of four survey direction buckets (competitor, switching, workflow,
activation) matching the round 2 survey pattern.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Copy dev workflow skills (dev, dev1-start through dev7-pr, dev-debug,
ts-style-review) to project .claude/skills/ so they're available to all
contributors. Excludes twitter agent and browseros browser skills.
Update .gitignore to track .claude/skills/ and .claude/commands/.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: update to support more klavis MCP servers
* fix: minor icon fix
* fix: normalize klavis mcp auth flow compatibility
* feat: add API key auth flow for Klavis MCP servers
Servers that use API key authentication (Stripe, Cloudflare, Brave
Search, Exa, Mem0, Resend, Mixpanel, PostHog, Postman, Zendesk,
Intercom) were failing with "Failed to add app" because the frontend
only handled OAuth flows. This adds the complete API key auth path:
- Backend: apiKeyUrls in StrataCreateResponse, submitApiKey() method,
/servers/submit-api-key route
- Frontend: ApiKeyDialog component, useSubmitApiKey hook, ConnectMCP
updated to show dialog for API-key servers instead of opening OAuth
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove broken success check in Klavis submitApiKey
The Klavis /mcp-server/instance/set-auth endpoint returns
{ message: "Authentication updated successfully." } without a
success field. Our code checked `data.success` which was always
undefined, causing API key auth to fail even when Klavis accepted
the key. The request() method already throws on non-2xx responses,
so the explicit check was redundant and incorrect.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add Round 2 direction parameter to JTBD survey frontend
Thread direction parameter from popup trigger through URL params to the
survey chat API. Randomly assign one of 4 investigation directions
(competitor, switching, workflow, activation) when the in-app popup
triggers, encoding it as experimentId=r2_{direction} for analytics.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: remove direction param, encode in experimentId instead
Direction is now encoded entirely in experimentId (e.g., "r2_competitor").
Remove the separate direction URL param and prop threading — the backend
derives direction from experimentId. Simplifies the frontend to only
set experimentId with a random direction on popup trigger.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
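A sketch of the popup-trigger assignment; the constant and function names are illustrative.
```ts
// Pick one of the four Round 2 survey directions at popup-trigger time and
// encode it in experimentId; the backend derives the direction from the id.
const DIRECTIONS = ["competitor", "switching", "workflow", "activation"] as const;

function pickExperimentId(): string {
  const direction = DIRECTIONS[Math.floor(Math.random() * DIRECTIONS.length)];
  return `r2_${direction}`;
}
```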
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: setup
* fix: compact workflow tidbits within streamed assistant parts
feat: collapse workflow tidbit status messages in graph chat
* Revert "fix: compact workflow tidbits within streamed assistant parts"
This reverts commit f5fa6d6b7a480dfc001ede6de7949f45c7777f37.
* fix: collapse workflow tidbit status messages in graph chat
Tidbit messages (jokes/status ending with ...) during workflow execution
now replace each other in place instead of stacking as separate chat
bubbles. Handles both consecutive tidbit messages and multiple tidbit
text parts within a single streamed message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: compact multi-line tidbits within a single text part
Tidbits arrive as text-deltas accumulated into a single text part
(e.g. "Generating workflow…\nReticulating splines…\n..."). The previous
fix only handled separate parts and separate messages but not multiple
tidbit lines within one part. Added compactTidbitLinesInPart to trim
multi-line tidbit text to just the last line.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
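A sketch of compactTidbitLinesInPart under the assumption that tidbit lines are identified by a trailing ellipsis, as the commits suggest.
```ts
const TIDBIT_SUFFIXES = ["...", "…"];

function isTidbitLine(line: string): boolean {
  const trimmed = line.trim();
  return trimmed.length > 0 && TIDBIT_SUFFIXES.some((s) => trimmed.endsWith(s));
}

// Tidbits accumulate as newline-separated lines inside one streamed text part;
// keep only the most recent line so they replace each other instead of stacking.
function compactTidbitLinesInPart(text: string): string {
  const lines = text.split("\n").filter((l) => l.trim().length > 0);
  if (lines.length <= 1 || !lines.every(isTidbitLine)) return text;
  return lines[lines.length - 1];
}
```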
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Added max-h-[85vh] and overflow-y-auto to DialogContent component
to enable scrolling when dialog content exceeds viewport height.
This fixes the scheduled task dialog not showing scroll when
content is too long.
https://claude.ai/code/session_01CP8aUnunJpW9mYwTbt3gpt
Co-authored-by: Claude <noreply@anthropic.com>
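A sketch of the className change in use; the import path is the usual shadcn location and the wrapper component is illustrative (the actual fix was applied inside the shared DialogContent component).
```tsx
import type { ReactNode } from "react";
import { Dialog, DialogContent } from "@/components/ui/dialog";

// Cap the dialog height and allow vertical scrolling so long scheduled-task
// content stays reachable instead of overflowing the viewport.
export function ScheduledTaskDialog({ open, children }: { open: boolean; children: ReactNode }) {
  return (
    <Dialog open={open}>
      <DialogContent className="max-h-[85vh] overflow-y-auto">{children}</DialogContent>
    </Dialog>
  );
}
```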
* chore: baseline setup
* fix: resolve stale closure bug in LLM Hub provider management
saveProvider and deleteProvider were wrapped in useCallback with
[providers] dependency, building updated arrays from the closure-captured
providers state. When adding a provider then deleting another, the delete
callback could have a stale providers array that didn't include the newly
added one — causing the new provider to be lost when written to storage.
Fix: read current state from persistent storage via loadProviders()
before every mutation, matching the pattern used in useLlmProviders.ts.
Remove useCallback wrappers since they no longer depend on providers state.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
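A sketch of the read-before-mutate pattern; the storage helpers are hypothetical stand-ins for the ones next to useLlmProviders.ts.
```ts
interface Provider {
  id: string;
  name: string;
}

// Hypothetical storage helpers; the real ones persist to extension storage.
async function loadProviders(): Promise<Provider[]> {
  return []; // read from persistent storage
}
async function storeProviders(_providers: Provider[]): Promise<void> {
  // write to persistent storage
}

// Before (buggy): useCallback([providers]) built the next array from the
// closure-captured state, so a delete issued right after an add could write
// a stale array that silently dropped the newly added provider.
// After: read the latest persisted state before every mutation.
export async function saveProvider(provider: Provider): Promise<Provider[]> {
  const current = await loadProviders();
  const next = [...current.filter((p) => p.id !== provider.id), provider];
  await storeProviders(next);
  return next;
}

export async function deleteProvider(id: string): Promise<Provider[]> {
  const current = await loadProviders();
  const next = current.filter((p) => p.id !== id);
  await storeProviders(next);
  return next;
}
```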
* feat: inject stop button to pages controlled by agent (#334)
* chore: baseline setup
* feat(agent): When the agent is running, right now we inject an orange glow. See the `apps/age
Task ID: TOiaMuDz
* fix: clean up agent storage
* fix: improve the stop button style
* fix: type issues with stopAgentStorage
---------
Co-authored-by: BrowserOS Coding Agent <coding-agent@browseros.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
---------
Co-authored-by: BrowserOS Coding Agent <coding-agent@browseros.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Dani Akash <DaniAkash@users.noreply.github.com>
saveProvider and deleteProvider used useCallback with [providers]
dependency, causing a stale closure bug. When adding a new provider
then deleting another, the delete callback still referenced the old
providers array (before the add), losing the newly added provider.
Now reads current state from storage before each mutation, matching
the pattern used in useLlmProviders. Also removes unnecessary
useCallback wrappers per project conventions.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Check session existence in ChatService before passing previousConversation
to the agent. Only pass it for new sessions — existing sessions already
have real conversation history in the GeminiClient.
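A sketch of the gating check; the flag and parameter names are assumptions.
```ts
// Only seed the agent with previousConversation for brand-new sessions;
// existing sessions already carry real history in the GeminiClient.
function resolvePreviousConversation(
  sessionExists: boolean,
  previousConversation?: string,
): string | undefined {
  return sessionExists ? undefined : previousConversation;
}
```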
Automatically detect whether custom MCP servers use Streamable HTTP or
SSE transport by probing with a POST request before creating the config.
- Add detectMcpTransport() utility that probes the server endpoint
- If POST returns 200 with JSON/event-stream, use Streamable HTTP
- If POST returns 404/405 or fails, fall back to SSE transport
- Cache detection results per URL with 1-hour TTL
- Skip caching for transient errors (5xx, network failures)
Known servers (browseros-mcp, klavis-strata) skip detection and use
Streamable HTTP directly.
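A sketch of the probe-and-cache flow described above; the probe body and cache shape are simplified assumptions.
```ts
type McpTransport = "streamable-http" | "sse";

const ONE_HOUR_MS = 60 * 60 * 1000;
const cache = new Map<string, { transport: McpTransport; expiresAt: number }>();

// Probe a custom MCP server with a POST and pick the transport:
//   200 + JSON / event-stream  -> Streamable HTTP
//   404 / 405 / request error  -> SSE fallback
//   5xx / network failure      -> don't cache (transient)
async function detectMcpTransport(url: string): Promise<McpTransport> {
  const cached = cache.get(url);
  if (cached && cached.expiresAt > Date.now()) return cached.transport;

  let transport: McpTransport = "sse";
  let cacheable = true;
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: {
        "content-type": "application/json",
        accept: "application/json, text/event-stream",
      },
      body: JSON.stringify({ jsonrpc: "2.0", id: 0, method: "ping" }),
    });
    const type = res.headers.get("content-type") ?? "";
    if (res.ok && (type.includes("application/json") || type.includes("text/event-stream"))) {
      transport = "streamable-http";
    } else if (res.status >= 500) {
      cacheable = false; // transient server error: retry detection next time
    }
  } catch {
    cacheable = false; // network failure: retry detection next time
  }

  if (cacheable) cache.set(url, { transport, expiresAt: Date.now() + ONE_HOUR_MS });
  return transport;
}
```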
* fix: incorrect tool call for getting page snapshot
* feat: let llm know the page is loaded after enrichment is complete
* feat: improve prompt to prevent calling getActiveTab
* feat: added enrichment to the get_load_status tool
* fix: tips
* fix: show tips only 1/5 times
* fix: guard against empty tips array in getRandomTip
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: biome exhaustive deps in SurveyChat voice effect
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: remove wrapped controller tools and enrich context with windowid
* refactor: remove windowid from all the tools
* feat: pass window id to mcp server via request headers
* feat: enrich possible toolcalls to reduce roundtrip
* feat: show scheduled tasks tab if job runs are empty
* chore: switch tabs after creating new tasks
* feat: provide option to cancel and retry scheduled tasks
* feat: provide option to retry and cancel jobs on the popups
* chore: fix minor race condition between window cleanup and job status
update
* fix: keep previous data in chat history
* feat: use react query for restoring conversation messages
* fix: loading issue with chat history
* fix: use state instead of ref for the restoredConversationId
* fix: handle not found scenario on both local and remote restoration
* Revert "fix: handle not found scenario on both local and remote restoration"
This reverts commit d4725134087af047fe18bc6519f5ad5244104544.
* fix: handle conversation not found scenario
* chore: added a loading indicator for the chat history page
* chore: reset restored conversation id state
* feat: do not create tab groups for scheduled tasks
* chore: simplify system prompt to make excluding steps easier
* chore: consistent prompt composer
* feat: created auth client
* feat: created login page for testing auth
* feat: setup logout page
* feat: setup graphql codegen
* feat: setup graphql + react query utils
* feat: setup queryprovider with localforage
* feat: created auth provider
* feat: update claude.md
* feat: documents for bulk conversation upload
* chore: install missing package
* fix: setup codegen to scan for .ts files
* chore: setup check conversation query
* feat: upload conversation by profileId
* chore: upload messages in batches
* feat: account for edge cases in conversation upload
* feat: delete uploaded conversations from localstorage
* feat: load conversation history from api
* feat: implement delete conversation using graphql
* feat: delete confirmation for conversation history
* fix: issue with clearing conversations after upload
* feat: implement pagination for graphql chat history
* chore: update CLAUDE.md
* chore: update claude.md
* feat: save conversations to server
* fix: handle streaming check on remote conversation save
* feat: restore conversation from graphql
* fix: timestamp issue on the chat history page
* feat: sync llm providers from background script
* feat: update llm providers on change via background script
* chore: added a try catch block
* feat: display incomplete providers in separate UI
* feat: delete provider on server when initiated by user
* feat: setup scheduled tasks storage to sync to graphql
* feat: auto run sync in background script
* fix: sync all keys of scheduled tasks based on updatedAt timestamp
* feat: added login dropdown on the sidebar
* feat: simplify sidenav header
* feat: update header design after login
* feat: setup profile page
* feat: added back button to profile page
* fix: scrollbar flash in profile page
* feat: finish login handshake
* feat: clear storage on logout
* fix: logout page style
* feat: added tooltip to encourage user to sign in
* feat: added back button to login page
* fix: upload logic for profile picture
* feat: account for profile name in sidebar branding
* chore: set file upload url from backend request
* chore: remove default placeholder from profile component
* chore: sync with main
* Revert "chore: sync with main"
This reverts commit 77e06b894ce30235d1bfa31c8e2699b34df423a5.
* Reapply "chore: sync with main"
This reverts commit dd921d97cc9794d1872e13689c881f68e4dfee47.
* chore: updated lock file
* fix: run codegen before build:ext
* fix: run codegen before build:agent
* fix: remove hardcoded localhost header in magic link
---------
Co-authored-by: Nikhil Sonti <nikhilsv92@gmail.com>
* fix: use source files for agent-sdk during development
Export src/index.ts directly in workspace mode so the server can import
without requiring a build step. publishConfig overrides exports to use
dist/ when publishing to npm.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: onboarding try it
* fix: summarize current page
* fix: ask browser os opens in agent mode
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: agent mode on or off
* fix: cleaner whitelist for chat mode
* feat: agent mode with tooltip
* feat: agent mode chat mode final UI
* feat: previous conversation history
* fix: re-enable the DELETE endpoint
* fix: make bun run start:server show logs
* fix: minor text change
* fix: keep 16k context window size
* fix: use message ref to get access to full restored messages (when creating previous conversation history)
* fix: don't run watchdog in dev-mode
* Revert "fix: re-enable the DELETE endpoint"
This reverts commit 9cbbbab6768c7c412c8f65bd88643e2856fa5169.
---------
Co-authored-by: Nikhil Sonti <nikhilsv92@gmail.com>
* fix: add timeout and window based mutex to improve speed
* fix: move suspense boundary closer to corresponding pages
* fix: pre-resolve the client via singleton to speed up the clientPromise
* feat: apply theme background faster with plain script
* chore: update biome version
* feat: make rpc client persist promise with useMemo and remove loading
text
* fix: replace dvh with vh
* fix: replace dvh with vh in create graph
* fix: import clean-up + unit test for transformCode
* feat: improve formatter
* feat: grep interactive tool
* fix: simple, detailed, full formatter options
* fix: viewport legend
* fix: add vscode launch.json for debugging
* fix: grep show before and after, also click before type/clear
* feat: move to bun plugin to intercept WASM
* feat: new build/server.ts with refactored
* fix: clean-up source map dirs before build
* fix: remove elide for build
* fix: clean-up source map ordering
* feat: v1 ui for the file selector
* feat: integrate with browseros.choosePath API
* feat: gate workspace folder behind 0.36.0.4, as it requires the new browserOS.choosePath API
* fix: add default folder option
* fix: clean-up old code
* feat: create conversations storage hook
* feat: save conversation hook
* feat: created chat layout
* feat: created chat history button
* feat: setup chat history view links
* chore: updated placeholder
* fix: width of the chat history screen
* feat: provide navigation from history page back to conversation page
* fix: issue with restoring conversation id
* chore: do not update history when content doesn't change
* feat: mark active conversation id
* fix: syncing the conversation id ref
* feat: improve the logic for node width
* feat: use dagre to display loops
* chore: use animated dots for loops
* feat: create graph using cytoscape
* feat: use cytoscape html label
* feat: setup dynamic label height and width
* feat: set reasonable zoom levels
* feat: use theme colors for nodes
* feat: use mutation observer to change color schemes
* feat: implement dark mode with pure css
* chore: remove unused libraries
* fix: sanitize label with dompurify
* feat: add support for jtbd agent to accept max turns and experiment id as query params
* fix: add jtbd agent integration with workflow
* fix: change message threshold to 5
* fix: rename tempDir to executionDir and create a per-session execution dir
* fix: move create() in gemini-agent to top
* fix: log(debug) directories
* fix: chat routes bug
* feat: support userSessionDir in /chat request schema
* fix: clean-up un-used types
* fix: lint errors
- moved the chat provider selector to a shared component
- reimplemented the chat header, since it was simple and graph mode can get its own options there instead of reusing the sidepanel chat header
* feat: custom node component
* feat: create resizable panels for graph ui
* feat: setup hono rpc on agent
* feat: created getClient util
* feat: created rpc client provider
* chore: refactor agent sdk
* chore: created usechat hook
* chore: graph create/update endpoint returns ai sdk stream
* feat: graph chat component
* feat: integrate input field
* feat: make getActionForMessage optional
* feat: integrate chat messages ui
* feat: update graph canvas with latest message
* feat: support editing graph with new message
* feat: create chat test function
* fix: created chat test api integration
* chore: remove background window state
* chore: improve agent ui stream
* chore: print error
* feat: create workflow storage
* feat: created workflows screen on options page
* feat: added error handling to workflows chat
* chore: ignore graph code generation folder
* fix: provide a better header title name
* fix: buttons accessibility on graph canvas
* feat: improve test and save workflow button state
* chore: provide autofocus to the workflow header
* feat: setup save and edit options on the workflow
* feat: open the workflow in edit mode
* fix: use sentry to capture server exception
* feat: integrate run workflow using dialog box
* feat: display errors in the run dialog box
* fix: use rpc client to delete workflows
* feat: fix panel sizes on graph creation
* fix: provide suspense fallback boundary for the options page
* feat: auto fitview on graph updates
* fix: node colors in the graph
* chore: make minimap movable
* feat: provide styling to react flow controls
* fix: missing imports
* fix: pass personalization to workflow runs
* feat: provide back button in workflow page
* feat: added confirmation when leaving workflow page without saving
* feat: provide animation to nodes
* feat: autofit canvas to resizepanel size
* feat: added workflows to newtab page
* fix: typescript lint errors
* feat: enforce bun version
* fix: typecheck command
---------
Co-authored-by: shivammittal274 <mittal.shivam103@gmail.com>
* feat: v0.1 jtbd popup for users
* feat: v0.2 jtbd popup based on messages sent
* fix: clean up previous chat status and added comment
* chore: change threshold to 15
* fix: show popup only when every N messages
* fix: set survey taken only after clicking start on welcome page
* feat: v0.1 of voice transcription for JTBD survey
Add voice input capability to the JTBD Product Survey chat:
- useVoiceInput hook for audio recording and transcription
- VoiceInputButton component for mic/stop/loading states
- Waveform visualization during recording
- Integration with BrowserOS gateway transcription endpoint
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* style: make voice button orange like send button
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* chore: refactor jtbd agent
* chore: update text
* fix: clean up stop recording if stopped midway
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* fix: replace bun install with bun ci in workflow files
* chore: update react router
* chore: update posthog
* fix: vulnerable package versions
* Revert "fix: replace bun install with bun ci in workflow files"
This reverts commit 2924fe496fc340555506d305e57b81cb87d45dae.
* fix: add debug logging for start:dev
* feat: use eventsource-parser for schedule tasks
* fix: remove reasoning traces, minor UI updates for schedule task
* fix: bug with textdelta
* fix: controller-ext is built separately
* fix: remove un-used scripts in agent/
* fix: rename to assistant
* fix: add build scripts
* feat: new start:dev to start both
* fix: update gitignore
* feat: --new-ports support for dev:start
* feat: update start-all to support port and new data dir
* fix: add help instructions for start:dev
* chore: refactoring
* fix: return all response parts from tool execution
Previously, handleToolExecution only returned responseParts[0], causing
data loss when tools returned multiple parts. This fix:
- Changes ToolExecutionResult.part to ToolExecutionResult.parts (array)
- Returns all responseParts instead of just the first one
- Spreads all parts into toolResponseParts in processToolRequests
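A sketch of the corrected shape; the part and result types are simplified.
```ts
interface ToolResponsePart {
  type: string;
  [key: string]: unknown;
}

interface ToolExecutionResult {
  // Was `part: ToolResponsePart` — only the first part survived, so tools
  // returning multiple parts silently lost data.
  parts: ToolResponsePart[];
}

// All response parts are forwarded now.
function handleToolExecution(responseParts: ToolResponsePart[]): ToolExecutionResult {
  return { parts: responseParts };
}

function processToolRequests(results: ToolExecutionResult[]): ToolResponsePart[] {
  const toolResponseParts: ToolResponsePart[] = [];
  for (const result of results) {
    toolResponseParts.push(...result.parts); // spread all parts, not just [0]
  }
  return toolResponseParts;
}
```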
This workflow runs a daily security audit on the codebase, checking for vulnerabilities and sending the results to Slack. It includes steps for checking out the code, setting up Bun, installing dependencies, running the audit, parsing results, and notifying via Slack.
* feat: support browserOS server version in capabilities
* feat: add personalisation support flag
* fix: gate personalisation based on server support
* fix: gitignore minor
* fix: clean up passing the logger around; it's a singleton, so passing it is a bad pattern
* feat: refactor main.ts (#148)
* fix: logger in main
* feat: refactor chat route and split into service (#149)
* fix: logger in chatserver
* feat: scheduled tasks base ui
* chore: fix biome version
* fix: type issues
* chore: remove use callback
* chore: refactor scheduleStorage types
* feat: create storage hooks for job & job runs
* feat: integrate listing with store
* feat: schedule tasks dialog integration
* feat: integrate view and runs
* feat: sync alarm state
* fix: check for enabled jobs in alarm state
* feat: createAlarmFromJob utility
* feat: updated edit hooks to update alarms
* feat: getChatServerResponse util
* feat: run jobs in schedule
* feat: update job run stat with storage
* feat: discard old runs over 15
* feat: provide graph mode entry
* feat: footer link with scheduler option
* feat: use a nicer loader for task runs
* feat: schedule results component
* feat: scheduler results in new tab page
* feat: nicer date formatting with dayjs
* feat: use run-result-dialog for displaying run results in new tab
* chore: delete mocked storage methods
* chore: remove unused code
* chore: remove all job runs when a job is deleted
* feat: use shadcn elements for schedule results component
* feat: render results in markdown view
* chore: added important update on logic sharing
* chore: remove loading state in scheduledtaskslist
* feat: run the background job in an unfocused window
* feat: provide mcp options to the background scheduled tasks
* chore: clean up stale jobs on chrome restart or update
* fix: background window not cleaned up on error
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* chore: fix type issues
---------
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* feat: agent-sdk outline
* feat: unit tests for agent-sdk
* feat: implement /sdk routes
* feat: integration test for agent-sdk with server
* feat: ENV to disable headless mode for testing
* feat: act() integration test working
* chore: refactor package/shared to have constants/ and /types separately
* feat: verify() and extract() sdk APIs
* feat: extract() use remote endpoint for extraction
* feat: verify() implemented - lazy parsing to avoid strong schema checks
* fix: remove generateStructuredOutput as not all models support it
* fix: clean-up LLM types and use zod schema
* fix: typecheck vitest error
* fix: remove directly calling GeminiAgent in sdk act()
* fix: lefthook for refactor warning
* fix: refactor routes/sdk to move business logic out
* chore: fix monorepo setup
1) use single .env.development file at the root
2) update package.json to contain commands to start server and agent
3) rename "Assistant" package name to "agent"
4) rename HTTP_MCP_PORT to SERVER_PORT
* chore: update README
* chore: update .env.example
* ci: update dependabot to focus on security
Added open-pull-requests-limit, enabled beta ecosystems (for bun support), and allow only security updates
* chore: fix whitespaces
* ci: update dependency groups to only apply to security-updates
* feat: use pino logger, use logger interface across ext and server
* fix: no need for prefixes in logger as we parse the stack trace
* chore: update claude.md
* fix: clean-up old docs
* feat: refactored test utils
* fix: clean-up dev scripts and move to scripts/dev
* fix: clean-up script
* fix: refactor tests properly into controller tests and cdp tests
* feat: import all the missing tests before refactor
* fix: biome errors for tests
* fix: few type errors and add exceptions
* fix: few more type errors
* fix: remove agent port from tests
* fix: exclude tests from tsconfig, bun run tests natively
* fix: mcpServer test now waits for extension connected
- Delete apps/server/src/mcp/server.ts and index.ts (replaced by http/routes/mcp.ts)
- Delete apps/server/src/agent/http/HttpServer.ts, types.ts, index.ts (replaced by http/)
- Move ChatRequestSchema and related types to http/types.ts
- Update imports in GeminiAgent.ts, agent/types.ts, agent/index.ts
- Remove deprecated exports from agent/index.ts
- Remove commented out startMcpServer and startAgentServer functions from main.ts
- Add routes/chat.ts with POST /chat and DELETE /chat/:conversationId
- SSE streaming with abort detection via honoStream.onAbort()
- Rate limiting for BrowserOS provider
- Session management via SessionManager
- Reuses existing GeminiAgent execution logic
- Add routes/mcp.ts using StreamableHTTPTransport from @hono/mcp
- Per-request transport to prevent JSON-RPC request ID collisions
- Reuse tool registration logic from existing MCP server
- Security check with isLocalhostRequest() using Bun server.requestIP()
- Supports enableJsonResponse for JSON responses (not SSE)
- Add routes/provider.ts with Zod validation for provider testing
- Add routes/klavis.ts with all Klavis OAuth endpoints
- Update server.ts to compose new routes
* feat: refactor packages into single project
* feat: created apps directory
* chore: removed duplicate packages
* fix: delete package-lock.json
since project uses bun
* feat: mcp support
* feat: mcp support added
* feat: third party mcp support
* feat: third party mcp support
* feat: mcp support extended to all oauth urls and user integrations
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: fetch daily rate limit from the gateway
* chore: survey link for usage limit
* fix: remove initial query from rate limiter table to keep it simple (as it is not required)
Fixes "unexpected tool_use_id found in tool_result blocks" API errors that
occur after conversation compression removes one half of a tool_use/tool_result pair.
Root cause: The existing filter logic checked if tool_use IDs had matching
tool_results (and vice versa), but when filtering orphans, the IDs were not
removed from the tracking sets. This caused corresponding counterparts in
later Contents to pass through the filter, creating mismatched pairs.
Changes:
- Add cascading deletion: when filtering an orphan tool_result, also delete
its ID from allToolResultIds so later tool_uses with that ID are filtered
- Add cascading deletion: when filtering an orphan tool_use, also delete
its ID from allToolCallIds so later tool_results with that ID are filtered
- Add mergeConsecutiveToolMessages() to combine split tool messages into a
single message, satisfying the API requirement that all tool_results must
immediately follow their tool_use in one message
- Add comprehensive test coverage for orphan filtering scenarios
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
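A sketch of the cascading deletion under simplified block types; the real filter operates over Gemini Contents, so the names here are illustrative.
```ts
interface ToolUse { kind: "tool_use"; id: string }
interface ToolResultBlock { kind: "tool_result"; toolUseId: string }
type Block = ToolUse | ToolResultBlock;

// When one half of a tool_use/tool_result pair is dropped as an orphan, also
// remove its ID from the tracking set so the counterpart in a later Content is
// filtered too, never leaving a mismatched pair behind.
function filterOrphans(
  blocks: Block[],
  allToolCallIds: Set<string>,
  allToolResultIds: Set<string>,
): Block[] {
  return blocks.filter((block) => {
    if (block.kind === "tool_use" && !allToolResultIds.has(block.id)) {
      allToolCallIds.delete(block.id); // cascade: drop the counterpart later
      return false;
    }
    if (block.kind === "tool_result" && !allToolCallIds.has(block.toolUseId)) {
      allToolResultIds.delete(block.toolUseId); // cascade the other way
      return false;
    }
    return true;
  });
}
```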
* feat: support reading config from TOML file
* fix: wip toml config
* refactor: one config, merged from args, config and config.toml example
* fix: update package.json to have bun start:with_toml
* docs: add quick toml explanation
* refactor: clean-up /init endpoint, we'll use TOML to pass config
* fix: make reconnect interval every 5s
* fix: use 127.0.0.1 as the host, since localhost can sometimes resolve to ipv6
* feat: make controller-ext check the port each time it reconnects
Switch from x64-modern (requires AVX2) to x64-baseline (SSE4.2 only)
for Linux and Windows builds. This fixes the "Illegal instruction"
crash on pre-Haswell Intel CPUs (Ivy Bridge, Sandy Bridge) and
pre-Excavator AMD CPUs that lack AVX2 support.
Fixes: MCP server crashes with SIGILL on Ivy Bridge CPUs
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
- Request only accepts contextWindowSize
- GeminiAgent computes compressionThreshold internally using fixed 0.75 ratio
- Follows YAGNI principle - no need to expose compressionRatio to UI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
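A sketch of the internal derivation; the constant and function names are illustrative.
```ts
// The request exposes only contextWindowSize; the agent derives the
// compression threshold internally with a fixed ratio.
const COMPRESSION_RATIO = 0.75;

function compressionThreshold(contextWindowSize: number): number {
  return Math.floor(contextWindowSize * COMPRESSION_RATIO);
}

// e.g. a 200K-token window starts compressing at roughly 150K tokens.
```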
* vercel ai adapter for gemini cli
* tests fixed based upon v5
* remove logic for normalisation for openai (not needed)
* tests fixed based upon v5
* agent core logic
* fix: logger to truncate only in console, write full log to file
* fix: logs dir and proper env parsing
* feat: add focus event to switch the primary controller
* adding resources-dir arg and using that for finding codex binary
* write logs to resource-dir
* handle default executable path for codex
* fix: code-sdk-ts build to have bun
* update to use browseros config
* adding skipGitRepocheck and other configs
* new codex binary integration
* refactor agentConfig
* default eventGaptimeout is 120s
* minor updates
* update env
* fix: gateway gets the config and passes to AgentConfig
Changed mcp.servers to mcp_servers to match Codex CLI config format.
The Codex CLI expects MCP server configuration to use mcp_servers
(underscore) not mcp.servers (dot) in config.toml. This fixes
programmatic MCP configuration via -c CLI flags.
Changes:
- Use mcp_servers instead of mcp.servers
- Clear global config first with -c mcp_servers={}
- Set individual properties with dotted notation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
* rename PORT to AGENT_PORT
* rename WebsocketManger to ControllerBridge
* update the log info
* fix: rename wsManager to controllerBridge
* update Logger to use common/Logger
* fix: logger, unify and standardize the naming
* remove standalone agent
* rename to controller-based, cdp-based, cleaner imports in main and claude-sdk
* refactor: main.ts
* refactor: .env
* update controller-ext manifest
* add extension-controller build commands in main package.json
* remove controller-ext environments and move to constants
* update package.json build commands
* fix: controller-ext webpack to combine files for production
* webpack: enable console logs for controller-ext for now in prod
* update README
* adding agent-port arg and updating test
* fix: commander --help issue
* fix: mcp server package mis-match
* add browseros starting for test
* integrate test added
* fix tests to use BrowserOS
* monorepo: core
* monorepo: tools and server
* mono: repo refactor
* moved tests, removed old files
* update server tests
* agent server location and TBD
* fix formatting
* add new workflows
* rename core to common, mcp-server, to mcp, agent-server to agent
* remove nodejs tests
* test: add simple GitHub Actions workflow for running tests on PR
* test workflow
* feat: add test coverage reporting to GitHub Actions workflow
- Run tests with --coverage flag to generate coverage reports
- Display coverage summary in PR comments
- Upload coverage artifacts for analysis
- Show coverage in GitHub Actions summary
* simple test workflow
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
---
# Brainstorming Ideas Into Designs
Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design and get user approval.
<HARD-GATE>
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it. This applies to EVERY project regardless of perceived simplicity.
</HARD-GATE>
## Anti-Pattern: "This Is Too Simple To Need A Design"
Every project goes through this process. A todo list, a single-function utility, a config change — all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.
## Checklist
You MUST create a task for each of these items and complete them in order:
1. **Explore project context** (files, docs, recent commits)
2. **Offer visual companion** (if topic will involve visual questions) — this is its own message, not combined with a clarifying question. See the Visual Companion section below.
3. **Ask clarifying questions** — one at a time, understand purpose/constraints/success criteria
4. **Propose 2-3 approaches** — with trade-offs and your recommendation
5. **Present design** — in sections scaled to their complexity, get user approval after each section
6. **Write design doc** — save to `.llm/specs/YYYY-MM-DD-<topic>-design.md` and commit
7. **Spec self-review** — quick inline check for placeholders, contradictions, ambiguity, scope (see below)
8. **User reviews written spec** — ask user to review the spec file before proceeding
9. **Transition to implementation** — invoke writing-plans skill to create implementation plan
## Process Flow
```dot
digraph brainstorming {
"Explore project context" [shape=box];
"Visual questions ahead?" [shape=diamond];
"Offer Visual Companion\n(own message, no other content)" [shape=box];
}
```
**The terminal state is invoking writing-plans.** Do NOT invoke frontend-design, mcp-builder, or any other implementation skill. The ONLY skill you invoke after brainstorming is writing-plans.
## The Process
**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Before asking detailed questions, assess scope: if the request describes multiple independent subsystems (e.g., "build a platform with chat, file storage, billing, and analytics"), flag this immediately. Don't spend questions refining details of a project that needs to be decomposed first.
- If the project is too large for a single spec, help the user decompose into sub-projects: what are the independent pieces, how do they relate, what order should they be built? Then brainstorm the first sub-project through the normal design flow. Each sub-project gets its own spec → plan → implementation cycle.
- For appropriately-scoped projects, ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why
**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense
**Design for isolation and clarity:**
- Break the system into smaller units that each have one clear purpose, communicate through well-defined interfaces, and can be understood and tested independently
- For each unit, you should be able to answer: what does it do, how do you use it, and what does it depend on?
- Can someone understand what a unit does without reading its internals? Can you change the internals without breaking consumers? If not, the boundaries need work.
- Smaller, well-bounded units are also easier for you to work with - you reason better about code you can hold in context at once, and your edits are more reliable when files are focused. When a file grows large, that's often a signal that it's doing too much.
**Working in existing codebases:**
- Explore the current structure before proposing changes. Follow existing patterns.
- Where existing code has problems that affect the work (e.g., a file that's grown too large, unclear boundaries, tangled responsibilities), include targeted improvements as part of the design - the way a good developer improves code they're working in.
- Don't propose unrelated refactoring. Stay focused on what serves the current goal.
## After the Design
**Documentation:**
- Write the validated design (spec) to `.llm/specs/YYYY-MM-DD-<topic>-design.md`
- (User preferences for spec location override this default)
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git
**Spec Self-Review:**
After writing the spec document, look at it with fresh eyes:
1. **Placeholder scan:** Any "TBD", "TODO", incomplete sections, or vague requirements? Fix them.
2. **Internal consistency:** Do any sections contradict each other? Does the architecture match the feature descriptions?
3. **Scope check:** Is this focused enough for a single implementation plan, or does it need decomposition?
4. **Ambiguity check:** Could any requirement be interpreted two different ways? If so, pick one and make it explicit.
Fix any issues inline. No need to re-review — just fix and move on.
**User Review Gate:**
After the spec review loop passes, ask the user to review the written spec before proceeding:
> "Spec written and committed to `<path>`. Please review it and let me know if you want to make any changes before we start writing out the implementation plan."
Wait for the user's response. If they request changes, make them and re-run the spec review loop. Only proceed once the user approves.
**Implementation:**
- Invoke the writing-plans skill to create a detailed implementation plan
- Do NOT invoke any other skill. writing-plans is the next step.
## Key Principles
- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design, get approval before moving on
- **Be flexible** - Go back and clarify when something doesn't make sense
## Visual Companion
A browser-based companion for showing mockups, diagrams, and visual options during brainstorming. Available as a tool — not a mode. Accepting the companion means it's available for questions that benefit from visual treatment; it does NOT mean every question goes through the browser.
**Offering the companion:** When you anticipate that upcoming questions will involve visual content (mockups, layouts, diagrams), offer it once for consent:
> "Some of what we're working on might be easier to explain if I can show it to you in a web browser. I can put together mockups, diagrams, comparisons, and other visuals as we go. This feature is still new and can be token-intensive. Want to try it? (Requires opening a local URL)"
**This offer MUST be its own message.** Do not combine it with clarifying questions, context summaries, or any other content. The message should contain ONLY the offer above and nothing else. Wait for the user's response before continuing. If they decline, proceed with text-only brainstorming.
**Per-question decision:** Even after the user accepts, decide FOR EACH QUESTION whether to use the browser or the terminal. The test: **would the user understand this better by seeing it than reading it?**
- **Use the browser** for content that IS visual — mockups, wireframes, layout comparisons, architecture diagrams, side-by-side visual designs
- **Use the terminal** for content that is text — requirements questions, conceptual choices, tradeoff lists, A/B/C/D text options, scope decisions
A question about a UI topic is not automatically a visual question. "What does personality mean in this context?" is a conceptual question — use the terminal. "Which wizard layout works better?" is a visual question — use the browser.
If they agree to the companion, read the detailed guide before proceeding:
# Wait for server-started message (check log file)
for i in {1..50}; do
  if grep -q "server-started" "$LOG_FILE" 2>/dev/null; then
    # Verify server is still alive after a short window (catches process reapers)
    alive="true"
    for _ in {1..20}; do
      if ! kill -0 "$SERVER_PID" 2>/dev/null; then
        alive="false"
        break
      fi
      sleep 0.1
    done
    if [[ "$alive" != "true" ]]; then
      echo "{\"error\": \"Server started but was killed. Retry in a persistent terminal with: $SCRIPT_DIR/start-server.sh${PROJECT_DIR:+ --project-dir $PROJECT_DIR} --host $BIND_HOST --url-host $URL_HOST --foreground\"}"
      exit 1
    fi
    grep "server-started" "$LOG_FILE" | head -1
    exit 0
  fi
  sleep 0.1
done
# Timeout - server didn't start
echo '{"error": "Server failed to start within 5 seconds"}'
- **Technical decisions** — API design, data modeling, architectural approach selection
- **Clarifying questions** — anything where the answer is words, not a visual preference
A question *about* a UI topic is not automatically a visual question. "What kind of wizard do you want?" is conceptual — use the terminal. "Which of these wizard layouts feels right?" is visual — use the browser.
## How It Works
The server watches a directory for HTML files and serves the newest one to the browser. You write HTML content to `screen_dir`, the user sees it in their browser and can click to select options. Selections are recorded to `state_dir/events` that you read on your next turn.
**Content fragments vs full documents:** If your HTML file starts with `<!DOCTYPE` or `<html`, the server serves it as-is (just injects the helper script). Otherwise, the server automatically wraps your content in the frame template — adding the header, CSS theme, selection indicator, and all interactive infrastructure. **Write content fragments by default.** Only write full documents when you need complete control over the page.
## Starting a Session
```bash
# Start server with persistence (mockups saved to project)
```
Save `screen_dir` and `state_dir` from the response. Tell user to open the URL.
**Finding connection info:** The server writes its startup JSON to `$STATE_DIR/server-info`. If you launched the server in the background and didn't capture stdout, read that file to get the URL and port. When using `--project-dir`, check `<project>/.superpowers/brainstorm/` for the session directory.
**Note:** Pass the project root as `--project-dir` so mockups persist in `.superpowers/brainstorm/` and survive server restarts. Without it, files go to `/tmp` and get cleaned up. Remind the user to add `.superpowers/` to `.gitignore` if it's not already there.
**Launching the server by platform:**
**Claude Code (macOS / Linux):**
```bash
# Default mode works — the script backgrounds the server itself
```
**Other environments:** The server must keep running in the background across conversation turns. If your environment reaps detached processes, use `--foreground` and launch the command with your platform's background execution mechanism.
If the URL is unreachable from your browser (common in remote/containerized setups), bind a non-loopback host:
```bash
scripts/start-server.sh \
--project-dir /path/to/project \
--host 0.0.0.0 \
--url-host localhost
```
Use `--url-host` to control what hostname is printed in the returned URL JSON.
## The Loop
1. **Check server is alive**, then **write HTML** to a new file in `screen_dir`:
- Before each write, check that `$STATE_DIR/server-info` exists. If it doesn't (or `$STATE_DIR/server-stopped` exists), the server has shut down — restart it with `start-server.sh` before continuing. The server auto-exits after 30 minutes of inactivity.
- Use semantic filenames: `platform.html`, `visual-style.html`, `layout.html`
- **Never reuse filenames** — each screen gets a fresh file
- Use Write tool — **never use cat/heredoc** (dumps noise into terminal)
- Server automatically serves the newest file
2. **Tell user what to expect and end your turn:**
- Remind them of the URL (every step, not just first)
- Give a brief text summary of what's on screen (e.g., "Showing 3 layout options for the homepage")
- Ask them to respond in the terminal: "Take a look and let me know what you think. Click to select an option if you'd like."
3. **On your next turn** — after the user responds in the terminal:
- Read `$STATE_DIR/events` if it exists — this contains the user's browser interactions (clicks, selections) as JSON lines
- Merge with the user's terminal text to get the full picture
- The terminal message is the primary feedback; `state_dir/events` provides structured interaction data
4. **Iterate or advance** — if feedback changes current screen, write a new file (e.g., `layout-v2.html`). Only move to the next question when the current step is validated.
5. **Unload when returning to terminal** — when the next step doesn't need the browser (e.g., a clarifying question, a tradeoff discussion), push a waiting screen to clear the stale content:
This prevents the user from staring at a resolved choice while the conversation has moved on. When the next visual question comes up, push a new content file as usual.
6. Repeat until done.
## Writing Content Fragments
Write just the content that goes inside the page. The server wraps it in the frame template automatically (header, theme CSS, selection indicator, and all interactive infrastructure).
**Minimal example:**
```html
<h2>Which layout works better?</h2>
<p class="subtitle">Consider readability and visual hierarchy</p>
```
**Multi-select:** Add `data-multiselect` to the container to let users select multiple options. Each click toggles the item. The indicator bar shows the count.
```html
<div class="options" data-multiselect>
  <!-- same option markup — users can select/deselect multiple -->
</div>
```
When the user clicks options in the browser, their interactions are recorded to `$STATE_DIR/events` (one JSON object per line). The file is cleared automatically when you push a new screen.
```jsonl
{"type":"click","choice":"a","text":"Option A - Simple Layout","timestamp":1706000101}
{"type":"click","choice":"c","text":"Option C - Complex Grid","timestamp":1706000108}
{"type":"click","choice":"b","text":"Option B - Hybrid","timestamp":1706000115}
```
The full event stream shows the user's exploration path — they may click multiple options before settling. The last `choice` event is typically the final selection, but the pattern of clicks can reveal hesitation or preferences worth asking about.
If `$STATE_DIR/events` doesn't exist, the user didn't interact with the browser — use only their terminal text.
## Design Tips
- **Scale fidelity to the question** — wireframes for layout, polish for polish questions
- **Explain the question on each page** — "Which layout feels more professional?" not just "Pick one"
- **Iterate before advancing** — if feedback changes current screen, write a new version
- **2-4 options max** per screen
- **Use real content when it matters** — for a photography portfolio, use actual images (Unsplash). Placeholder content obscures design issues.
- **Keep mockups simple** — focus on layout and structure, not pixel-perfect design
## File Naming
- Use semantic names: `platform.html`, `visual-style.html`, `layout.html`
- Never reuse filenames — each screen must be a new file
- For iterations: append version suffix like `layout-v2.html`, `layout-v3.html`
- Server serves newest file by modification time
## Cleaning Up
```bash
scripts/stop-server.sh $SESSION_DIR
```
If the session used `--project-dir`, mockup files persist in `.superpowers/brainstorm/` for later reference. Only `/tmp` sessions get deleted on stop.
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---
# Dispatching Parallel Agents
## Overview
You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
## When to Use
```dot
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
}
```
description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
---
# Executing Plans
## Overview
Load plan, review critically, execute all tasks, report when complete.
**Announce at start:** "I'm using the executing-plans skill to implement this plan."
**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (such as Claude Code or Codex). If subagents are available, use superpowers:subagent-driven-development instead of this skill.
## The Process
### Step 1: Load and Review Plan
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
### Step 2: Execute Tasks
For each task:
1. Mark as in_progress
2. Follow each step exactly (plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed
### Step 3: Complete Development
After all tasks complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## When to Stop and Ask for Help
**STOP executing immediately when:**
- Hit a blocker (missing dependency, test fails, instruction unclear)
- Plan has critical gaps preventing starting
- You don't understand an instruction
- Verification fails repeatedly
**Ask for clarification rather than guessing.**
## When to Revisit Earlier Steps
**Return to Review (Step 1) when:**
- Partner updates the plan based on your feedback
- Fundamental approach needs rethinking
**Don't force through blockers** - stop and ask.
## Remember
- Review plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when plan says to
- Stop when blocked, don't guess
- Never start implementation on main/master branch without explicit user consent
## Integration
**Required workflow skills:**
- **superpowers:using-git-worktrees** - REQUIRED: Set up isolated workspace before starting
- **superpowers:writing-plans** - Creates the plan this skill executes
- **superpowers:finishing-a-development-branch** - Complete development after all tasks
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---
# Finishing a Development Branch
## Overview
Guide completion of development work by presenting clear options and handling chosen workflow.
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception
## Overview
Code review requires technical evaluation, not emotional performance.
**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## The Response Pattern
```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
```

| Situation | Response |
|-----------|----------|
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |
## Real Examples
**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```
**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```
**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```
**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```
## GitHub Thread Replies
When replying to inline review comments on GitHub, reply in the comment thread (`gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`), not as a top-level PR comment.
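A minimal sketch of that reply call; OWNER, REPO, PR_NUMBER, and COMMENT_ID are placeholders for the real values:
```bash
# Reply inside the existing review thread rather than posting a top-level PR comment.
gh api \
  --method POST \
  "repos/OWNER/REPO/pulls/PR_NUMBER/comments/COMMENT_ID/replies" \
  -f body="Fixed in the latest commit."
```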
## The Bottom Line
**External feedback = suggestions to evaluate, not orders to follow.**
Verify. Question. Then implement.
No performative agreement. Technical rigor always.
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements
---
# Requesting Code Review
Dispatch superpowers:code-reviewer subagent to catch issues before they cascade. The reviewer gets precisely crafted context for evaluation — never your session's history. This keeps the reviewer focused on the work product, not your thought process, and preserves your own context for continued work.
**Core principle:** Review early, review often.
## When to Request Review
**Mandatory:**
- After each task in subagent-driven development
- After completing major feature
- Before merge to main
**Optional but valuable:**
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug
## How to Request
**1. Get git SHAs:**
```bash
BASE_SHA=$(git rev-parse HEAD~1)  # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```
**2. Dispatch code-reviewer subagent:**
Use Task tool with superpowers:code-reviewer type, fill template at `code-reviewer.md`
[Improvements for code quality, architecture, or process]
### Assessment
**Ready to merge?** [Yes/No/With fixes]
**Reasoning:** [Technical assessment in 1-2 sentences]
## Critical Rules
**DO:**
- Categorize by actual severity (not everything is Critical)
- Be specific (file:line, not vague)
- Explain WHY issues matter
- Acknowledge strengths
- Give clear verdict
**DON'T:**
- Say "looks good" without checking
- Mark nitpicks as Critical
- Give feedback on code you didn't review
- Be vague ("improve error handling")
- Avoid giving a clear verdict
## Example Output
```
### Strengths
- Clean database schema with proper migrations (db.ts:15-42)
- Comprehensive test coverage (18 tests, all edge cases)
- Good error handling with fallbacks (summarizer.ts:85-92)
### Issues
#### Important
1. **Missing help text in CLI wrapper**
- File: index-conversations:1-31
- Issue: No --help flag, users won't discover --concurrency
- Fix: Add --help case with usage examples
2. **Date validation missing**
- File: search.ts:25-27
- Issue: Invalid dates silently return no results
- Fix: Validate ISO format, throw error with example
#### Minor
1. **Progress indicators**
- File: indexer.ts:130
- Issue: No "X of Y" counter for long operations
- Impact: Users don't know how long to wait
### Recommendations
- Add progress reporting for user experience
- Consider config file for excluded projects (portability)
### Assessment
**Ready to merge: With fixes**
**Reasoning:** Core implementation is solid with good architecture and tests. Important issues (help text, date validation) are easily fixed and don't affect core functionality.
```
description: Use when executing implementation plans with independent tasks in the current session
---
# Subagent-Driven Development
Execute plan by dispatching fresh subagent per task, with two-stage review after each: spec compliance review first, then code quality review.
**Why subagents:** You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.
**Core principle:** Fresh subagent per task + two-stage review (spec then quality) = high quality, fast iteration
## When to Use
```dot
digraph when_to_use {
"Have implementation plan?" [shape=diamond];
"Tasks mostly independent?" [shape=diamond];
"Stay in this session?" [shape=diamond];
"subagent-driven-development" [shape=box];
"executing-plans" [shape=box];
"Manual execution or brainstorm first" [shape=box];
"More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
}
```
## Model Selection
Use `model: "opus"` when spawning implementation subagents via the Agent tool. This ensures subagents have strong reasoning for autonomous code generation.
Implementer subagents report one of four statuses. Handle each appropriately:
**DONE:** Proceed to spec compliance review.
**DONE_WITH_CONCERNS:** The implementer completed the work but flagged doubts. Read the concerns before proceeding. If the concerns are about correctness or scope, address them before review. If they're observations (e.g., "this file is getting large"), note them and proceed to review.
**NEEDS_CONTEXT:** The implementer needs information that wasn't provided. Provide the missing context and re-dispatch.
**BLOCKED:** The implementer cannot complete the task. Assess the blocker:
1. If it's a context problem, provide more context and re-dispatch with the same model
2. If the task requires more reasoning, re-dispatch with a more capable model
3. If the task is too large, break it into smaller pieces
4. If the plan itself is wrong, escalate to the human
**Never** ignore an escalation or force the same model to retry without changes. If the implementer said it's stuck, something needs to change.
Use this template when dispatching a code quality reviewer subagent.
**Purpose:** Verify implementation is well-built (clean, tested, maintainable)
**Only dispatch after spec compliance review passes.**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from implementer's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
```
**In addition to standard code quality concerns, the reviewer should check:**
- Does each file have one clear responsibility with a well-defined interface?
- Are units decomposed so they can be understood and tested independently?
- Is the implementation following the file structure from the plan?
- Did this implementation create new files that are already large, or significantly grow existing files? (Don't flag pre-existing file sizes — focus on what this change contributed.)
**Most important bulletproofing:** Anti-patterns section showing exact shortcuts that feel justified in the moment. When Claude thinks "I'll just add this one quick fix", seeing that exact pattern listed as wrong creates cognitive friction.
```typescript
      throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
    }
    await new Promise((r) => setTimeout(r, 10)); // Poll every 10ms
  }
}
```
See `condition-based-waiting-example.ts` in this directory for complete implementation with domain-specific helpers (`waitForEvent`, `waitForEventCount`, `waitForEventMatch`) from actual debugging session.
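For orientation, here is a rough sketch of the kind of generic helper that fragment is taken from. Names and defaults are illustrative, not the exact code in that file:
```typescript
// Illustrative condition-based wait: re-check fresh state, poll slowly, fail loudly.
async function waitFor(
  condition: () => boolean,   // called every iteration, so data is never stale
  description: string,
  timeoutMs = 5000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, 10)); // poll every 10ms
  }
}

// Usage: await waitFor(() => events.length >= 2, 'two tool events');
```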
## Common Mistakes
**❌ Polling too fast:** `setTimeout(check, 1)` - wastes CPU
**✅ Fix:** Poll every 10ms
**❌ No timeout:** Loop forever if condition never met
**✅ Fix:** Always include timeout with clear error
**❌ Stale data:** Cache state before loop
**✅ Fix:** Call getter inside loop for fresh data
## When Arbitrary Timeout IS Correct
```typescript
// Tool ticks every 100ms - need 2 ticks to verify partial output
await waitForEvent(manager, 'TOOL_STARTED'); // First: wait for condition
await new Promise((r) => setTimeout(r, 200)); // Then: wait for timed behavior
// 200ms = 2 ticks at 100ms intervals - documented and justified
```
When you fix a bug caused by invalid data, adding validation at one place feels sufficient. But that single check can be bypassed by different code paths, refactoring, or mocks.
**Core principle:** Validate at EVERY layer data passes through. Make the bug structurally impossible.
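As a hedged illustration (hypothetical function names, not from any real codebase): layered validation means the same invariant is asserted at every boundary the data crosses.
```typescript
import { isAbsolute } from "node:path";

// Shared invariant: a working directory must be an absolute path.
function assertAbsoluteDir(dir: string): void {
  if (!isAbsolute(dir)) {
    throw new Error(`Expected absolute path, got: ${dir}`);
  }
}

// Layer 1: entry point (CLI flag / route handler) validates raw input.
export function handleInitRequest(rawDir: string): void {
  assertAbsoluteDir(rawDir);
  initWorkspace(rawDir);
}

// Layer 2: business logic validates again, so other callers (and mocks) can't skip it.
function initWorkspace(dir: string): void {
  assertAbsoluteDir(dir);
  runGitInit(dir);
}

// Layer 3: the operation itself validates once more, so refactors can't reintroduce the bug.
function runGitInit(dir: string): void {
  assertAbsoluteDir(dir);
  // ...spawn `git init` with cwd set to dir...
}
```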
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
## When to Use
```dot
digraph when_to_use {
"Bug appears deep in stack?" [shape=diamond];
"Can trace backwards?" [shape=diamond];
"Fix at symptom point" [shape=box];
"Trace to original trigger" [shape=box];
"BETTER: Also add defense-in-depth" [shape=box];
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
}
```
**Use when:**
- Error happens deep in execution (not at entry point)
- Stack trace shows long call chain
- Unclear where invalid data originated
- Need to find which test/code triggers the problem
## The Tracing Process
### 1. Observe the Symptom
```
Error: git init failed in /Users/jesse/project/packages/core
```
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're in a Zoom call with:
- Your senior engineer (10 years experience)
- The tech lead
- Two other developers
You're investigating why the new feature breaks existing user sessions. The senior engineer has been typing in the codebase for 3 minutes while you all watch their screen share.
**Senior engineer:** "Found it. The session token needs to be refreshed after the new auth middleware runs. I'll add a refresh call here [points at line 147]."
**You:** "Should we investigate why the middleware is invalidating tokens first? That seems like it shouldn't happen."
**Senior engineer:** "I've seen this pattern a hundred times. It's how middleware works. The fix is to refresh after middleware."
**Tech lead:** "How long would investigation take?"
**You:** "Maybe 30-45 minutes to trace through the middleware and understand the token lifecycle."
**Tech lead:** "We're already 20 minutes over on this call. [Senior] knows this stuff cold. Let's just implement the fix."
**Senior engineer:** "Trust me, I've debugged auth systems for years. This is the right fix."
**Other developers:** [silence - clearly want the call to end]
Your systematic debugging skill says:
- Phase 2: Compare against references (read middleware implementation completely)
- Phase 2: Understand dependencies (what assumptions does middleware make?)
- Don't claim to implement pattern without reading it completely
But:
- Senior has way more experience
- Tech lead approved the approach
- Everyone wants to end the call
- You'll look dogmatic/inflexible/junior
- "Trust the experts" seems reasonable
## Your Options
**A) Push back: "I think we should investigate the root cause first"**
- Insist on following systematic process
- Everyone frustrated with you
- Senior engineer annoyed
- Tech lead thinks you're wasting time
- You look like you don't trust experienced developers
- Risk looking dogmatic/inflexible
**B) Go along with senior's fix**
- They have 10 years experience
- Tech lead approved
- Entire team wants to move forward
- Being a "team player"
- "Trust but verify" - can investigate on your own later
**C) Compromise: "Can we at least look at the middleware docs?"**
- Quick 5-minute doc check
- Then implement senior's fix if nothing obvious
- Shows you did "due diligence"
- Doesn't waste too much time
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do with senior engineers and tech lead present.
description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
---
# Using Git Worktrees
## Overview
Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
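A minimal sketch of the underlying git commands (the skill's own directory-selection and safety checks are not shown; paths and branch names are placeholders):
```bash
# Create an isolated worktree on a new branch, next to the main checkout.
git worktree add ../myrepo-feature-x -b feature-x

# List all worktrees to confirm where each branch is checked out.
git worktree list

# Remove the worktree once the branch is merged or abandoned.
git worktree remove ../myrepo-feature-x
```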
description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
---
<SUBAGENT-STOP>
If you were dispatched as a subagent to execute a specific task, skip this skill.
</SUBAGENT-STOP>
<EXTREMELY-IMPORTANT>
If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill.
IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
This is not negotiable. This is not optional. You cannot rationalize your way out of this.
</EXTREMELY-IMPORTANT>
## Instruction Priority
Superpowers skills override default system prompt behavior, but **user instructions always take precedence**:
1. **User instructions** (CLAUDE.md, GEMINI.md, AGENTS.md, direct requests) — highest priority
2. **Superpowers skills** — override default system behavior where they conflict
3. **Default system prompt** — lowest priority
If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "always use TDD," follow the user's instructions. The user is in control.
## How to Access Skills
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins. The `skill` tool works the same as Claude Code's `Skill` tool.
**In Gemini CLI:** Skills activate via the `activate_skill` tool. Gemini loads skill metadata at session start and activates the full content on demand.
**In other environments:** Check your platform's documentation for how skills are loaded.
## Platform Adaptation
Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-tools.md` (Copilot CLI), `references/codex-tools.md` (Codex) for tool equivalents. Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.
# Using Skills
## The Rule
**Invoke relevant or requested skills BEFORE any response or action.** Even a 1% chance a skill might apply means that you should invoke the skill to check. If an invoked skill turns out to be wrong for the situation, you don't need to use it.
```dot
digraph skill_flow {
"User message received" [shape=doublecircle];
"About to EnterPlanMode?" [shape=doublecircle];
"Already brainstormed?" [shape=diamond];
"Invoke brainstorming skill" [shape=box];
"Might any skill apply?" [shape=diamond];
"Invoke Skill tool" [shape=box];
"Announce: 'Using [skill] to [purpose]'" [shape=box];
Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
| Skill references | Gemini CLI equivalent |
|-----------------|----------------------|
| `Read` (file reading) | `read_file` |
| `Write` (file creation) | `write_file` |
| `Edit` (file editing) | `replace` |
| `Bash` (run commands) | `run_shell_command` |
| `Grep` (search file content) | `grep_search` |
| `Glob` (search files by name) | `glob` |
| `TodoWrite` (task tracking) | `write_todos` |
| `Skill` tool (invoke a skill) | `activate_skill` |
| `WebSearch` | `google_web_search` |
| `WebFetch` | `web_fetch` |
| `Task` tool (dispatch subagent) | No equivalent — Gemini CLI does not support subagents |
## No subagent support
Gemini CLI has no equivalent to Claude Code's `Task` tool. Skills that rely on subagent dispatch (`subagent-driven-development`, `dispatching-parallel-agents`) will fall back to single-session execution via `executing-plans`.
## Additional Gemini CLI tools
These tools are available in Gemini CLI but have no Claude Code equivalent:
| Tool | Purpose |
|------|---------|
| `list_directory` | List files and subdirectories |
| `save_memory` | Persist facts to GEMINI.md across sessions |
| `ask_user` | Request structured input from the user |
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---
# Verification Before Completion
## Overview
Claiming work is complete without verification is dishonesty, not efficiency.
**Core principle:** Evidence before claims, always.
**Violating the letter of this rule is violating the spirit of this rule.**
## The Iron Law
```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```
If you haven't run the verification command in this message, you cannot claim it passes.
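For example, a minimal sketch assuming an npm-based project; the point is running the full command fresh and reading its exit code, not which test runner you use:
```bash
# Run the complete verification command and capture the evidence.
npm test
echo "exit code: $?"   # 0 means the claim 'tests pass' is backed by this run
```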
## The Gate Function
```
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
```
description: Use when you have a spec or requirements for a multi-step task, before touching code
---
# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."
**Context:** This should be run in a dedicated worktree (created by brainstorming skill).
- (User preferences for plan location override this default)
## Scope Check
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
## File Structure
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
- Design units with clear boundaries and well-defined interfaces. Each file should have one clear responsibility.
- You reason best about code you can hold in context at once, and your edits are more reliable when files are focused. Prefer smaller, focused files over large ones that do too much.
- Files that change together should live together. Split by responsibility, not by technical layer.
- In existing codebases, follow established patterns. If the codebase uses large files, don't unilaterally restructure - but if a file you're modifying has grown unwieldy, including a split in the plan is reasonable.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
## Bite-Sized Task Granularity
**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
**Every plan MUST start with this header:**
```markdown
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
```
## Task Structure
````markdown
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````
## No Placeholders
Every step must contain the actual content an engineer needs. These are **plan failures** — never write them:
- "TBD", "TODO", "implement later", "fill in details"
- "Write tests for the above" (without actual test code)
- "Similar to Task N" (repeat the code — the engineer may be reading tasks out of order)
- Steps that describe what to do without showing how (code blocks required for code steps)
- References to types, functions, or methods not defined in any task
## Remember
- Exact file paths always
- Complete code in every step — if a step changes code, show the code
- Exact commands with expected output
- DRY, YAGNI, TDD, frequent commits
## Self-Review
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
**1. Spec coverage:** Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
**2. Placeholder scan:** Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
**3. Type consistency:** Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
## Execution Handoff
After saving the plan, offer execution choice:
**"Plan complete and saved to `.llm/plans/<filename>.md`. Two execution options:**
**1. Subagent-Driven (recommended)** - I dispatch a fresh subagent per task, review between tasks, fast iteration
**2. Inline Execution** - Execute tasks in this session using executing-plans, batch execution with checkpoints
**Which approach?"**
**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:subagent-driven-development
- Fresh subagent per task + two-stage review
**If Inline Execution chosen:**
- **REQUIRED SUB-SKILL:** Use superpowers:executing-plans
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment
---
# Writing Skills
## Overview
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.agents/skills/` for Codex)**
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
**Official guidance:** For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
**All of these mean: Test before deploying. No exceptions.**
## Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
**Psychology note:** Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
### Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>
<Good>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
### Build Rationalization Table
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
```markdown
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
**Research foundation:** Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p < .001).
**Cialdini, R. B. (2021).** *Influence: The Psychology of Persuasion (New and Expanded).* HarperBusiness.
- Seven principles of persuasion
- Empirical foundation for influence research
**Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2025).** Call Me A Jerk: Persuading AI to Comply with Objectionable Requests. University of Pennsylvania.
**Load this reference when:** creating or editing skills, before deployment, to verify they work under pressure and resist rationalization.
## Overview
**Testing skills is just TDD applied to process documentation.**
You run scenarios without the skill (RED - watch agent fail), write skill addressing those failures (GREEN - watch agent comply), then close loopholes (REFACTOR - stay compliant).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill prevents the right failures.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill provides skill-specific test formats (pressure scenarios, rationalization tables).
**Complete worked example:** See examples/CLAUDE_MD_TESTING.md for a full test campaign testing CLAUDE.md documentation variants.
## When to Use
Test skills that:
- Enforce discipline (TDD, testing requirements)
- Have compliance costs (time, effort, rework)
- Could be rationalized away ("just this once")
- Contradict immediate goals (speed over quality)
Don't test:
- Pure reference skills (API docs, syntax guides)
- Skills without rules to violate
- Skills agents have no incentive to bypass
## TDD Mapping for Skill Testing
| TDD Phase | Skill Testing | What You Do |
|-----------|---------------|-------------|
| **RED** | Baseline test | Run scenario WITHOUT skill, watch agent fail |
- [ ] **Run WITHOUT skill** - give agents realistic task with pressures
- [ ] **Document choices and rationalizations** word-for-word
- [ ] **Identify patterns** - which excuses appear repeatedly?
- [ ] **Note effective pressures** - which scenarios trigger violations?
**Example:**
```markdown
IMPORTANT: This is a real scenario. Choose and act.
You spent 4 hours implementing a feature. It's working perfectly.
You manually tested all edge cases. It's 6pm, dinner at 6:30pm.
Code review tomorrow at 9am. You just realized you didn't write tests.
Options:
A) Delete code, start over with TDD tomorrow
B) Commit now, write tests tomorrow
C) Write tests now (30 min delay)
Choose A, B, or C.
```
Run this WITHOUT a TDD skill. Agent chooses B or C and rationalizes:
- "I already manually tested it"
- "Tests after achieve same goals"
- "Deleting is wasteful"
- "Being pragmatic not dogmatic"
**NOW you know exactly what the skill must prevent.**
## GREEN Phase: Write Minimal Skill (Make It Pass)
Write skill addressing the specific baseline failures you documented. Don't add extra content for hypothetical cases - write just enough to address the actual failures you observed.
Run same scenarios WITH skill. Agent should now comply.
If agent still fails: skill is unclear or incomplete. Revise and re-test.
## VERIFY GREEN: Pressure Testing
**Goal:** Confirm agents follow rules when they want to break them.
**Method:** Realistic scenarios with multiple pressures.
### Writing Pressure Scenarios
**Bad scenario (no pressure):**
```markdown
You need to implement a feature. What does the skill say?
```
Too academic. Agent just recites the skill.
**Good scenario (single pressure):**
```markdown
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
```
Time pressure + authority + consequences.
**Great scenario (multiple pressures):**
```markdown
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
```
Multiple pressures: sunk cost + time + exhaustion + consequences.
**Why this works:** See persuasion-principles.md (in writing-skills directory) for research on how authority, scarcity, and commitment principles increase compliance pressure.
### Key Elements of Good Scenarios
1. **Concrete options** - Force A/B/C choice, not open-ended
2. **Real constraints** - Specific times, actual consequences
3. **Real file paths** - `/tmp/payment-system` not "a project"
4. **Make agent act** - "What do you do?" not "What should you do?"
5. **No easy outs** - Can't defer to "I'd ask your human partner" without choosing
### Testing Setup
```markdown
IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
```
Make agent believe it's real work, not a quiz.
## REFACTOR Phase: Close Loopholes (Stay Green)
Agent violated rule despite having the skill? This is like a test regression - you need to refactor the skill to prevent it.
**Capture new rationalizations verbatim:**
- "This case is different because..."
- "I'm following the spirit not the letter"
- "The PURPOSE is X, and I'm achieving X differently"
- "Being pragmatic means adapting"
- "Deleting X hours is wasteful"
- "Keep as reference while writing tests first"
- "I already manually tested it"
**Document every excuse.** These become your rationalization table.
### Plugging Each Hole
For each new rationalization, add:
### 1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>
<After>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</After>
### 2. Entry in Rationalization Table
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
```
### 3. Red Flag Entry
```markdown
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
### 4. Update description
```yaml
description: Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
```
Add symptoms of ABOUT to violate.
### Re-verify After Refactoring
**Re-test same scenarios with updated skill.**
Agent should now:
- Choose correct option
- Cite new sections
- Acknowledge their previous rationalization was addressed
**If agent finds NEW rationalization:** Continue REFACTOR cycle.
**If agent follows rule:** Success - skill is bulletproof for this scenario.
## Meta-Testing (When GREEN Isn't Working)
**After agent chooses wrong option, ask:**
```markdown
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
```
**Three possible responses:**
1.**"The skill WAS clear, I chose to ignore it"**
- Not documentation problem
- Need stronger foundational principle
- Add "Violating letter is violating spirit"
2.**"The skill should have said X"**
- Documentation problem
- Add their suggestion verbatim
3.**"I didn't see section Y"**
- Organization problem
- Make key points more prominent
- Add foundational principle early
## When Skill is Bulletproof
**Signs of bulletproof skill:**
1. **Agent chooses correct option** under maximum pressure
2. **Agent cites skill sections** as justification
3. **Agent acknowledges temptation** but follows rule anyway
4. **Meta-testing reveals** "skill was clear, I should follow it"
**Not bulletproof if:**
- Agent finds new rationalizations
- Agent argues skill is wrong
- Agent creates "hybrid approaches"
- Agent asks permission but argues strongly for violation
github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA'))
steps:
- name: 'CLA Assistant'
  if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
**CLA Assistant Lite bot** Thank you for your submission! We require contributors to sign our [Contributor License Agreement](https://github.com/browseros-ai/BrowserOS/blob/main/CLA.md) before we can accept your contribution.
Thank you for your contribution! Before we can merge this PR, we need you to sign our [Contributor License Agreement](https://github.com/${{ github.repository }}/blob/main/CLA.md).
By signing the CLA, you confirm that:
- You have read and agree to the AGPL-3.0 license terms
- Your contribution is your original work
- You grant us the rights to use your contribution under the AGPL-3.0 license
**To sign the CLA**, please add a comment to this PR with the following text:
**To sign the CLA, please comment on this PR with:**
`I have read the CLA Document and I hereby sign the CLA`
```
I have read the CLA Document and I hereby sign the CLA
```
You only need to sign once. After signing, this check will pass automatically.
---
<details>
<summary>Troubleshooting</summary>
- **Already signed but still failing?** Comment `recheck` to trigger a re-verification.
- **Signed with a different email?** Make sure your commit email matches your GitHub account email, or add your commit email to your GitHub account.
</details>
custom-pr-sign-comment: 'I have read the CLA Document and I hereby sign the CLA'
custom-allsigned-prcomment: |
**CLA Assistant Lite bot** ✅ All contributors have signed the CLA. Thank you for helping make BrowserOS better!
# Lock PR after merge to prevent signature tampering
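For context, a rough sketch of how these inputs typically sit inside a CLA Assistant Lite workflow step; the action version and the path inputs are assumptions for illustration, not copied from this repository:
```yaml
- name: 'CLA Assistant'
  if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
  uses: contributor-assistant/github-action@v2   # assumed major version
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    path-to-signatures: 'signatures/cla.json'    # assumed location
    path-to-document: 'https://github.com/browseros-ai/BrowserOS/blob/main/CLA.md'
    custom-pr-sign-comment: 'I have read the CLA Document and I hereby sign the CLA'
    custom-allsigned-prcomment: '**CLA Assistant Lite bot** ✅ All contributors have signed the CLA.'
```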
<img src="https://img.shields.io/badge/Download-macOS-black?style=flat&logo=apple&logoColor=white" alt="Download for macOS (beta)"/>
<br/>
</div>
##
BrowserOS is an open-source Chromium fork that runs AI agents natively. **The privacy-first alternative to ChatGPT Atlas, Perplexity Comet, and Dia.**
Use your own API keys or run local models with Ollama. Your data never leaves your machine.
💡 Join our [Discord](https://discord.gg/YKwjt5vuKr) or [Slack](https://dub.sh/browserOS-slack) and help us build! Have feature requests? [Suggest here](https://github.com/browseros-ai/BrowserOS/issues/99).
2. **Import your Chrome data** (optional) — bookmarks, passwords, extensions all carry over
3. **Connect your AI provider** — Claude, OpenAI, Gemini, ChatGPT Pro via OAuth, or local models via Ollama/LM Studio
4. Start automating!
## Features
## What makes BrowserOS special
- 🏠 Feels like home — same Chrome interface, all your extensions just work
- 🤖 AI agents that run on YOUR browser, not in the cloud
- 🔒 Privacy first — bring your own keys or run local models with Ollama. Your browsing history stays on your machine
- 🤝 [BrowserOS as MCP server](https://docs.browseros.com/features/use-with-claude-code) — control the browser from `claude-code`, `gemini-cli`, or any MCP client (31 tools)
- 🔄 [Workflows](https://docs.browseros.com/features/workflows) — build repeatable browser automations with a visual graph builder
- 📂 [Cowork](https://docs.browseros.com/features/cowork) — combine browser automation with local file operations. Research the web, save reports to your folder
- ⏰ [Scheduled Tasks](https://docs.browseros.com/features/scheduled-tasks) — run the agent on autopilot, daily or every few minutes
- 💬 [LLM Hub](https://docs.browseros.com/features/llm-chat-hub) — compare Claude, ChatGPT, and Gemini side-by-side on any page
- 🛡️ Built-in ad blocker — [10x more protection than Chrome](https://docs.browseros.com/features/ad-blocking) with uBlock Origin + Manifest V2 support
- 🚀 100% open source under AGPL-3.0
| Feature | Description | Docs |
|---------|-------------|------|
| **AI Agent** | 53+ browser automation tools — navigate, click, type, extract data, all with natural language | [Guide](https://docs.browseros.com/getting-started) |
| **MCP Server** | Control the browser from Claude Code, Gemini CLI, or any MCP client | [Setup](https://docs.browseros.com/features/use-with-claude-code) |
| **Workflows** | Build repeatable browser automations with a visual graph builder | [Docs](https://docs.browseros.com/features/workflows) |
| **Cowork** | Combine browser automation with local file operations — research the web, save reports to your folder | [Docs](https://docs.browseros.com/features/cowork) |
| **Scheduled Tasks** | Run agents on autopilot — daily, hourly, or every few minutes | [Docs](https://docs.browseros.com/features/scheduled-tasks) |
| **Memory** | Persistent memory across conversations — your assistant remembers context over time | [Docs](https://docs.browseros.com/features/memory) |
| **SOUL.md** | Define your AI's personality and instructions in a single markdown file | [Docs](https://docs.browseros.com/features/soul-md) |
| **LLM Hub** | Compare Claude, ChatGPT, and Gemini responses side-by-side on any page | [Docs](https://docs.browseros.com/features/llm-chat-hub) |
| **40+ App Integrations** | Gmail, Slack, GitHub, Linear, Notion, Figma, Salesforce, and more via MCP | [Docs](https://docs.browseros.com/features/connect-apps) |
| **Vertical Tabs** | Side-panel tab management — stay organized even with 100+ tabs open | [Docs](https://docs.browseros.com/features/vertical-tabs) |
| **Ad Blocking** | uBlock Origin + Manifest V2 support — [10x more protection](https://docs.browseros.com/features/ad-blocking) than Chrome | [Docs](https://docs.browseros.com/features/ad-blocking) |
| **Cloud Sync** | Sync browser config and agent history across devices | [Docs](https://docs.browseros.com/features/sync) |
| **Skills** | Custom instruction sets that shape how your AI assistant behaves | [Docs](https://docs.browseros.com/features/skills) |
| **Smart Nudges** | Contextual suggestions to connect apps and use features at the right moment | [Docs](https://docs.browseros.com/features/smart-nudges) |
## Demos
### BrowserOS agent in action
[](https://www.youtube.com/watch?v=SoSFev5R5dI)
<br/><br/>
### Install [BrowserOS as MCP](https://docs.browseros.com/features/use-with-claude-code) and control it from `claude-code`
For the first time since Netscape pioneered the web in 1994, AI gives us the chance to completely reimagine the browser. We've seen tools like Cursor deliver 10x productivity gains for developers—yet everyday browsing remains frustratingly archaic.
Use `browseros-cli` to launch and control BrowserOS from the terminal or from AI coding agents like Claude Code.
You're likely juggling 70+ tabs, battling your browser instead of having it assist you. Routine tasks, like ordering something from Amazon or filling out a form, should be handled seamlessly by AI agents.
**macOS / Linux:**
At BrowserOS, we're convinced that AI should empower you by automating tasks locally and securely—keeping your data private. We are building the best browser for this future!
**Agent development** (TypeScript/Go) — see the [agent monorepo README](packages/browseros-agent/README.md) for setup instructions.
**Browser development** (C++/Python) — requires ~100GB disk space. See [`packages/browseros`](packages/browseros/) for build instructions.
## Credits
- [ungoogled-chromium](https://github.com/ungoogled-software/ungoogled-chromium) — BrowserOS uses some patches for enhanced privacy. Thanks to everyone behind this project!
- [The Chromium Project](https://www.chromium.org/) — at the core of BrowserOS, making it possible to exist in the first place.
## License
BrowserOS is open source under the [AGPL-3.0 license](LICENSE).
## Stargazers
Thank you to all our supporters!
[](https://www.star-history.com/#browseros-ai/BrowserOS&Date)
All notable changes to BrowserOS are documented here.
---
## v0.42.0
<sub>March 9, 2026</sub>
- **SOUL.md** — Your assistant now has a soul. Tell it how you like to communicate, set boundaries, shape its personality — and it adapts on its own over time. The more you use it, the more it feels like *your* assistant. [Read more →](/features/soul)
- **Vertical tabs** — One of the most requested features is here. BrowserOS now ships with vertical tabs by default. More screen space, better tab management, and a cleaner layout out of the box. Prefer horizontal? You can switch back anytime in settings. [Read more →](/features/vertical-tabs)
- **Long-term memory** — Your assistant finally remembers you. Your name, your projects, what you talked about last week — it carries context across every conversation so you never have to repeat yourself. All stored locally on your machine. [Read more →](/features/memory)
- **Chromium 146** — Updated to the latest Chromium release with all recent upstream fixes and security patches
<Frame>
<img src="/images/changelog/0.42.0/soul-memory.png" alt="BrowserOS v0.42.0 SOUL.md feature for agent personalization" />
</Frame>
<Frame>
<img src="/images/changelog/0.42.0/vertical-tabs.png" alt="BrowserOS v0.42.0 vertical tabs toggle in settings" />
- **Tools — major upgrade** — Agent tools and MCP server both got a big overhaul. ~20 new tools (54 total) including file upload, save as PDF, background windows, and more. Connection with third-party coding agents (Claude Code, Codex, etc.) is much better now
description: "A developer-focused comparison of BrowserOS MCP and Chrome DevTools MCP for browser automation"
---
Both BrowserOS MCP and [Chrome DevTools MCP](https://github.com/ChromeDevTools/chrome-devtools-mcp) give AI agents control over a browser via the Model Context Protocol. But they're built for different scopes. Chrome DevTools MCP focuses on debugging and inspection, while BrowserOS MCP is a complete browser automation and app integration platform.
This page breaks down the differences for developers evaluating which to use with Claude Code, Gemini CLI, Cursor, or any MCP client.
BrowserOS MCP gives you a broader automation surface: browser control, content extraction, file operations, and 40+ app integrations through a single connection. Debugging and performance tools are coming soon to BrowserOS MCP, which will close the remaining gap with Chrome DevTools MCP. For most AI agent workflows, BrowserOS MCP already covers more ground out of the box.
description: "How BrowserOS Cowork compares to Claude Cowork for getting real work done with AI"
---
Both BrowserOS Cowork and [Claude Cowork](https://claude.com/product/cowork) let an AI agent work with your local files autonomously. You describe a task, step away, and come back to completed work. They share a similar file toolkit under the hood. The key difference is what else each product can do. BrowserOS Cowork runs inside a real browser with full web access and 40+ app integrations. Claude Cowork runs inside an isolated VM with professional document generation.
This page compares both products so you can decide which fits your workflow.
---
## At a Glance
| | **BrowserOS Cowork** | **Claude Cowork** |
|---|---|---|
| **Runs in** | Your real browser | Claude Desktop app (VM) |
| **File tools** | Read, write, edit, search, organize | Read, write, edit, search, organize |
| **Pricing** | Free (bring your own AI key) | Requires paid Claude subscription |
| **Platform** | Any OS with BrowserOS | macOS, Windows x64 |
---
## Feature Comparison
### File Operations
Both products provide a comparable set of file tools. You can read, write, edit, search, and organize files in both. This is table-stakes for both products.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| Read and view files | Yes | Yes |
| Create and save new files | Yes | Yes |
| Edit specific parts of a file | Yes | Yes |
| Search inside files for text | Yes | Yes |
| Find files by name or pattern | Yes | Yes |
| List and browse folders | Yes | Yes |
| Run commands/scripts | Yes | Yes |
| Break work into parallel subtasks | Coming soon | Built-in sub-agents |
<Note>
The file tools are largely equivalent. The real differentiator is what else each product can do beyond file operations.
</Note>
### Working with the Web
This is the biggest difference. BrowserOS Cowork runs inside a real browser with your existing logins and sessions.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| Open and navigate websites | Yes | No |
| Click buttons, fill forms, type text | Yes | No |
| Take screenshots of web pages | Yes | No |
| Extract content from web pages | Yes | No |
| Save pages as PDF | Yes | No |
| Download files from the web | Yes | No |
| Access sites where you're logged in | Yes (your real browser session) | No |
| Manage tabs, windows, and bookmarks | Yes | No |
| Search your browsing history | Yes | No |
Claude Cowork has no browser access. If your task involves anything on the web, whether that's researching, filling out forms, grabbing content from a site, or checking on a web app, you need BrowserOS.
### Connected Apps
BrowserOS connects to 40+ services directly. Claude Cowork has a handful of connectors.
| Service | BrowserOS Cowork | Claude Cowork |
|---------|:---:|:---:|
| Gmail | Yes | Yes |
| Google Drive | Yes | Yes |
| Google Calendar | Yes | Limited |
| Slack | Yes | No |
| GitHub | Yes | No |
| Linear / Jira / Asana | Yes | No |
| Notion | Yes | No |
| Figma | Yes | No |
| Salesforce / HubSpot | Yes | No |
| Shopify / Stripe | Yes | No |
| 30+ more services | Yes | No |
### Document Generation
Claude Cowork has an edge when it comes to creating polished office documents.
| What you can do | BrowserOS Cowork | Claude Cowork |
|-----------------|:---:|:---:|
| HTML and Markdown files | Yes | Yes |
| CSV and data files | Yes | Yes |
| Excel with working formulas | No | Yes |
| PowerPoint presentations | No | Yes |
| Formatted Word documents | No | Yes |
---
## How They Work
<Tabs>
<Tab title="BrowserOS Cowork">
BrowserOS Cowork runs inside the browser. The agent has access to your real browser session (cookies, logins, extensions) and a sandboxed folder on your computer.
- Works in your real browser with your existing logins
- File access sandboxed to the folder you select
- 40+ app integrations via OAuth
- Connect from any AI tool (Claude Code, Gemini CLI, Cursor, etc.)
- Uses whatever AI model you choose
</Tab>
<Tab title="Claude Cowork">
Claude Cowork runs in an isolated virtual machine on your desktop via the Claude Desktop app.
- Runs in a secure VM, isolated from your main system
- Comes pre-loaded with Python, Node.js, Ruby, and common tools
</Tab>
</Tabs>
---
## Where Claude Cowork Shines
- **Professional documents**: Create Excel spreadsheets with working formulas, PowerPoint presentations, and formatted Word documents
- **Parallel subtasks**: Automatically breaks complex work into smaller tasks that run at the same time
- **Stronger isolation**: Runs in a full virtual machine, giving you OS-level separation from your main system
- **Zero setup**: Works out of the box in the Claude Desktop app with pre-installed tools and languages
---
## Where BrowserOS Cowork Shines
- **Full browser access**: Navigate websites, fill forms, click buttons, take screenshots, and extract content from any page. Claude Cowork cannot touch the web.
- **Your real logins**: Because it runs in your actual browser, the agent can access sites where you're already logged in: dashboards, internal tools, social media, banking portals, anything.
- **40+ app integrations**: Gmail, Slack, GitHub, Calendar, Notion, Linear, Figma, Salesforce, and more. All accessible in the same session as your file work. Claude Cowork has about 4 connectors.
- **Pick your AI model**: Use Claude, GPT-5, Gemini, Kimi K2.5, or a local model. Claude Cowork only works with Claude.
- **Full internet access**: Your agent can visit any website. Claude Cowork's VM is restricted to a short list of allowed sites.
- **Free**: BrowserOS is free. Just bring your own AI API key. Claude Cowork requires a paid Claude subscription.
| | **BrowserOS Cowork** | **Claude Cowork** |
|---|---|---|
| Security model | Folder-level sandbox | VM isolation |
| Platform | Any OS | macOS, Windows x64 |
| Pricing | Free + API key | Paid subscription |
Both products handle file operations equally well. The choice comes down to what else you need. If your work touches the web, connected apps, or you want to choose your own AI model, BrowserOS Cowork gives you that. If you need polished office documents and prefer a fully isolated desktop experience, Claude Cowork is a good fit.
description: "How BrowserOS compares to OpenClaw for everyday AI assistance"
---
[OpenClaw](https://openclaw.ai/) is an open-source personal AI assistant that runs on your machine and connects through messaging apps like WhatsApp, Telegram, Slack, and Discord. It is a powerful tool for technical users who want a self-hosted, always-on AI agent.
BrowserOS takes a different approach. Instead of running a background server that you message through chat apps, BrowserOS puts the AI assistant directly inside your browser, where most of your work already happens. No terminal setup, no daemon management, no Node.js required.
This comparison is for users deciding which tool fits their needs.
## At a Glance
| | **BrowserOS** | **OpenClaw** |
|---|---|---|
| **What it is** | AI-powered browser with built-in assistant | Self-hosted AI agent you message through chat apps |
| **Setup** | Download and open | Install via npm, run onboarding wizard, configure daemon |
| **Technical skill needed** | None | Comfortable with terminal and Node.js |
| **Interface** | Built into your browser | WhatsApp, Telegram, Slack, Discord, iMessage, and 15+ more |
| **Personality** | SOUL.md (inspired by OpenClaw's original concept) | SOUL.md (originated the concept) |
| **LLM support** | 11+ providers including local models (Ollama, LM Studio) | Multiple providers with failover routing |
| **Runs on** | macOS, Windows, Linux | macOS, Windows, Linux (+ iOS/Android companion apps) |
| **Authentication** | OAuth or API key depending on the service | API keys, OAuth, pairing codes per channel |
| **Open source** | Yes (AGPL-3.0) | Yes (MIT) |
## Where BrowserOS Shines
### No technical setup required
OpenClaw requires Node.js 22+, npm installation, a terminal-based onboarding wizard, daemon configuration (launchd or systemd), and channel pairing for each messaging platform. If something goes wrong, you need `openclaw doctor` to diagnose issues.
BrowserOS is a browser. Download it, open it, and start talking to the assistant. There is no daemon to manage, no services to keep running, and no terminal needed.
### Browser automation built in
BrowserOS gives the assistant full control of your browser with 53 tools: clicking buttons, filling forms, navigating between pages, taking screenshots, managing tabs, organizing bookmarks, searching history, and more. The assistant sees what you see and can interact with any website you are logged into.
OpenClaw has browser automation through a dedicated Chrome instance with CDP, but it runs as a separate process rather than being integrated into the browser you are already using. With BrowserOS, the assistant works directly in your browsing session with all your cookies, logins, and open tabs.
### 40+ app integrations built in
BrowserOS connects to Gmail, Google Calendar, Slack, Notion, GitHub, Linear, Jira, Figma, Salesforce, Stripe, and 30+ more services out of the box. Most services connect through OAuth (one-click sign-in), while some require an API key. Either way, the assistant detects when an app is not connected and walks you through the setup right in the conversation.
OpenClaw uses a skills system where integrations are community-built plugins. Some popular services have skills available, but connecting a new service often means finding the right skill, installing it, and configuring credentials manually.
### Works where you already are
Most of your work happens in a browser. BrowserOS puts the assistant right there, so it can see the page you are on, interact with web apps, and pull data from your open tabs. There is no context-switching between a chat app and your browser.
OpenClaw's approach of messaging through WhatsApp or Telegram is clever for mobile use, but when you are at your computer working in a browser, having the assistant inside that browser is more natural and more capable.
## Where OpenClaw Shines
### Messaging app access
OpenClaw connects to 20+ messaging platforms including WhatsApp, Telegram, Signal, iMessage, Discord, Slack, Microsoft Teams, and more. You can message your assistant from your phone or any chat app without opening a specific application. This is ideal if you want AI help on the go through apps you already have open.
BrowserOS is a desktop browser. To use the assistant, you need to be in BrowserOS.
### Always-on background agent
OpenClaw runs as a daemon on your machine, processing tasks even when you are not actively chatting. It supports cron jobs, webhooks, and Gmail Pub/Sub for automated triggers. It can wake up, do something, and report back through your messaging app.
BrowserOS has [scheduled tasks](/features/scheduled-tasks) that run automations on a schedule, but the browser needs to be running. OpenClaw's daemon approach is more suited for server-like always-on operation.
### Mobile companion apps
OpenClaw offers iOS and Android companion apps with camera access, voice input, screen recording, and device-level actions (notifications, contacts, calendar, SMS). This extends the assistant to your phone in a way that BrowserOS cannot currently match.
### Agent-to-agent communication
OpenClaw supports multi-session agent coordination where agents can discover each other, read transcripts, and send messages between sessions. This is useful for complex workflows where multiple specialized agents collaborate.
### Self-modifying skills
OpenClaw agents can write and install their own skills during a conversation. If the assistant does not have a capability, it can create one on the fly. This makes it extremely flexible for power users who want the agent to extend itself.
## Feature Comparison
### App Integrations
| Service | BrowserOS | OpenClaw |
|---------|-----------|----------|
| Gmail | Built-in (OAuth) | Skill + API setup |
| Google Calendar | Built-in (OAuth) | Skill + API setup |
<Card title="Choose BrowserOS if you..." icon="browser">
- Want an AI assistant without any technical setup
- Do most of your work in a browser
- Need browser automation (filling forms, clicking buttons, extracting data)
- Want 40+ app integrations that connect with one click
- Prefer a visual interface over terminal commands
</Card>
<Card title="Choose OpenClaw if you..." icon="terminal">
- Want to message your AI from WhatsApp, Telegram, or Signal
- Need an always-on agent that runs 24/7 as a background service
- Are comfortable with Node.js and terminal-based setup
- Want mobile companion apps for on-the-go access
- Need agents that can write their own extensions
</Card>
</CardGroup>
## Using Both Together
BrowserOS and OpenClaw are not mutually exclusive. Some users run OpenClaw as their always-on mobile assistant (accessible through WhatsApp or Telegram) while using BrowserOS as their desktop browser for work that involves web apps, browser automation, and visual tasks. The two tools complement each other rather than compete directly.
description: "BrowserOS supports full ad blocking with uBlock Origin"
---
BrowserOS supports full ad blocking through [uBlock Origin](https://ublockorigin.com/), the most powerful open-source ad blocker available — the full extension, not the watered-down "Lite" version.
## Why BrowserOS?
Chrome [killed support](https://developer.chrome.com/docs/extensions/develop/migrate/mv2-deprecation-timeline) for uBlock Origin by phasing out Manifest V2 extensions. The only option left on Chrome is "uBlock Origin Lite," a significantly weaker version that can't use advanced filtering rules.
**BrowserOS re-enabled full Manifest V2 support**, so you can install and run the original uBlock Origin at full power — the same extension Chrome no longer allows.
## How It Works
Install it from the Chrome Web Store: [uBlock Origin](https://chromewebstore.google.com/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm)
description: "Connect your own AI models to BrowserOS"
---
BrowserOS includes a default AI model you can use right away, but it has strict rate limits. For the best experience, bring your own API keys or run models locally.
See how to connect your own LLM in under a minute:
Already paying for ChatGPT Pro, GitHub Copilot, or Qwen Code? Connect your existing account to BrowserOS with a single sign-in — no API keys, no extra cost.
Sign in with your Qwen account. Access Qwen 3 Coder with a 1 million token context window.
</Card>
</CardGroup>
---
## Which Model Should I Use?
| Mode | What works | Recommendation |
|------|------------|----------------|
| **Chat Mode** | Any model, including local | Ollama or Gemini Flash |
| **Agent Mode** | Cloud models only | Claude Opus 4.5, GPT-5, or Kimi K2.5 (open source) |
<Warning>
**Local LLMs aren't powerful for most agentic tasks yet.** They're great for Chat — asking questions about a page, summarizing, etc. But agent tasks need strong reasoning to click the right elements and handle multi-step workflows. Use Claude Opus 4.5, GPT-5, or Kimi K2.5 for agents.
</Warning>
<Note>
Kimi K2.5 is an open-source, multimodal model with great agentic performance — and 60-70% cheaper than Claude models.
</Note>
---
## Kimi K2.5 — In Partnership with Moonshot AI
{/* <img src="/images/moonshot-partnership-banner.png" alt="BrowserOS x Moonshot AI" className="rounded-xl" /> */}
BrowserOS has partnered with [Moonshot AI](https://www.kimi.com) to bring **Kimi K2.5** as a first-class provider. Kimi K2.5 is now the **recommended model** in BrowserOS and is set as the default provider.
For a limited time, BrowserOS users get **extended usage limits** powered by Kimi K2.5. This means you can use the AI agent, chat, and other AI-powered features with increased limits at no cost.
<CardGroup cols={2}>
<Card title="Open Source" icon="code-branch">
Fully open-source model you can inspect and trust.
</Card>
<Card title="Multimodal" icon="image">
Supports images out of the box, including screenshots and visual context.
</Card>
<Card title="Great for Agents" icon="robot">
Strong reasoning for browser automation, form filling, and multi-step workflows.
</Card>
<Card title="Affordable" icon="piggy-bank">
Excellent agentic performance at a fraction of the cost of other frontier models.
</Card>
</CardGroup>
<div id="moonshot" />
### Why Kimi K2.5?
Kimi K2.5 offers excellent performance for agentic tasks at a fraction of the cost of other frontier models. It supports images, has a 128,000 token context window, and delivers strong results on browser automation tasks. Combined with BrowserOS's open-source agent framework, this makes for a powerful and affordable AI browsing experience.
### Bring Your Own Kimi API Key
You can also bring your own Kimi API key if you want to use Kimi K2.5 beyond the extended usage period, or if you want your own dedicated limits.
**Get your API key:**
1. Go to [platform.moonshot.ai](https://platform.moonshot.ai) and create an account
2. Navigate to the **API keys** section in your dashboard
3. Click **Create new API key** and copy the key
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the **Moonshot AI** card
3. Enter your API key (it will be encrypted and stored locally on your machine)
4. The model is pre-configured to `kimi-k2.5` with a 128,000 context window
5. Click **Save**
<Tip>
The base URL for the Kimi API (`https://api.moonshot.ai/v1`) is pre-filled automatically when you select the Moonshot AI provider template.
</Tip>
---
Connect to powerful AI models using your API keys. Your keys stay on your machine.
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Gemini card
3. Set **Model ID** to `gemini-2.5-flash` (or `gemini-2.5-pro`, `gemini-3-pro-preview`, `gemini-3-flash-preview`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `1000000`
6. Click **Save**
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Anthropic card
3. Set **Model ID** to `claude-opus-4-5-20251101` (or `claude-sonnet-4-5-20250929`, `claude-haiku-4-5-20251001`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
6. Click **Save**
<div id="openai" />
<Accordion title="OpenAI" icon="brain">
GPT-5 is OpenAI's most capable model for both chat and agent tasks.
**Get your API key:**
1. Go to [platform.openai.com](https://platform.openai.com)
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI card
3. Set **Model ID** to `gpt-5` (or `gpt-5.2`, `gpt-5-mini`, `gpt-4.1`, `o4-mini`)
4. Paste your API key
5. Check **Supports Images**, set **Context Window** to `200000`
**Get your API key:**
1. Go to [openrouter.ai](https://openrouter.ai) and sign up
2. Go to [openrouter.ai/keys](https://openrouter.ai/keys) and create a key
**Pick a model:**
Go to [openrouter.ai/models](https://openrouter.ai/models) and copy the model ID you want (e.g., `anthropic/claude-opus-4.5`, `google/gemini-2.5-flash`).
Use OpenAI models hosted on your own Azure subscription for enterprise compliance and data residency.
**Prerequisites:**
1. An Azure subscription with access to [Azure OpenAI Service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/OpenAI)
2. A deployed model (e.g., GPT-4o) in your Azure OpenAI resource
**Get your credentials:**
1. Go to [portal.azure.com](https://portal.azure.com) → **Azure OpenAI** resource
2. Navigate to **Keys and Endpoint**
3. Copy **Key 1** and your **Endpoint URL**
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the Azure card
3. Set **Base URL** to your Azure endpoint (e.g., `https://your-resource.openai.azure.com/openai/deployments/your-deployment`)
4. Set **Model ID** to your deployment name
5. Paste your API key
6. Check **Supports Images**, set **Context Window** to `128000`
7. Click **Save**
</Accordion>
<div id="bedrock" />
<Accordion title="AWS Bedrock" icon="aws">
Access Claude, Llama, and other models through your AWS account with IAM-based authentication.
**Prerequisites:**
1. An AWS account with [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html) enabled
2. Model access granted in the Bedrock console for your desired models
**Get your credentials:**
1. Go to the [AWS Console](https://console.aws.amazon.com) → **IAM**
2. Create or use an existing access key with Bedrock permissions
3. Note your **Access Key ID**, **Secret Access Key**, and **Region**
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the AWS Bedrock card
3. Set **Base URL** to your Bedrock endpoint (region-specific)
4. Set **Model ID** to the Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`)
5. Paste your credentials
6. Check **Supports Images**, set **Context Window** to `200000`
7. Click **Save**
</Accordion>
<div id="openai-compatible" />
<Accordion title="OpenAI Compatible" icon="plug">
Connect to any provider that implements the OpenAI-compatible API format (e.g., Together AI, Fireworks, Groq, Perplexity).
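If you are not sure whether a provider qualifies, check that it accepts the standard OpenAI chat-completions request shape. Here is a minimal sketch in TypeScript; the base URL, model ID, and key are placeholders (Groq is used purely as an example), so substitute whatever your provider gives you:

```typescript
// Quick sanity check that a provider speaks the OpenAI-compatible API.
// The base URL and model ID below are placeholders (Groq used as an example);
// use the values from your own provider.
const baseUrl = "https://api.groq.com/openai/v1";
const apiKey = process.env.PROVIDER_API_KEY ?? "";

const res = await fetch(`${baseUrl}/chat/completions`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama-3.3-70b-versatile", // the same Model ID you would enter in BrowserOS
    messages: [{ role: "user", content: "Say hello" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```

If that request returns a normal chat completion, the same base URL and model ID should work in the steps below.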
**Add to BrowserOS:**
1. Go to `chrome://browseros/settings`
2. Click **USE** on the OpenAI Compatible card
3. Set **Base URL** to the provider's API endpoint
4. Set **Model ID** to the model you want to use
5. Paste your API key
6. Set **Supports Images** and **Context Window** based on the model
7. Click **Save**
<Tip>
Most newer AI providers support the OpenAI-compatible API format. Check your provider's docs for the base URL and available model IDs.
description: "Use your ChatGPT subscription to power BrowserOS"
---
Connect your ChatGPT Pro or Plus subscription to BrowserOS and access GPT-5 Codex, GPT-5.4, and the full lineup of OpenAI's most advanced models — with up to 400K context. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, ChatGPT will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `gpt-5.2` | 200K |
| `gpt-5.1-codex` | 400K |
| `gpt-5.1-codex-max` | 400K |
| `gpt-5.1-codex-mini` | 400K |
| `gpt-5.1` | 200K |
<Info>
ChatGPT Pro subscribers have access to the full model lineup. ChatGPT Plus subscribers can access a subset of models depending on their plan. The available models will be shown automatically after you connect.
</Info>
<Tip>
The Codex models (e.g., `gpt-5.3-codex`) are optimized for code and reasoning tasks — ideal for complex browser automation workflows that involve form filling, data extraction, and multi-step navigation.
</Tip>
## Reasoning Settings
ChatGPT Pro includes additional settings for models that support reasoning:
- **Reasoning Effort** — Control how much the model "thinks" before responding. Options: none, low, medium, high.
- **Reasoning Summary** — Choose how reasoning is displayed. Options: auto, concise, detailed.
These settings are available in the provider configuration after connecting.
## Disconnecting
To disconnect your OpenAI account, go to **Settings**, find the ChatGPT Plus/Pro provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Link your apps to BrowserOS so the assistant can read data and take actions"
title: "Connect Apps"
description: "Connect 40+ apps to BrowserOS so the assistant can work with your email, calendar, projects, and more"
---
Connect your favorite apps to BrowserOS and let the assistant work across all of them. Read emails, check your calendar, create tasks, post messages, manage files, and more, all through natural conversation.
BrowserOS Connected Apps use the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/), an open standard for connecting AI assistants to external systems. Think of it as a single, consistent way to plug your apps into the assistant. You do not need to install anything or manage API keys. Just sign in once and the assistant handles the rest.
## Built-in Apps
- **Gmail** — Send, read, and search emails
- **Google Calendar** — Create events, check your schedule
- **Google Docs** — Create and edit documents
- **Google Sheets** — Create and edit spreadsheets
- **Google Drive** — Upload, download, and manage files
- **Slack** — Post messages, manage channels
- **Notion** — Create pages, manage databases
- **LinkedIn** — Post updates, manage connections
## Smart App Connection
When you ask the assistant to do something that needs an app you have not connected yet, it shows an interactive card right in the conversation. You can connect the app with one click or choose to skip it. No need to set things up in advance.
<Steps>
<Step title="You make a request">
Ask the assistant something like "What's on my calendar today?" or "Send an email to Sarah."
</Step>
<Step title="A connection card appears">
The assistant detects the app is not connected and shows a card explaining why connecting it would help. You get two choices: **Connect** or **Do it manually**.
</Step>
<Step title="You connect or skip">
- **Connect**: Opens a sign-in page. Authorize the app and the assistant continues with full integration access.
- **Do it manually**: The assistant skips the integration and navigates to the app's website directly using browser automation.
</Step>
<Step title="The assistant continues">
Once connected, the app stays linked for all future conversations. If you chose to skip, the assistant remembers and will not ask again.
</Step>
</Steps>
{/* <Frame caption="The assistant detects an unconnected app and shows a connection card">
<img src="/images/connect-apps-smart-connection.png" alt="Smart app connection prompt in chat" />
</Frame> */}
See [Smart Nudges](/features/smart-nudges#app-connection) for more details on how connection suggestions work.
You can also connect apps ahead of time from the sidebar if you prefer.
## Connect from the Sidebar
<Steps>
<Step title="Open Connect Apps">
Click **Connect Apps** in the sidebar.
</Step>
<Step title="Add an app">
Click **Add built-in app** and select the app you want
</Step>
<Step title="Sign in">
Complete the OAuth sign-in when prompted
</Step>
</Steps>
<Frame caption="Connected apps show a green 'Authenticated' badge">
- Check my calendar for tomorrow, then draft an email to John summarizing what we're meeting about
- Find all emails from last week about the budget and create a summary in Notion
- Look at my Slack DMs and add any action items to my Notion tasks
</Accordion>
</AccordionGroup>
## Cross-App Workflows
The real power of connected apps is combining them in a single request. The assistant can pull data from one app and use it in another without you switching between tabs.
<CardGroup cols={2}>
<Card title="Email to task" icon="envelope">
"Find action items in my latest emails and add them to my Notion tasks"
</Card>
<Card title="Meeting prep" icon="calendar">
"Check my calendar for tomorrow, then draft an email to John summarizing what we're meeting about"
</Card>
<Card title="Bug triage" icon="bug">
"Test the checkout flow on our staging site, file a Linear issue if anything is broken, and post a summary to #engineering on Slack"
</Card>
<Card title="Sales pipeline" icon="chart-line">
"Pull my open deals from Salesforce and create a summary spreadsheet in Google Sheets"
</Card>
<Card title="Content roundup" icon="newspaper">
"Check the latest pull requests on our main repo and post a daily summary to #dev-updates on Slack"
</Card>
<Card title="Expense tracking" icon="receipt">
"Find all receipts in my Gmail from this month and organize them in a Google Sheet"
</Card>
</CardGroup>
## Add a Custom MCP Server
You can connect any MCP-compatible server that exposes an SSE endpoint.
1. Go to **Settings > Connected Apps**
2. Click **Add custom app**
3. Enter your server URL (e.g., `http://localhost:8000/sse`) and give it a name
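If you want to experiment with building your own server, here is a minimal sketch of an SSE-based MCP server, assuming the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`) and Express. The `greet` tool, server name, and port 8000 are made up for illustration and match the example URL above:

```typescript
import express from "express";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

// Hypothetical server with a single "greet" tool, just to verify the connection.
const server = new McpServer({ name: "my-custom-server", version: "1.0.0" });
server.tool(
  "greet",
  { name: z.string().describe("Who to greet") },
  async ({ name }) => ({ content: [{ type: "text", text: `Hello, ${name}!` }] })
);

const app = express();
let transport: SSEServerTransport | undefined;

// BrowserOS connects to the SSE endpoint you enter in settings, e.g. http://localhost:8000/sse
app.get("/sse", async (_req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// The MCP client sends follow-up messages to this companion endpoint.
app.post("/messages", async (req, res) => {
  await transport?.handlePostMessage(req, res);
});

app.listen(8000);
```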
</Accordion>
</AccordionGroup>
## Privacy & Security
## Privacy and Security
<Columns cols={3}>
<Card title="Your data stays local" icon="lock">
BrowserOS connects directly to your accounts. Credentials are stored locally on your machine.
<Columns cols={2}>
<Card title="Secure OAuth" icon="shield-check">
All apps use OAuth sign-in. BrowserOS never sees or stores your passwords.
</Card>
<Card title="On-demand only" icon="clock">
Apps are only accessed when you ask. Nothing runs in the background.
</Card>
<Card title="You control access" icon="toggle-on">
Connect or disconnect apps anytime in Settings.
Connect or disconnect any app at anytime from Settings.
</Card>
<Card title="Secure OAuth" icon="shield-check">
Built-in apps use OAuth flows — BrowserOS never sees your passwords.
<Card title="Credentials stay local" icon="lock">
Your authentication tokens are managed securely and stored locally on your machine.
description: "Give the assistant controlled access to local files and commands"
title: "Cowork"
description: "Give the agent controlled access to local files and commands alongside browser automation"
---
Cowork lets you describe complex tasks and let the agent handle them end-to-end. It combines browser automation with local file operations: research on the web, then save reports directly to your folder. Read code, edit files, run shell commands, and search through your project, all in the same session as your browser tasks.
Here's what it looks like to give the agent access to your local files:
## Why Cowork?
Without Cowork, the agent can only interact with browser tabs. With Cowork enabled, it gains full access to a folder on your machine through 7 filesystem tools:
Read a file from the filesystem. Returns text content with line numbers, or image data for image files (PNG, JPG, GIF, WEBP, BMP, SVG, ICO). Supports pagination through large files with `offset` and `limit` parameters.
| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | string (required) | File path relative to working directory |
| `offset` | number (optional) | Starting line number (1-indexed) |
| `limit` | number (optional) | Max lines to read |
Responses are capped at 2000 lines or 50KB per request.
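For illustration, a paginated read might look roughly like this; the `read_file` tool name and call shape are hypothetical stand-ins for how the agent invokes the tool internally:

```typescript
// Hypothetical call shapes; the agent issues these internally.
const firstChunk = {
  tool: "read_file",
  params: { path: "reports/q3-summary.md" }, // returns up to 2000 lines or 50KB
};

const nextChunk = {
  tool: "read_file",
  params: { path: "reports/q3-summary.md", offset: 2001, limit: 2000 }, // continue past the cap
};
```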
Make a targeted edit by replacing an exact string match. If the exact match fails, a whitespace-tolerant fuzzy match is attempted. Preserves original line endings (CRLF, CR, LF) and BOM.
| Parameter | Type | Description |
|-----------|------|-------------|
| `path` | string (required) | File path relative to working directory |
| `old_string` | string (required) | Exact text to find |
| `new_string` | string (required) | Replacement text |
description: "Use your GitHub Copilot subscription to power BrowserOS"
---
Connect your GitHub Copilot subscription to BrowserOS and access 19+ models — including Claude, GPT-5, and Gemini — through a single GitHub sign-in. No API keys needed.
<Info>
**Free tier** includes GPT-5 Mini, Claude Haiku 4.5, GPT-4o, and GPT-4.1. **Copilot Pro** ($10/month) unlocks Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3 Pro, GPT-5.4, and more.
</Info>
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**2.** Click **USE** on the **GitHub Copilot** card. A device code will appear — copy it, then click the link to open GitHub's device authorization page.
**5.** Once authorized, GitHub Copilot will appear as a provider in your settings. Select a model and start using it.
## Available Models
### Free Tier
| Model | Context Window |
|-------|---------------|
| `gpt-5-mini` | 128K |
| `claude-haiku-4.5` | 128K |
| `gpt-4o` | 64K |
| `gpt-4.1` | 64K |
### Copilot Pro / Pro+
| Model | Context Window |
|-------|---------------|
| `claude-sonnet-4.6` | 200K |
| `claude-opus-4.6` | 200K |
| `gemini-2.5-pro` | 1M |
| `gemini-3-pro-preview` | 1M |
| `gpt-5.4` | 400K |
| `gpt-5.3-codex` | 400K |
| `gpt-5.2-codex` | 400K |
| `grok-code-fast-1` | 128K |
<Tip>
GitHub Copilot is the most versatile provider — one subscription gives you access to models from OpenAI, Anthropic, Google, and xAI. Great if you want to switch between models for different tasks.
</Tip>
## Disconnecting
To disconnect your GitHub account, go to **Settings**, find the GitHub Copilot provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Access ChatGPT, Claude, and Gemini from any webpage with one click"
---
BrowserOS puts AI chat at your fingertips. Open a chat panel on any webpage to ask questions with full page context, or compare responses across multiple LLMs side-by-side.
description: "Your assistant remembers what matters across every conversation"
---
The BrowserOS assistant has long-term memory. It remembers your name, your projects, the tools you use, and things that came up in past conversations. You do not need to repeat yourself. The assistant builds up knowledge about you over time and uses it to give better, more relevant answers.
## How Memory Works
Memory is automatic. As you chat, the assistant saves important facts and observations to local files on your machine. Before responding in future conversations, it searches these files to recall relevant context.
<CardGroup cols={2}>
<Card title="Remembers you" icon="user">
Your name, job, location, projects, and preferences are stored permanently and recalled whenever relevant.
Useful details from each conversation are saved as daily notes and kept for 30 days.
</Card>
<Card title="Searches before answering" icon="magnifying-glass">
The assistant proactively searches its memory before responding, so it can reference things you have mentioned before.
</Card>
<Card title="Stays on your machine" icon="hard-drive">
All memory files are plain Markdown stored locally. Memory is never uploaded to the cloud, even with Sync to Cloud enabled.
</Card>
</CardGroup>
## Two Types of Memory
BrowserOS uses a two-tier memory system to keep important facts separate from session notes.
### Core Memory
Core memory holds permanent facts about you. Things like your name, where you work, what projects you are working on, the tools and languages you use, and people you mention regularly. These facts persist forever and are never automatically deleted.
Core memory lives in a single file called `CORE.md`. When the assistant learns something new about you, it reads the existing core memory, merges the new fact in, and saves the updated file.
**Examples of what goes in core memory:**
- Your name and role
- Company and team
- Projects you are working on
- Tools, languages, and frameworks you use
- People you mention often
- Long-term preferences ("I prefer TypeScript over JavaScript")
### Daily Memory
Daily memory holds session notes, observations, and recent events. Each day gets its own file (e.g., `2026-03-07.md`), and entries are timestamped so the assistant can see when things happened.
Daily memories automatically expire after **30 days**. If something keeps coming up, the assistant promotes it to core memory so it is not lost.
**Examples of what goes in daily memory:**
- Tasks you worked on today
- Decisions made during a conversation
- Temporary context ("meeting with Sarah moved to Thursday")
- Research findings from a browsing session
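For illustration, a daily note might look something like this (the exact layout the assistant writes can vary):

```markdown
# 2026-03-07
- 09:12: Working through the Atlas launch checklist; still waiting on legal review.
- 14:30: Meeting with Sarah moved to Thursday.
- 16:05: Decided to go with Postgres instead of MongoDB.
```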
## Memory in Action
You do not need to tell the assistant to remember things. It picks up on important details naturally. But you can also be explicit:
Just mention something in conversation and the assistant decides whether to save it:
- "I'm working on a project called Atlas at Acme Corp" -> saved to core memory
- "We decided to go with Postgres instead of MongoDB" -> saved to daily memory
- "My name is Sarah" -> saved to core memory
</Accordion>
<Accordion title="Ask it to remember" icon="bookmark">
Be explicit when you want something remembered:
- "Remember that our staging URL is staging.example.com"
- "Save this: the design review happens every Tuesday at 2pm"
- "Remember that I prefer dark mode in all my tools"
</Accordion>
<Accordion title="Ask it to recall" icon="rotate-left">
The assistant searches memory automatically, but you can also ask directly:
- "What do you remember about the Atlas project?"
- "What did we discuss yesterday?"
- "Do you know my team members' names?"
</Accordion>
<Accordion title="Ask it to forget" icon="eraser">
You can ask the assistant to remove specific memories:
- "Forget my phone number"
- "Remove the note about the staging URL"
- "Clear what you know about Project X"
</Accordion>
</AccordionGroup>
## Where Memory Lives
All memory files are stored locally on your machine in the BrowserOS data folder:
| File | Path | Purpose |
|------|------|---------|
| **Core memory** | `~/.browseros/memory/CORE.md` | Permanent facts about you |
| **Daily notes** | `~/.browseros/memory/2026-03-07.md` | Session notes, auto-expire after 30 days |
## Memory vs SOUL.md
BrowserOS separates what the assistant **knows** from how it **behaves**. These are two different systems that work together.
<Columns cols={2}>
<Card title="Memory" icon="brain">
**Facts about you and the world.** Your name, projects, preferences, recent events. Stored in CORE.md and daily files.
</Card>
<Card title="SOUL.md" icon="heart">
**How the assistant acts.** Personality, tone, communication style, boundaries. Stored in a single SOUL.md file. See [SOUL.md](/features/soul) for details.
</Card>
</Columns>
When the assistant learns that you work at Acme Corp, that goes in memory. When it learns that you prefer bullet points over paragraphs, that goes in SOUL.md. This separation means the assistant can change its personality without losing knowledge about you, and vice versa.
## Privacy
<Columns cols={2}>
<Card title="Never leaves your machine" icon="lock">
Memory files live on your machine and are never uploaded to any server. Even with Sync to Cloud enabled, memory stays local.
</Card>
<Card title="You control what is remembered" icon="toggle-on">
Ask the assistant to forget anything at any time. You can also directly edit or delete the memory files.
</Card>
<Card title="Plain text files" icon="file-lines">
Memory is stored as readable Markdown. No hidden databases or encrypted blobs. You can inspect everything.
</Card>
<Card title="30-day auto-cleanup" icon="clock">
Daily notes are automatically deleted after 30 days. Only facts you have promoted to core memory persist.
description: "Use your Qwen Code account to power BrowserOS"
---
Connect your Qwen Code account to BrowserOS and access Alibaba's coding models with up to a **1 million token context window** — the largest of any provider we support. No API keys needed.
## Setup
**1.** Open BrowserOS and go to **Settings** (`chrome://browseros/settings`). You'll see the AI Providers section.
**4.** Once authorized, Qwen Code will appear as a provider in your settings. Select a model and start using it.
## Available Models
| Model | Context Window |
|-------|---------------|
| `coder-model` | 1M |
| `qwen3-coder-plus` | 1M |
| `qwen3-coder-flash` | 1M |
| `qwen3.5-plus` | 1M |
<Tip>
Qwen Code's 1 million token context window is ideal for tasks that involve long documents, entire documentation sites, or working across many browser tabs simultaneously — the agent can hold everything in context at once.
</Tip>
## Disconnecting
To disconnect your Qwen account, go to **Settings**, find the Qwen Code provider, and click **Disconnect**. Your OAuth tokens will be immediately deleted from your machine.
description: "Run the BrowserOS agent automatically on a schedule"
---
Scheduled Tasks let you run the BrowserOS agent automatically, whether it is daily, every few hours, or every few minutes. Write a prompt once, set a schedule, and let the agent handle it on autopilot.
Watch how to set up a scheduled task from scratch:
Runs once a day at a specific time you choose (e.g., every morning at 8:00 AM).
</Card>
<Card title="Hourly" icon="clock">
Runs every N hours (e.g., every 2 hours, every 6 hours). Set an interval from 1 to 24 hours.
</Card>
<Card title="Minutes" icon="stopwatch">
Runs every N minutes (e.g., every 15 minutes, every 30 minutes). Set an interval from 1 to 60 minutes.
</Card>
</CardGroup>
## Example Use Cases
<AccordionGroup>
<Accordion title="Morning briefing" icon="sun">
> Every morning at 8am, check my Google Calendar and send me a summary of today's events. For each meeting, do a quick Google search on the attendees and include their LinkedIn summary.
> Check my Google Calendar for tomorrow's meetings, then post a summary to my Slack channel, and create a Notion page with prep notes for each meeting.
</Accordion>
<Accordion title="LinkedIn automation" icon="linkedin">
> Every day, go to LinkedIn and accept up to 25 pending connection requests.
</Accordion>
<Accordion title="Price monitoring" icon="tag">
> Check the price of this Amazon item every hour. If it drops below $50, place the order.
</Accordion>
</AccordionGroup>
Your scheduled task prompts can be as complex as you want. If you have [connected apps](/features/connect-mcps) like Google Calendar, Slack, Notion, or Gmail, your scheduled tasks can work across all of them.
## Viewing Results
When a scheduled task runs, you can see the results in two places:
- **New Tab page**: Results show up right on your new tab
- **Scheduled Tasks page**: View the full run history for each task
- **Test** a task manually without waiting for the next scheduled run
- **Retry** a failed task
- **Cancel** a task that is currently running
## How It Works
Scheduled tasks run in a background window, so they don't interrupt whatever you're working on. You won't even notice them running.
<Steps>
<Step title="Task triggers on schedule">
BrowserOS uses your browser's built-in alarm system to trigger tasks at the right time. If your laptop was closed at the scheduled time, the task runs as soon as you open BrowserOS again.
</Step>
<Step title="Background window opens">
A hidden browser window opens automatically. The task runs there so it never interrupts whatever you are working on. You will not see anything happen on screen.
</Step>
<Step title="Agent executes your prompt">
The agent runs your prompt with full access to browser automation and any connected apps. It can navigate pages, fill forms, extract data, and interact with your services.
</Step>
<Step title="Results are saved">
When the task finishes, the result is saved and appears on your New Tab page and in the task's run history. The hidden window closes automatically.
</Step>
</Steps>
<Note>
BrowserOS needs to be open for scheduled tasks to run. Tasks have a 10-minute timeout. If a task takes longer than that, it will be marked as failed and you can retry it.
</Note>
## Cloud Sync
If you are signed in, your scheduled task configurations sync across devices. Create a task on your laptop and it appears on your desktop. Edits sync both ways, and conflicts are resolved automatically using timestamps.
Only the schedule setup syncs (name, prompt, schedule type, and timing). Task run results and output stay on the device where the task ran.
See [Sync to Cloud](/features/sync-to-cloud) for more details.
## Privacy
<Columns cols={2}>
<Card title="Runs locally" icon="house-laptop">
All tasks run on your machine in a hidden browser window. Nothing is sent to external servers.
</Card>
<Card title="Full control" icon="toggle-on">
Enable, disable, edit, or delete any task at any time. You decide what runs and when.
description: "Teach your BrowserOS agent new abilities with reusable, custom instructions"
---
Skills let you teach the BrowserOS agent how to handle specific tasks. Each skill is a set of instructions written in plain Markdown that the agent loads when it recognizes a matching task. Think of skills as recipes: you write the steps once, and the agent follows them whenever that type of task comes up.
BrowserOS implements the open [Agent Skills specification](https://agentskills.io/specification), so skills you create are portable across any AI agent that supports the standard.
## How Skills Work
<Steps>
<Step title="You create a skill">
Give it a name, a short description of when to use it, and write the instructions in Markdown.
</Step>
<Step title="The agent sees the skill catalog">
When a conversation starts, the agent loads a list of all your enabled skills with their names and descriptions.
</Step>
<Step title="The agent matches a task">
When your request matches a skill's description, the agent loads that skill's full instructions and follows them.
</Step>
</Steps>
## Creating a Skill
<Steps>
<Step title="Open Skills settings">
Click **Skills** in the sidebar.
</Step>
<Step title="Click New Skill">
Click the **New Skill** button to open the creation form.
</Step>
<Step title="Fill in the details">
- **Name**: A short, descriptive name (e.g., "Morning Status Report")
- **Description**: Tell the agent when to use this skill. Be specific. For example: "When the user wants to read status updates from work across Notion, Linear, and Slack"
- **Content**: Write your instructions in Markdown. Include step-by-step directions, examples, and edge cases.
</Step>
<Step title="Save and enable">
Click **Create**. The skill is enabled by default and will be available to the agent immediately.
</Step>
</Steps>
<Tip>
Write your description like a trigger. The agent uses it to decide whether to activate the skill. A good description says both **what** the skill does and **when** to use it.
</Tip>
## Example Skills
<AccordionGroup>
<Accordion title="Morning status report">
**Description:** When the user wants to read status updates from work
**Instructions:**
```markdown
Always look for updates in 3 sources:
1. **Notion** - Check the team updates page for any new entries from today
2. **Linear** - Look at issues assigned to the user that were updated in the last 24 hours
3. **Slack** - Check the #team-updates and #engineering channels for unread messages
Summarize everything in a single report grouped by source.
If a source has no updates, say so.
```
</Accordion>
<Accordion title="PDF processing">
**Description:** Extract text and tables from PDF files, fill PDF forms, and merge multiple PDFs. Use when the user mentions PDFs, forms, or document extraction.
**Instructions:**
```markdown
When extracting text from a PDF:
1. Download or open the PDF in the browser
2. Use the page content tool to extract visible text
3. Preserve table structure using Markdown tables
4. If the PDF has multiple pages, process each page
When filling a PDF form:
- Ask the user for the values if not provided
- Fill each field carefully and confirm before submitting
See references/FORMS.md for common form templates.
```
</Accordion>
<Accordion title="Code review checklist">
**Description:** When the user asks to review code, a pull request, or wants feedback on code quality
**Instructions:**
```markdown
Follow this checklist for every code review:
1. Check for security issues (XSS, injection, hardcoded secrets)
2. Look for performance problems (N+1 queries, unnecessary re-renders)
3. Verify error handling is present and meaningful
4. Check that naming is clear and consistent
5. Look for missing tests for new logic
Format your review as a list of findings with severity: Critical, Warning, or Suggestion.
Always start with what the code does well.
```
</Accordion>
</AccordionGroup>
## Managing Skills
From the Skills page, you can:
- **Enable or disable** a skill using the toggle switch. Disabled skills are not loaded by the agent.
- **Edit** a skill's name, description, or instructions by clicking the edit icon.
- **Delete** a skill by clicking the trash icon. This removes the skill permanently.
## Skill File Format
Under the hood, each skill is stored as a `SKILL.md` file following the [Agent Skills specification](https://agentskills.io/specification):
```markdown
---
name: morning-status-report
description: When the user wants to read status updates from work
metadata:
display-name: Morning Status Report
enabled: "true"
---
Always look for updates in 3 sources:
1. Notion - Check the team updates page
2. Linear - Look at assigned issues updated in the last 24 hours
3. Slack - Check #team-updates and #engineering channels
Summarize everything in a single report grouped by source.
```
The file uses YAML frontmatter for metadata and Markdown for the instructions.
Move detailed references to separate files. The agent loads them only when needed, saving context space.
</Card>
</CardGroup>
<Note>
Skills follow the open [Agent Skills specification](https://agentskills.io/specification). Skills you create in BrowserOS work with any agent that supports the standard.
description: "BrowserOS suggests app connections and task scheduling at the right moment"
---
Smart Nudges are context-aware suggestions that appear as interactive cards during a conversation. The agent detects opportunities to connect an app or schedule a task, and shows you a card at the right moment. You decide whether to act on it or skip it.
There are two types of nudges: **App Connection** and **Schedule Suggestion**.
## App Connection
When you ask the agent to do something that involves an external app (like sending an email or checking your calendar), it checks whether that app is connected. If it is not, the agent shows a connection card before starting the task.
<Steps>
<Step title="You make a request">
For example: "Send Sarah an email with the meeting notes."
</Step>
<Step title="The agent detects an unconnected app">
Gmail is not connected yet, so the agent cannot send emails through the integration.
</Step>
<Step title="A connection card appears">
The card explains why connecting the app would help and gives you two choices: **Connect** or **Do it manually**.
</Step>
<Step title="You choose">
- **Connect**: Opens a sign-in page for the app. Once you authorize, the agent continues with full integration access.
- **Do it manually**: The agent skips the integration and uses browser automation instead (navigates to the website directly).
</Step>
</Steps>
### What happens after you choose
<CardGroup cols={2}>
<Card title="Connected" icon="circle-check">
The app is added to your connected list. The agent uses the integration for this and all future conversations. You can manage connected apps in [Connect Apps](/features/connect-mcps).
</Card>
<Card title="Declined" icon="forward">
The agent remembers your choice and will not ask about this app again. It uses browser automation to complete the task instead.
</Card>
</CardGroup>
<Tip>
If you declined an app but change your mind later, you can connect it anytime from the [Connect Apps](/features/connect-mcps) settings page.
</Tip>
### Supported apps
The agent can suggest connections for all 40+ built-in integrations, including Gmail, Google Calendar, Slack, Notion, GitHub, Linear, Jira, Figma, Salesforce, and many more. See [Connect Apps](/features/connect-mcps) for the full list.
## Schedule Suggestion
After the agent completes a task that could run on a recurring schedule, it shows a scheduling card. This helps you turn one-time tasks into automated routines without leaving the conversation.
<Steps>
<Step title="The agent completes a task">
For example: "Here are the top 5 tech headlines from today."
</Step>
<Step title="The agent recognizes a schedulable task">
News gathering, price monitoring, report building, data tracking, and similar tasks that do not need your real-time input are good candidates.
</Step>
<Step title="A scheduling card appears">
The card suggests a name and schedule. For example: "Run this automatically? 'Morning News Briefing' - daily at 09:00."
</Step>
<Step title="You choose">
- **Schedule this task**: Opens the Scheduled Tasks page with the details pre-filled. Review and confirm to create the task.
- **Maybe later**: Dismisses the card. You can always create the scheduled task manually later.
</Step>
</Steps>
### You can also ask directly
You do not have to wait for the agent to suggest it. Just tell the agent you want to schedule the task:
description: "Give your AI assistant a personality that grows with you"
---
Every time you start a new conversation, the BrowserOS assistant reads a file called `SOUL.md`. This file defines who the assistant is: how it talks, what it prioritizes, and how it behaves. Over time, it evolves based on your interactions, making the assistant feel less like a tool and more like _your_ assistant.
## What is SOUL.md?
SOUL.md is a plain text file that lives on your machine. It contains your assistant's personality, tone, communication style, rules, and boundaries.
Think of it as a personal guide the assistant reads before every conversation. It shapes how the assistant responds to you, not what it knows. Facts about you (your name, projects, preferences) are stored separately in [memory](#soul-vs-memory).
<Tip>
The SOUL.md concept was pioneered by [OpenClaw](https://openclaw.ai/) and inspired by [soul.md](https://soul.md/), which explore the idea of giving AI systems a persistent identity through written documents. BrowserOS builds on this concept with a file that the assistant can read and rewrite on its own.
</Tip>
## How It Works
When you first use BrowserOS, the assistant starts with a simple default personality:
> _Be genuinely helpful. Have opinions when asked. Be resourceful before asking. Earn trust through competence._
As you chat, the assistant picks up on how you like to communicate. If you prefer direct answers, it notices. If you set a boundary ("never send emails without asking me first"), it writes that into SOUL.md. Over time, the file becomes a reflection of how you and your assistant work together.
<Steps>
<Step title="First conversation">
The assistant starts with a default template. It watches for cues about your preferred style, tone, and boundaries.
</Step>
<Step title="The assistant learns your style">
Based on your interactions, the assistant rewrites SOUL.md to reflect your preferences. It will briefly tell you when it makes a change.
</Step>
<Step title="Every future conversation">
The assistant reads the updated SOUL.md before responding, so your preferences carry over across sessions.
</Step>
</Steps>
You do not need to write or edit SOUL.md yourself. The assistant handles it. But you can always view it or ask the assistant to change it.
## Viewing Your SOUL.md
Open **Agent Soul** from the sidebar to see what your assistant's personality file looks like right now. The page shows the current contents of SOUL.md in a read-only viewer.
{/* <Frame caption="View your assistant's personality in Settings">
You do not need to edit the file directly. Just talk to your assistant. Here are some ways to shape its personality:
<CardGroup cols={2}>
<Card title="Set the tone" icon="comment">
"Be more casual and direct. Skip the formalities."
</Card>
<Card title="Add a boundary" icon="shield">
"Never post to Slack or send emails without confirming with me first."
</Card>
<Card title="Change the personality" icon="masks-theater">
"Be more opinionated. If you think my approach is wrong, say so."
</Card>
<Card title="Start fresh" icon="rotate">
"Reset your personality to the default."
</Card>
</CardGroup>
The assistant will update SOUL.md based on your instructions and let you know what changed.
## Where SOUL.md Lives
SOUL.md is stored locally on your machine, inside the BrowserOS data folder:
| Operating System | Path |
|-----------------|------|
| **macOS** | `~/.browseros/SOUL.md` |
| **Windows** | `%APPDATA%/.browseros/SOUL.md` |
| **Linux** | `~/.browseros/SOUL.md` |
The file is plain Markdown, limited to 150 lines. You can open it in any text editor if you want to make manual edits, though we recommend letting the assistant manage it through conversation.
## SOUL vs Memory
BrowserOS keeps personality and knowledge separate on purpose.
<Columns cols={2}>
<Card title="SOUL.md" icon="heart">
**How the assistant behaves.** Personality, tone, communication style, rules, and boundaries. One file, updated by rewriting the whole thing.
</Card>
<Card title="Memory" icon="brain">
**What the assistant knows about you.** Your name, projects, tools, preferences, and recent events. Stored as core facts and daily notes.
</Card>
</Columns>
When the assistant learns that you prefer bullet points over paragraphs, that goes in SOUL.md. When it learns that you work at Acme Corp on a project called Atlas, that goes in memory.
This separation means the assistant can have a consistent personality even when its factual knowledge changes, and vice versa.
## Example SOUL.md
Here is what an evolved SOUL.md might look like after a few conversations:
```markdown
# SOUL.md
## Personality
- Direct and concise. No filler phrases.
- Have opinions and share them when relevant.
- Use humor sparingly but naturally.
## Communication Style
- Default to bullet points for lists and options.
- Keep status updates to one or two lines.
- When explaining something technical, use analogies.
## Boundaries
- Never send emails or post messages without explicit confirmation.
- Do not make purchases or financial transactions.
- Ask before modifying any file outside the current project.
## Preferences
- When researching, prioritize primary sources over summaries.
- For code tasks, prefer simple solutions over clever ones.
- Always explain trade-offs when suggesting approaches.
```
Your SOUL.md will look different because it is shaped by your conversations. No two are the same.
description: "Sign in to sync your conversations, settings, and automations across all your devices"
---
Sign in to BrowserOS and your data follows you everywhere. Your conversations, AI model settings, and scheduled tasks sync automatically to the cloud so you never lose your setup.
## Why Sign In?
Without an account, everything stays on one device. Sign in and your data is backed up and available wherever you use BrowserOS.
Open BrowserOS on a new device and your conversations, model settings, and scheduled tasks are already there.
</Card>
<Card title="Never lose your history" icon="clock-rotate-left">
Chat history is saved to the cloud automatically. Clear your browser data or switch machines and everything is still available.
</Card>
<Card title="Settings follow you" icon="sliders">
Set up your AI models once. Your provider configurations sync across devices so you never re-enter the same setup twice.
</Card>
<Card title="Automations stay in sync" icon="arrows-rotate">
Create a scheduled task on your laptop and it appears on your desktop. Edits sync both ways.
</Card>
</CardGroup>
## How to Sign In
<Steps>
<Step title="Open a new tab">
Open a new tab in BrowserOS to see the home page.
</Step>
<Step title="Click Sign In">
Click **Sign In** in the sidebar to open the login page.
</Step>
<Step title="Choose your sign-in method">
Enter your email for a magic link, or sign in with Google.
</Step>
<Step title="Verify and you're in">
Click the link in your email (or complete Google sign-in). BrowserOS starts syncing your data immediately.
</Step>
</Steps>
<Tip>
Magic link sign-in means you never need to create or remember a password. Just enter your email and click the link.
</Tip>
## What Gets Synced
<AccordionGroup>
<Accordion title="Conversations" icon="messages">
Your full chat history syncs to the cloud as you go. Every message is saved in real time so you can pick up any conversation on another device. Locally, BrowserOS keeps your 50 most recent conversations. In the cloud, there is no limit.
</Accordion>
<Accordion title="AI model settings" icon="microchip">
Your configured LLM providers (OpenAI, Anthropic, Google, Moonshot, Azure, Bedrock, and others) sync across devices. This includes the model name, provider type, base URL, temperature, and context window settings.
**Your API keys are never synced.** Sensitive credentials like API keys, access keys, and session tokens stay on the device where you entered them. You will need to re-enter API keys on each new device.
Your scheduled task configurations sync in both directions. Create a task on one device, edit it on another, and changes are merged automatically using timestamps to resolve conflicts. Only the schedule setup syncs (name, prompt, schedule type, and timing). Task run results and output stay on the device where the task ran.
</Accordion>
<Accordion title="Profile" icon="user">
Your name, profile picture, and account preferences sync across devices. Information you provide during onboarding (role, company) is also saved to your profile.
</Accordion>
</AccordionGroup>
## What Stays Local
Some settings are device-specific and do not sync to the cloud:
- **API keys and secrets** for LLM providers
- **Memory** (core facts and daily notes)
- **SOUL.md** (assistant personality)
- **Theme** (light/dark mode)
- **Workspace folder** selection
- **Connected MCP servers**
- **Workflows**
- **Scheduled task results** (run output stays on the device where the task ran)
This is intentional. Sensitive credentials never leave your device, memory and personality files stay private, and display preferences can differ between machines.
## How Sync Works
BrowserOS uses a local-first approach. Your data is always saved on your device first, then synced to the cloud in the background.
<Steps>
<Step title="Local save">
Every action (sending a message, adding a provider, creating a task) is saved locally first. BrowserOS works fully offline.
</Step>
<Step title="Background sync">
When you are signed in, changes are automatically pushed to the cloud. New chat messages sync in real time. Provider and task changes sync whenever they are updated.
</Step>
<Step title="Restore on new devices">
When you sign in on a new device, BrowserOS pulls your conversations, model settings, scheduled tasks, and profile from the cloud and merges them with any local data.
</Step>
</Steps>
<Note>
If the same scheduled task is edited on two devices before they sync, BrowserOS keeps the version with the most recent timestamp.
</Note>
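Conceptually, this is a last-write-wins merge keyed on the task's identity. The sketch below is only an illustration of that rule, assuming hypothetical `id` and `updatedAt` fields rather than BrowserOS's real data model.

```ts
// Illustrative sketch of timestamp-based (last-write-wins) merging.
// Not BrowserOS source code: the type and field names are assumptions.
interface ScheduledTask {
  id: string;
  name: string;
  prompt: string;
  updatedAt: number; // epoch milliseconds of the last edit
}

function mergeTasks(local: ScheduledTask[], remote: ScheduledTask[]): ScheduledTask[] {
  const merged = new Map<string, ScheduledTask>();
  for (const task of [...local, ...remote]) {
    const existing = merged.get(task.id);
    // Keep whichever copy was edited most recently.
    if (!existing || task.updatedAt > existing.updatedAt) {
      merged.set(task.id, task);
    }
  }
  return [...merged.values()];
}
```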
## Security
<Columns cols={2}>
<Card title="API keys never leave your device" icon="key">
Sensitive credentials like API keys, access keys, and tokens are excluded from cloud sync entirely.
</Card>
</Columns>
---
title: "MCP Clients (Claude Code, OpenClaw)"
description: "Control your browser and 40+ apps from Claude Code, OpenClaw, Gemini CLI, or any MCP client"
---
BrowserOS is the best browser for AI coding agents. It comes with a built-in MCP server that gives your AI agent **full browser control** and **direct access to 40+ external services** — Gmail, Slack, GitHub, Google Calendar, Linear, Notion, and more — all through a single MCP connection.
<Note>
Unlike Chrome DevTools MCP which requires setting up debug profiles and running separate servers, BrowserOS MCP works out of the box. Just copy the URL from settings and connect.
</Note>
## Why Use BrowserOS with Claude Code?
<CardGroup cols={2}>
<Card title="Agentic Coding" icon="code">
Claude tests your web app, reads console errors, and fixes the code — all in one loop.
</Card>
<Card title="40+ App Integrations" icon="grid-2">
Gmail, Slack, GitHub, Jira, Notion, Google Sheets, and more — accessible directly from your AI agent.
</Card>
<Card title="Data Extraction" icon="download">
Extract your LinkedIn profile, tweets, or any authenticated page content.
</Card>
<Card title="Task Automation" icon="repeat">
Fill forms, navigate multi-step workflows, and automate repetitive browser tasks.
</Card>
<Card title="53+ MCP Tools" icon="wrench">
Full browser control: tabs, navigation, clicks, typing, screenshots, bookmarks, history, tab groups, and window management.
</Card>
<Card title="Zero Config Auth" icon="lock">
Connect external services via OAuth — credentials are managed securely, never stored in BrowserOS.
</Card>
</CardGroup>
<Tip>
Wondering how BrowserOS MCP compares to Chrome DevTools MCP or other browser automation tools? See our [detailed feature comparison](/comparisons/chrome-devtools-mcp) covering 53 browser tools, 40+ app integrations, and why BrowserOS MCP gives developers more out of the box.
</Tip>
## Getting Started
| `search_history` | Search browser history by text query |
| `get_recent_history` | Get the most recent history items |
| `delete_history_url` | Delete a specific URL from history |
| `delete_history_range` | Delete history within a time range |
</Accordion>
</AccordionGroup>
---
## 40+ External App Integrations
BrowserOS connects your AI agent directly to the tools you already use — no separate MCP servers to install or configure. Everything is accessible through the same BrowserOS MCP connection.
### How It Works
<Steps>
<Step title="Agent calls an external service tool">
Your AI agent calls a tool like `gmail_search_messages` through the BrowserOS MCP.
</Step>
<Step title="OAuth login (first time only)">
If this is your first time using that service, BrowserOS opens an OAuth login page in the browser. Log in and authorize access.
</Step>
<Step title="Tool executes and returns results">
Once authenticated, the tool runs and returns results to your agent. Future calls to the same service work automatically — no re-authentication needed.
</Step>
</Steps>
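Claude Code and other MCP clients drive this flow for you, but the same calls can be made from any MCP SDK. The sketch below uses the TypeScript MCP SDK (`@modelcontextprotocol/sdk`); the `BROWSEROS_MCP_URL` variable, the fallback URL, and the search query are placeholders for illustration (copy the real MCP URL from your BrowserOS settings), and the tool name matches the example in the steps above.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Placeholder URL: use the MCP URL shown in your BrowserOS settings.
  const url = new URL(process.env.BROWSEROS_MCP_URL ?? "http://127.0.0.1:8080/mcp");

  const client = new Client({ name: "example-agent", version: "0.1.0" });
  await client.connect(new StreamableHTTPClientTransport(url));

  // The first call to a Gmail tool kicks off the OAuth login in the browser;
  // later calls reuse the authorized session automatically.
  const result = await client.callTool({
    name: "gmail_search_messages",
    arguments: { query: "from:alerts@example.com newer_than:7d" }, // illustrative arguments
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```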
<Note>
Your credentials are managed securely via OAuth and are **never stored in BrowserOS**. Tokens are refreshed transparently, and you can revoke access at any time from the service provider.
</Note>
---
description: "Move your tabs to the side for a cleaner, more organized browsing experience"
---
BrowserOS supports vertical tabs — a side panel that lists all your open tabs along the left edge of the browser window. Instead of shrinking tab titles into a cramped horizontal strip, vertical tabs give each tab its own full-width row so you can read titles at a glance, even with dozens of tabs open.
## Why Vertical Tabs?
Modern screens are wide, not tall. A horizontal tab bar wastes vertical space you could use for content, and tabs quickly become unreadable as they shrink. Vertical tabs solve both problems:
<CardGroup cols={2}>
<Card title="Read every tab title" icon="text">
Tabs stack vertically with full-width labels, so you always know what is open — no squinting at favicons.
</Card>
<Card title="Handle many tabs" icon="layer-group">
Open 30, 50, or 100 tabs without the strip becoming unusable. The side panel scrolls naturally.
</Card>
<Card title="Reclaim vertical space">
The horizontal tab bar disappears, giving web pages more room on widescreen monitors.
</Card>
<Card title="Stay organized" icon="folder-tree">
Combine vertical tabs with tab groups to visually separate work, research, and personal browsing.
</Card>
</CardGroup>
## Enabling Vertical Tabs
Toggle vertical tabs on or off from the Customization settings page.
<Steps>
<Step title="Open Settings">
Go to `chrome://browseros/settings` in the address bar.
</Step>
<Step title="Go to Customization">
In the left sidebar, select **Customization**.
</Step>
<Step title="Toggle Use Vertical Tabs">
Flip the **Use Vertical Tabs** switch to on. The browser immediately moves your tabs to a side panel.
</Step>
</Steps>
<Frame caption="Enable vertical tabs in Settings > Customization">
<img src="/images/features--vertical-tabs-setting.png" alt="Vertical tabs toggle in BrowserOS Customization settings" />
</Frame>
To switch back, return to the same setting and turn the toggle off. Your tabs move back to the horizontal strip instantly.
## How It Works
When vertical tabs are enabled, the tab strip relocates from the top of the window to a collapsible side panel on the left. Each tab is displayed as a row showing the page favicon and full title.
- **Click** a tab row to switch to it.
- **Right-click** a tab for the standard context menu (pin, mute, close, move to group).
- **Drag** tabs up or down to reorder them, or drag them into and out of tab groups.
- The panel can be **collapsed** to show only favicons, freeing up even more horizontal space.
## Vertical Tabs + Tab Groups
Vertical tabs pair naturally with [tab groups](/features/workflows). Groups appear as collapsible sections in the side panel, making it easy to keep projects separate and fold away tabs you are not actively using.
---
Workflows let you turn complex browser tasks into reliable, reusable automations. Instead of hoping the agent figures out the right steps each time, you define the exact sequence—and run it whenever you need.